
Model question paper answers

1.

Validation and verification are two important steps in the process of creating a
product, particularly in software development. They each refer to a different
kind of testing intended to ensure that the product is being developed correctly
and that the final product functions as it should.

1. Verification: Verification refers to the process of checking that the product has
been designed to meet specified requirements. It answers the question "Are we
building the product correctly?" Verification activities are typically associated
with reviews, walkthroughs, and inspections of software or systems. This can
also include things like desk checking, document review, code inspections, and
the creation and execution of test plans. The goal is to ensure that the system (or
subsystem) has been designed correctly, and that it accurately reflects the
specifications that were provided.
2. Validation: Validation is the process of evaluating a system or component
during or at the end of the development process to determine whether it satisfies
the specified requirements. It answers the question, "Are we building the right
product?" In other words, validation is about making sure the system serves its
intended purpose and meets the user's needs. Validation activities can involve
actual testing and demonstration. This can involve unit testing, integration
testing, system testing, and user acceptance testing. The goal is to confirm that
the outputs of the system are as expected.

To summarize, verification is about ensuring the product is designed correctly according to the specified requirements, while validation is about ensuring that the final product actually meets the users' needs. Both are important aspects of quality assurance in any product development lifecycle.

2. "Fault", "error", and "bug" are all terms used in software engineering to refer to problems that can occur in a software system. Here's a description of each:

1. Fault (also referred to as a defect): This term is often used to describe an incorrect step, process, or data definition in a computer program. In other words, it's a part of the code that's incorrect; it's a discrepancy between the actual code and the intended code. If the fault is executed, the system may behave unexpectedly or crash. This term is also sometimes used in hardware to indicate a physical problem or malfunction.
2. Error: This term generally refers to a human action or decision that produces
an incorrect or unexpected result. In software development, it might be a
mistake made by a programmer that leads to a fault in the software. In the
operational phase, it's often used to refer to a mistake made by the user, for
example, if they input data in a format the software wasn't designed to handle.
3. Bug: This is a term popularly used to refer to any issue or problem in a
computer program that leads to an incorrect or unexpected result. Bugs can be
caused by both faults in the code and errors made by users. The term is derived
from a time when computers were large, room-sized machines and actual bugs
(insects) could short-circuit the hardware components causing faults.

In practice, these terms are often used somewhat interchangeably, but they do
have distinct meanings in more technical or precise discussions. It's also
important to note that the definitions can vary somewhat depending on the
specific context or development methodology being used.
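The distinction can be made concrete with a small hypothetical example: a programmer's error (typing the wrong operator) introduces a fault into the code, and the fault only produces a visible failure when it is executed with inputs that expose it.

```python
def add(a, b):
    return a - b  # fault: the programmer's error was typing "-" instead of "+"

def add_fixed(a, b):
    return a + b  # corrected code

# The fault does not always cause a failure; it depends on the inputs:
print(add(2, 0))        # 2  -- fault executed, but the output happens to be correct
print(add(2, 3))        # -1 -- failure: expected 5
print(add_fixed(2, 3))  # 5
```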

3. Mutation testing is a form of software testing in which certain statements in the source code are changed (mutated) to check whether the test cases are able to find the errors. The following terms are central to it:

1. Ground String: In mutation testing, the ground string is the original, unmutated program (more formally, a valid string in the grammar of the language under consideration). It serves as the baseline against which the mutated versions ("mutants") are compared.
2. Mutation Score: This is a measure used in mutation testing to indicate the
percentage of mutants that were killed (or detected) by the test suite. It is
calculated by the formula:
(Number of killed mutants / Total number of mutants) * 100
A higher mutation score indicates a more effective test suite, because it means
that a larger percentage of the introduced defects (mutations) were detected.
3. Mutants: In the context of mutation testing, a "mutant" is a version of the
program that has been altered in some small way -- typically, a single character
or line of code is changed. The purpose of creating mutants is to assess the
effectiveness of the test suite. If the tests still pass after the program has been
mutated (meaning that the tests didn't "kill" the mutant), this suggests that the
test suite may not be thorough enough.

4.

In unit testing, test drivers and test stubs are used to isolate the module being tested from other parts of the software system. This allows the tester to focus on the specific functionality of the module without worrying about interactions with other modules.

1. Test Driver: A test driver is a piece of code that sets up and calls the module under test, passing it appropriate parameters, and then examines the output to determine whether it's correct or not. The test driver simulates the calling module's functionality when the calling module is absent or incomplete.
2. Test Stub: A test stub is a piece of code that simulates the activities of a missing or incomplete module that the module under test calls during execution. The stub receives input from the module being tested and returns something appropriate, allowing the tester to focus on the module under test and not the behavior of other modules or subsystems.
+-----------------------------+
|         Test Driver         |
+--------------+--------------+
               |  calls
               v
+-----------------------------+
|      Module Under Test      |
+--------------+--------------+
               |  calls
               v
+-----------------------------+
|          Test Stub          |
+-----------------------------+
• The Test Driver calls the Module Under Test.
• The Module Under Test then calls the Test Stub.
• The Test Stub returns a value to the Module Under Test.
• The Module Under Test returns a value to the Test Driver.

Note that in a real-world testing scenario, the actual interactions can be more
complex, and may involve multiple modules, drivers, and stubs.
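The arrangement above can be sketched in code. In this hypothetical example, the module under test computes a discounted price and normally calls a pricing service that is not yet available; a stub stands in for the service, and a driver invokes the module and checks its output (all names here are invented for illustration).

```python
def pricing_service_stub(item_id):
    # Test stub: returns a canned price instead of querying the real service.
    return 100.0

def compute_discounted_price(item_id, rate, get_price=pricing_service_stub):
    # Module under test: calls its (stubbed) collaborator to obtain the price.
    return get_price(item_id) * (1 - rate)

def test_driver():
    # Test driver: sets up inputs, invokes the module, examines the output.
    result = compute_discounted_price("SKU-1", 0.2)
    assert result == 80.0, f"expected 80.0, got {result}"
    return "PASS"

print(test_driver())  # PASS
```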

In software testing, control flow graphs are used to represent the paths that
might be taken through a program during execution. Different types of coverage
metrics are used to assess the thoroughness of a set of test cases.
1. Node Coverage (Statement Coverage): Node coverage aims at executing
every node (or statement) in the control flow graph at least once. Each statement
corresponds to a node in the graph. It's the most basic type of coverage and
doesn't account for control flow.
For example, consider a simple program:
1. If A then
2. B
3. Else
4. C
5. Endif
6. D

For full node coverage, we need to execute statements A, B, C, and D at least once.
2. Edge Coverage (Branch Coverage): Edge coverage aims at executing every
edge (or transition) in the control flow graph at least once. An edge represents
the flow of control from one statement to another and typically corresponds to
branches in the code.
For the example given above, we would need to follow both possible branches
of the "if" statement (A -> B and A -> C) and also the transitions from B -> D
and C -> D.
3. Prime Path Coverage: A prime path is a simple path (one with no repeated nodes, except possibly the first and last) that is not a proper subpath of any other simple path. Prime path coverage aims at executing all such paths in the control flow graph. It's more thorough than node or edge coverage because it considers sequences of transitions.
In the example above, the prime paths might include [A, B, D] and [A, C, D],
depending on the exact nature of the statements and the control flow between
them.

Remember that while higher coverage can increase confidence in the quality of
a software product, no coverage metric can guarantee the absence of defects.
Other types of testing and quality assurance activities are also necessary.
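The example program above can be made executable to show which test inputs achieve each coverage level (the predicate A and statements B, C, D are rendered here as hypothetical trace entries).

```python
trace = []

def program(a):
    trace.append("A")         # node A: the decision
    if a:
        trace.append("B")     # node B: then-branch
    else:
        trace.append("C")     # node C: else-branch
    trace.append("D")         # node D: after the if

# Edge (branch) coverage needs both outcomes of the decision; for this
# simple graph the same two tests also achieve node coverage and prime
# path coverage (prime paths: [A, B, D] and [A, C, D]).
program(True)
program(False)
print(trace)  # ['A', 'B', 'D', 'A', 'C', 'D']
```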
6.

In software testing, particularly in data flow testing, the terms DU paths and DU
pairs are used to describe specific types of paths within a program with respect
to its variables. Here's what each term means:

1. DU Path (Definition-Use Path): A DU path for a particular variable in a program is a path from the point where that variable is defined (i.e., given a value) to a point where that value is used, without any redefinitions of the variable along that path. A DU path can be complete, from the start to the end of the program, or it can be a segment of a complete path.
2. DU Pair (Definition-Use Pair): A DU pair consists of the definition of a
variable and the use of that variable such that the use is reachable from the
definition (i.e., there exists a DU path from the definition to the use). DU pairs
are essential for data flow testing as they help identify the data dependencies in
the code and to generate test cases that ensure correct implementation of these
dependencies.

For example, consider the following simple program:

1. int x; // declaration
2. x = 5; // definition
3. print(x); // use
4. x = 10; // redefinition
5. print(x); // use

In this example, we have two DU pairs:

• Pair 1: the definition at line 2 and the use at line 3, and
• Pair 2: the definition at line 4 and the use at line 5.

Each pair represents a DU path. The first path goes from line 2 to line 3, and the
second path goes from line 4 to line 5. Note that the path from line 2 to line 5 is
not a DU path for the first definition because there's a redefinition at line 4.
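The example can be written as runnable code, with comments marking each definition and use of x:

```python
outputs = []

x = 5              # definition d1 (line 2 in the listing above)
outputs.append(x)  # use u1 -> DU pair (d1, u1); DU path from line 2 to line 3
x = 10             # definition d2 (line 4): the redefinition kills d1
outputs.append(x)  # use u2 -> DU pair (d2, u2); DU path from line 4 to line 5

print(outputs)  # [5, 10] -- u1 observes d1's value, u2 observes d2's value
```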

7.

Input Domain Modeling is a software testing technique that involves categorizing the input data into different classes and selecting representative values from these classes for testing. The goal is to reduce the number of test cases that need to be executed while still ensuring adequate coverage of the application's functionality.

There are several approaches to input domain modeling:

1. Partitioning: This approach divides the input domain into a number of "equivalence classes" based on some characteristic. Each equivalence class should behave similarly, so only a few test cases per class should be necessary. Equivalence classes can be valid (values that should be accepted by the component under test) or invalid (values that should be rejected).
2. Boundary Value Analysis: This approach involves testing at the boundaries of
equivalence classes. Errors often occur at the boundaries of input domains, so
it's important to test these cases in addition to some nominal cases within each
equivalence class.
3. Decision Table Testing: This is a systematic approach used when the system
behavior is different for different combinations of input. The tester identifies
inputs and their corresponding outputs and creates a decision table. Each row of
the table represents a unique combination of inputs and their corresponding
output.
4. Cause-Effect Graphing: This method graphically represents the inputs and the
associated outputs, which are then used to derive test cases. It's particularly
useful when the input to output conversion is not straightforward and involves
various rules.
5. Error Guessing: Based on the tester's experience and intuition, this approach
involves guessing where errors are most likely to occur and creating tests
specifically for those situations.
6. Combinatorial Testing: In this approach, combinations of inputs are tested. It
can be applied in situations where different parameters are independent of each
other and can have different values.

These different approaches can be combined as needed to provide adequate coverage of the input domain while keeping the number of test cases manageable. The choice of which approaches to use depends on the nature of the software being tested and the resources available for testing.
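As a small sketch of the combinatorial approach (with assumed example parameters), independent input parameters can be combined exhaustively with itertools.product; for larger parameter spaces, pairwise-selection tools would pick a much smaller subset of these combinations.

```python
import itertools

browsers = ["firefox", "chrome"]
oses = ["linux", "windows", "macos"]
locales = ["en", "de"]

# Every combination of the three independent parameters.
test_cases = list(itertools.product(browsers, oses, locales))
print(len(test_cases))   # 12 = 2 * 3 * 2
print(test_cases[0])     # ('firefox', 'linux', 'en')
```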
8.

Equivalence Class Partitioning and Boundary Value Analysis are both techniques used in software testing to identify test cases in a systematic manner.

Equivalence Class Partitioning

Equivalence Class Partitioning involves dividing the input domain of a program into classes of data from which test cases can be derived. The idea here is that all data within an equivalence class is treated the same way by the program. If a program works correctly for one case within an equivalence class, it's assumed to work correctly for all cases in the class.

For example, let's consider a simple program that accepts an integer input
between 1 and 100 (inclusive). According to equivalence class partitioning, we
can divide the input into three equivalence classes:

1. Inputs less than 1 (Invalid)
2. Inputs between 1 and 100 (Valid)
3. Inputs greater than 100 (Invalid)

From these classes, we could select one representative value (for instance, -10
from the first class, 50 from the second class, and 150 from the third class) for
our testing.

Boundary Value Analysis

Boundary Value Analysis (BVA) is based on the observation that errors are
often found at the boundaries of the defined input domain. Using BVA, you
would test the boundaries of input ranges, rather than arbitrary points within the
range.

Continuing with the same program that accepts an integer between 1 and 100
(inclusive), for boundary value analysis, we would create test cases that include:

1. Values just below the lower boundary (e.g., 0)
2. The lower boundary itself (e.g., 1)
3. Values just above the lower boundary (e.g., 2)
4. Values just below the upper boundary (e.g., 99)
5. The upper boundary itself (e.g., 100)
6. Values just above the upper boundary (e.g., 101)

In practice, both techniques are often used together in testing: Equivalence Class Partitioning is used to select representative values from within a range, and Boundary Value Analysis is used to select representative values at the boundaries of the range. Both aim to minimize the number of test cases while maximizing the effectiveness of the testing process.
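The two techniques combined for the 1..100 example can be sketched as follows (the accepts function is a hypothetical stand-in for the component under test):

```python
LOW, HIGH = 1, 100

def accepts(n):
    """Hypothetical component under test: accept integers in [1, 100]."""
    return LOW <= n <= HIGH

# Equivalence class representatives: below, inside, above the valid range.
ecp_cases = [-10, 50, 150]

# Boundary value analysis: just below, on, and just above each boundary.
bva_cases = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

results = {n: accepts(n) for n in ecp_cases + bva_cases}
print(results)
# {-10: False, 50: True, 150: False, 0: False, 1: True, 2: True,
#  99: True, 100: True, 101: False}
```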

9.

Grey box testing is a software testing strategy that involves a combination of black box and white box testing methods. This type of testing takes into account the internal structure of the system (like white box testing), but it also tests the system from an external or user perspective (like black box testing).

Here are some common techniques used in grey box testing:

1. Matrix Testing: This technique involves creating a matrix to show how different variables or conditions interact with each other. For example, in a login system, a matrix might show how the system responds to different combinations of valid and invalid usernames and passwords. The goal is to ensure that all possible combinations are handled correctly.
2. Regression Testing: This technique involves retesting a system after
modifications have been made, to ensure that the changes have not introduced
new bugs and that existing functionality still works as expected. For example, if
a bug was fixed in a sorting algorithm, regression testing might involve
rerunning existing tests to ensure that the fix didn't break anything else.
3. Pattern Testing: This technique involves understanding the software design
patterns used in the system, and creating tests based on these patterns. For
example, if a system uses the Observer design pattern, tests might be created to
ensure that when a change is made to an observed object, all observing objects
are updated correctly.
4. Orthogonal Array Testing (OAT): This is a systematic, statistical way of
testing pair-wise interactions. This method is highly useful when the number of
inputs to the system is large, but testing all possible inputs is not feasible. For
example, if you have a system that accepts three inputs, each with ten possible
values, OAT allows you to select a subset of test cases that cover all pair-wise
combinations without needing to test all 1,000 possible combinations.
5. State-Based Testing: This technique is used when a system or component has
predefined states. Test cases are designed to validate transitions from one state
to another and to validate that operations associated with the given state behave
as expected. For example, a media player can have states like playing, paused,
stopped, etc. Test cases would ensure that transitions between these states work
correctly (e.g., paused to playing, playing to stopped, etc.).
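The media-player example lends itself to a minimal state-based testing sketch. The states and transition table below are assumed for illustration; the tests walk the valid transitions and confirm that invalid ones are rejected.

```python
VALID_TRANSITIONS = {
    ("stopped", "play"):  "playing",
    ("playing", "pause"): "paused",
    ("playing", "stop"):  "stopped",
    ("paused", "play"):   "playing",
    ("paused", "stop"):   "stopped",
}

class MediaPlayer:
    def __init__(self):
        self.state = "stopped"

    def handle(self, event):
        key = (self.state, event)
        if key not in VALID_TRANSITIONS:
            raise ValueError(f"invalid transition: {key}")
        self.state = VALID_TRANSITIONS[key]

# State-based test: exercise a sequence of valid transitions...
p = MediaPlayer()
for event in ["play", "pause", "play", "stop"]:
    p.handle(event)
print(p.state)  # stopped

# ...and check that an undefined transition is rejected.
try:
    p.handle("pause")  # ("stopped", "pause") is not in the table
except ValueError as e:
    print("rejected:", e)
```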

Remember that in any type of testing, the choice of technique depends on the
specifics of the system being tested, including its complexity, its risk profile, the
resources available for testing, and the stage of development.
10.

Symbolic execution is a method used in software testing and analysis where, instead of using actual data values to execute a program, symbolic values are used. The path that the program can take is expressed as a path condition, and a constraint solver tries to find actual data (concrete values) that can make the program follow that path. The goal is to find as many feasible execution paths as possible and validate the behavior of the program.

Here's a simple toy example to illustrate:

Let's say we have a program with a function fun(x, y):

def fun(x, y):
    if x > 10:
        return y + 1
    else:
        return y - 1

Now, if we execute this function symbolically with inputs x and y, we do not use real values for x and y. Instead, we say that x and y are symbolic variables that can take any value.

We can have two path conditions depending on the if condition:

1. x > 10, for which the output will be y + 1
2. x <= 10, for which the output will be y - 1

Next, a constraint solver will attempt to find concrete values for x and y that
will satisfy these path conditions. For example, it could choose x=11, y=5 for
the first path, and x=10, y=5 for the second path.

This way, symbolic execution can help in finding various paths in the program
and generate test cases (the concrete values) which cover these paths. It's
particularly effective in testing programs where the number of possible paths is
very large.
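A hand-rolled sketch of this process for fun(x, y): each path is represented by its path condition, and a deliberately naive "constraint solver" searches a small integer domain for concrete values satisfying it. (Real symbolic execution engines use SMT solvers such as Z3 instead of brute-force search.)

```python
paths = [
    {"condition": lambda x: x > 10,  "output": lambda y: y + 1},
    {"condition": lambda x: x <= 10, "output": lambda y: y - 1},
]

def solve(condition, domain=range(-20, 21)):
    """Return a concrete value satisfying the path condition, or None."""
    for v in domain:
        if condition(v):
            return v
    return None

for path in paths:
    x = solve(path["condition"])  # concrete value for this path
    y = 5                         # y is unconstrained on both paths
    print(f"x={x}, y={y}, output={path['output'](y)}")
```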
11.

1. Black Box Testing: In this type of testing, the tester doesn't need to know about
the internal workings of the system. The system is treated as a "black box", and
the focus is on inputs and outputs. The goal is to check that for given inputs, the
system produces the expected outputs.
2. White Box Testing: Also known as clear box or glass box testing, white box
testing involves understanding the internal workings of the system. Testers have
knowledge of the internal data structures and algorithms. This type of testing
often includes code coverage analysis, where the aim is to ensure that all paths
through the code are tested.
3. Grey Box Testing: This is a combination of black box and white box testing.
The tester has some knowledge of the system internals but focuses on testing
from a user's perspective. This allows for a more comprehensive test that covers
internal system logic as well as user interface and user experience.
4. Unit Testing: This is the process of testing individual components of a software
system in isolation. The purpose is to validate that each unit of the software
performs as designed. A unit can be an individual function, method, procedure,
module, or object in a software program.
5. Integration Testing: This type of testing aims to test the interfaces between
components against a software design. It checks how different units interact
with each other and that the system works correctly when these components are
integrated.
6. System Testing: In system testing, the entire system is tested as a whole. The
purpose is to evaluate the system's compliance with the specified requirements.
This is typically a high-level test designed to evaluate the system's end-to-end
functionality.
7. Acceptance Testing: This is a type of testing performed to determine whether a
system satisfies the requirements specified in the initial design phase. The main
aim of this testing is to evaluate the system’s compliance with the business
requirements and assess whether it is acceptable for delivery. It can be
performed by the client, an external entity, or users to validate the functionality
against the business requirements.

Remember that the types of testing required depend on the specifics of the
system being tested, including its complexity, its risk profile, the resources
available for testing, and the stage of development. Also, this list is not
exhaustive; there are many other types of testing, including regression testing,
performance testing, security testing, etc.

12.
a.

Here's a brief overview of the coverage criteria you mentioned:

1. Functional Coverage: This type of coverage is a measure of how much of the functional requirements of a system have been tested. The aim is to create tests for each function defined in the system's functional requirements. For the given code fragment, it would mean having at least one test that invokes the foo function and checks whether it performs as expected based on its requirements.
2. Statement Coverage: Statement coverage is a measure of how many of the lines of source code have been executed during testing. For the given code fragment, complete statement coverage would require tests that cause each of the lines of code to be executed. This would include at least one test where x and y are both greater than 0 (to execute the code inside the if statement), and at least one test where either x or y is not greater than 0 (to execute the function return outside the if statement).
3. Conditional Coverage: Also known as predicate or decision coverage, this is a measure of how many of the boolean expressions have been tested for both true and false results. For the given code fragment, complete conditional coverage would require tests that make the condition (x > 0) && (y > 0) both true and false. For instance, one test might have x and y both greater than 0 (making the condition true), while another test could have either x or y (or both) less than or equal to 0 (making the condition false).
4. Branch Coverage: Branch coverage is a measure of how many of the possible paths through the program's control structures have been followed. In the given code fragment, there are two branches: one for the path followed when (x > 0) && (y > 0) is true, and one for when this condition is false. Complete branch coverage would require at least two tests: one that follows the first branch (where x and y are both greater than 0), and one that follows the second branch (where either x or y is not greater than 0).

Note that these are basic coverage criteria and the code snippet given is simple.
For complex code, achieving 100% coverage on all these criteria can be
challenging and sometimes not feasible or cost-effective. Often, a mix of these
criteria is used to achieve an acceptable level of coverage depending on the
criticality of the system, available resources, and other factors.
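The question's code fragment is not reproduced here, so the following is an assumed Python rendering of a function guarded by (x > 0) && (y > 0), together with the two tests that jointly achieve statement, condition/decision, and branch coverage for it.

```python
def foo(x, y):
    if x > 0 and y > 0:
        return x * y   # executed only when the condition is true
    return 0           # executed when the condition is false

# Test 1 drives the condition true (then-branch);
# Test 2 drives it false (the implicit else-branch).
assert foo(2, 3) == 6
assert foo(-1, 3) == 0
print("both branches covered")
```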

13.a

In dynamic unit testing, we test individual components of the software, often in isolation from the rest of the system. The testing environment must be set up to allow each component to function as it would in the complete system, which often involves creating test drivers to invoke the unit under test and stubs to simulate the behavior of other units that the unit under test interacts with.

Here's a simple diagram that might illustrate this:

+--------------------------------------------------+
|          Dynamic Unit Test Environment           |
|                                                  |
|   +-------------+          +----------------+    |
|   |    Test     |  ----->  |      Unit      |    |
|   |   Driver    |  <-----  |   Under Test   |    |
|   +-------------+          +--------+-------+    |
|                                     |            |
|                                     v            |
|                            +----------------+    |
|                            |   Test Stub    |    |
|                            +----------------+    |
+--------------------------------------------------+
In the diagram:

• The "Test Driver" is a piece of code that sets up the necessary inputs and calls
the unit under test. The driver might be written specifically for the test, or it
might be a part of the actual system that is used to invoke the unit under test.
• The "Unit Under Test" is the individual component that is being tested. In
dynamic unit testing, this might be a single function, method, module, or object.
• The "Test Stub" is a piece of code that simulates the behavior of a unit that the
unit under test interacts with. Stubs provide predefined responses to calls made
during the test, allowing the unit under test to be isolated from the rest of the
system.

In a dynamic unit test environment, the test driver calls the unit under test,
passing in the necessary inputs. The unit under test might make calls to other
units, which are intercepted by the stubs. Once the unit under test has finished
executing, the test driver checks the output against the expected results. If the
output matches the expected results, the test passes. If not, the test fails.
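In Python, the stub role is often played by a mock object. The following is a sketch using the standard library's unittest.mock; the report/database names are hypothetical.

```python
from unittest.mock import Mock

def build_report(db):
    # Unit under test: depends on a database collaborator.
    rows = db.fetch_rows()
    return f"{len(rows)} rows"

# The mock acts as the test stub: it returns a canned response instead
# of touching a real database.
db_stub = Mock()
db_stub.fetch_rows.return_value = [1, 2, 3]

# The code below acts as the driver: invoke the unit, check the output.
result = build_report(db_stub)
assert result == "3 rows"
db_stub.fetch_rows.assert_called_once()
print(result)  # 3 rows
```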

13.b

Control flow testing and data flow testing are two distinct methods used in the
field of software testing. Here's an overview of each type and their major
differences:

Control Flow Testing:

Control flow testing is a type of testing where the tester checks the sequence of
execution of various commands and processes in a program. In this method,
testers typically use a Control Flow Graph (CFG) to represent the logical order
in which different commands are executed in a program. The nodes in this
graph represent different parts of the program, and edges represent the control
flow between them.

In control flow testing, the tester usually focuses on statement coverage, branch
coverage, and path coverage to ensure that all the important control paths in the
program have been tested.

Data Flow Testing:


Data Flow Testing is a testing method where the tester focuses on the data
variables and their values used in the program. The tester tracks the changes of
these data variables from their declaration to their last use to make sure the data
flows correctly and is properly manipulated.

In data flow testing, the tester is interested in ensuring all variables are defined
before they are used (definition-use testing), and that all defined variables are
used correctly (definition-clear path testing).

Major Difference:

The major difference between control flow testing and data flow testing lies in
their primary focus. While control flow testing is more concerned about the
sequence and flow of program control (i.e., which parts of the program are
executed, in what order, and under what conditions), data flow testing is
concerned about how data values are initialized, manipulated, and utilized
throughout the program.

In short, control flow testing is about the "paths" that execution can take
through the program, while data flow testing is about the "values" that data can
take as it flows through the program.

Both methods are essential for comprehensive software testing and often used
together to ensure the correct functionality of a software system.
14.a

Mutation testing is a type of software testing where certain statements of the source code are changed or "mutated" to check if the test cases can find the errors. This is typically used to evaluate the quality of the test suite and its ability to detect faults. Here are seven types of common mutation operators:

1. Statement Deletion (SDL): This involves deleting a statement from the source
code. For example, if we have a line z = x + y; in our code, the mutated code
would simply omit this line.
2. Statement Insertion (STI): This involves adding a statement to the source
code. For instance, we might add a line x = 0; to our code.
3. Statement Replacement (SRT): This involves replacing a statement with
another. For instance, if we have a line z = x + y;, we might replace it with z =
x - y;.
4. Alteration of Operand (AOR): This involves changing an operand in the
source code. For instance, if we have a line z = x + y;, we might change it to z =
x + 10;.
5. Alteration of Operator (AORU/AORS): This involves changing an operator
in the source code. For instance, if we have a line z = x + y;, we might change it
to z = x * y;.
6. Conditional Operator Replacement (COR): This involves changing a logical
operator in a condition. For instance, if we have a line if (x > y), we might
change it to if (x < y).
7. Variable Replacement (VRS): This involves changing a variable to another
valid variable. For instance, if we have a line z = x + y;, we might change it to z
= a + y;.

Keep in mind that the goal of mutation testing is not to permanently modify the
code but to generate variations (mutants) of the original code to test the strength
of your test cases. If a test suite fails to detect a mutant, it suggests a potential
weakness in the testing strategy.
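A few of the operators above can be sketched by text substitution on a toy source line (real mutation tools mutate the AST or bytecode rather than raw text, so this is illustration only):

```python
ground = "z = x + y"   # the ground (original) statement

mutants = {
    "operator replacement ('+' -> '-')": ground.replace("+", "-"),
    "operand alteration (y -> 10)":      ground.replace("y", "10"),
    "variable replacement (x -> a)":     ground.replace("x", "a"),
}

x, y, a = 2, 3, 7
for name, source in mutants.items():
    scope = {"x": x, "y": y, "a": a}
    exec(source, scope)  # run the mutated statement in a fresh scope
    print(name, "->", source, "=> z =", scope["z"])
```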

15.a

1. Touring: This is like taking a general overview or walk-through of the software system. The tester tries to understand how the system works, what it does, its features, and functionalities. In a sense, it's like taking a guided tour of a city. For example, if you're testing a new email application, you might "tour" the system by opening the application, examining the inbox, sending a test email, checking the settings, and so on.
2. Side Trips: These involve testing features that are not the main focus but could
be interesting or potentially problematic. You could think of it as taking a side
trip while on a tour to visit an interesting place not covered by the main
itinerary. For instance, in the email application, a side trip could be testing how
the application handles a large number of emails, or how it behaves when you
try to send an email with an extremely large attachment.
3. Detours: These are unexpected paths or events that occur while testing. You
might encounter an unexpected behavior and decide to "detour" from your
planned test to investigate it further. It's like making a spontaneous decision to
go off-route during a city tour to check out something that piqued your interest.
In our email application example, a detour could be spotting a strange behavior
when sending an email, and deciding to investigate that issue further, even
though it was not part of your initial test plan.

These metaphors help testers to conceptualize their approaches to exploratory testing and to communicate their strategies and findings to others.

Draw the figure from ppt


16.b

Call Graph: A call graph is a directed graph that represents calling
relationships between subroutines in a computer program. Each node represents
a procedure and each edge (f, g) indicates that procedure f calls procedure g.
The graph therefore gives a picture of how control flows through the program.
For example, consider a simple program where Main calls function A, and
function A calls function B. The call graph would look like this:

Main
 |
 v
Function A
 |
 v
Function B
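One minimal way to represent such a graph in code (a plain adjacency list; the helper function below is illustrative, not a standard API) is:

```python
# Call graph as an adjacency list: each procedure maps to the
# procedures it calls directly.
call_graph = {
    "Main": ["Function A"],
    "Function A": ["Function B"],
    "Function B": [],
}

def reachable(graph, start):
    """Return every procedure transitively callable from `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for callee in graph.get(node, []):
            if callee not in seen:
                seen.add(callee)
                stack.append(callee)
    return seen
```

Queries like `reachable(call_graph, "Main")` show which procedures control can flow into, which is useful when deciding an integration-testing order.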

Inheritance Graph: In object-oriented programming, an inheritance graph
represents the inheritance relationships between classes. Nodes represent classes
and an edge from class A to class B indicates that B is a subclass of A (B
inherits from A).
For example, consider a simple inheritance structure with a Vehicle class, and
Car and Bicycle classes that inherit from Vehicle. The inheritance graph would
look like this:

   Vehicle
    /   \
   v     v
 Car   Bicycle
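In Python, the same structure can be written down directly, and the inheritance edges read back off each class's bases (a sketch; the class bodies are empty placeholders):

```python
class Vehicle:
    pass

class Car(Vehicle):
    pass

class Bicycle(Vehicle):
    pass

# Edges of the inheritance graph as (superclass, subclass) pairs,
# recovered from each class's declared base classes.
edges = [(base.__name__, cls.__name__)
         for cls in (Car, Bicycle)
         for base in cls.__bases__]
```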

1. Coupling DU Pairs: In data flow testing, DU pairs (definition-use pairs)
identify places in the program where a variable is defined and then used.
Coupling DU pairs extend this idea to inter-procedural data flow analysis,
where the definition and the use of a variable occur in different procedures
or methods.
For example, suppose methodA defines and assigns a value to variable x, and
methodB uses x in some calculation. Then (methodA, methodB) forms a coupling
DU pair for the variable x.
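A minimal sketch of such a coupling DU pair (the method names follow the example above; the concrete values are invented):

```python
def method_a():
    x = 10        # definition of x
    return x

def method_b(x):
    return x * 2  # use of x in a calculation

# The definition in method_a reaches the use in method_b through the
# call/return coupling, forming a coupling DU pair for x.
result = method_b(method_a())
```

A coupling-based test would exercise exactly this definition-to-use flow across the procedure boundary.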

Remember, these graphs and concepts are tools that help us understand and
communicate about complex relationships within a software program. They are
particularly useful in managing and analyzing large codebases.

17.a

Functional testing is a type of software testing in which the system is tested
against the functional requirements or specifications. Here are the four
important steps involved in functional testing:

1. Understanding the Requirements: The first step in functional testing is to
understand the software's functional requirements. This includes reading the
product documentation, user stories, use cases, or any other specifications
provided, and understanding the software's intended behavior, features, and
functions.
2. Creating Test Cases: Based on the functional requirements, the tester then
creates test cases that will be used to verify the software's functionality. Each
test case should have a clear description of the scenario to be tested, the inputs
to be used, the expected results, and the actual results.
3. Executing Test Cases: Once the test cases are created, the next step is to
execute them. This involves running the software with each set of inputs and
comparing the actual results with the expected results. Any discrepancies are
noted as potential issues.
4. Reporting and Re-testing: After executing the test cases, any issues found are
reported to the development team. Once the issues have been fixed, the tester
re-tests the software to confirm that the fixes work as expected.

It's important to note that functional testing can be performed at various levels
of testing such as unit testing, integration testing, system testing, and acceptance
testing. The goal of functional testing is to ensure that the software behaves as
expected and meets all of the functional requirements specified by the users and
stakeholders.
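The four steps can be sketched in code. Assuming a hypothetical login function as the system under test (both the function and the expected results below are invented for illustration), the test cases record the scenario, inputs, and expected results (step 2), are executed (step 3), and failures are collected for reporting (step 4):

```python
# Hypothetical function under test.
def login(username, password):
    return username == "admin" and password == "secret"

# Step 2: test cases derived from the (assumed) requirements.
test_cases = [
    {"scenario": "valid credentials", "inputs": ("admin", "secret"), "expected": True},
    {"scenario": "wrong password", "inputs": ("admin", "wrong"), "expected": False},
]

# Step 3: execute each case and compare actual vs. expected results.
for case in test_cases:
    case["actual"] = login(*case["inputs"])
    case["passed"] = case["actual"] == case["expected"]

# Step 4: anything in `failures` would be reported and later re-tested.
failures = [c["scenario"] for c in test_cases if not c["passed"]]
```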

18.a
The classification of triangles is based on the lengths of its sides. A scalene
triangle has all sides of different lengths, an isosceles triangle has at least two
sides of equal length, and an equilateral triangle has all sides of equal length. A
right-angled triangle has one angle that measures 90 degrees, which by the
Pythagorean theorem means that the square of the length of the longest side
equals the sum of the squares of the lengths of the other two sides.

(i) For the boundary condition A + B > C (scalene triangle), this is the
triangle inequality theorem that states the sum of the lengths of any two sides of
a triangle must be greater than the length of the third side. This condition is
necessary for a triangle to exist.

• Test Case 1: (A=2, B=2, C=4) - this case is right on the boundary where A + B
= C. This should not form a triangle.
• Test Case 2: (A=2, B=2, C=5) - this case is beyond the boundary where A + B <
C. This should not form a triangle.
• Test Case 3: (A=3, B=4, C=5) - this case satisfies the condition A + B > C. This
should form a scalene triangle.

(ii) For the boundary condition A = C (isosceles triangle), this condition
represents an isosceles triangle where two sides are of equal length.

• Test Case 1: (A=2, B=3, C=2) - this case is on the boundary where A = C, but A
≠ B. This should form an isosceles triangle.
• Test Case 2: (A=2, B=2, C=2) - this case is beyond the boundary where A = B =
C. This should form an equilateral triangle, not an isosceles triangle.
• Test Case 3: (A=3, B=4, C=5) - this case is off the boundary, since A ≠ C.
This should form a scalene triangle, not an isosceles triangle.

(iii) For the boundary condition A = B = C (equilateral triangle), this
condition represents an equilateral triangle where all sides are of equal length.

• Test Case 1: (A=2, B=2, C=2) - this case is on the boundary where A = B = C.
This should form an equilateral triangle.
• Test Case 2: (A=2, B=2, C=3) - this case is beyond the boundary where A = B,
but A ≠ C. This should form an isosceles triangle, not an equilateral triangle.
• Test Case 3: (A=3, B=4, C=5) - this case is off the boundary, since all three
sides differ. This should form a scalene triangle, not an equilateral triangle.
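The boundary cases above can be checked against a small classifier. This sketch assumes positive side lengths and applies all three triangle inequalities, not just A + B > C:

```python
def classify_triangle(a, b, c):
    # Every pair of sides must sum to more than the third side.
    if a + b <= c or b + c <= a or a + c <= b:
        return "Not a Triangle"
    if a == b == c:
        return "Equilateral"
    # Check the equilateral case first, so "Isosceles" means exactly two
    # equal sides here.
    if a == b or b == c or a == c:
        return "Isosceles"
    return "Scalene"
```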

18.b
Case | A | B | C | A+B>C | A=B | B=C | A=C | Result
-----|---|---|---|-------|-----|-----|-----|---------------
  1  | 2 | 2 | 4 | No    | Yes | No  | No  | Not a Triangle
  2  | 2 | 2 | 2 | Yes   | Yes | Yes | Yes | Equilateral
  3  | 3 | 4 | 5 | Yes   | No  | No  | No  | Scalene
  4  | 5 | 5 | 7 | Yes   | Yes | No  | No  | Isosceles
  5  | 5 | 7 | 5 | Yes   | No  | No  | Yes | Isosceles

Explanation:

• Case 1: A + B = C, which does not satisfy the triangle inequality condition, so
it's not a triangle.
• Case 2: All sides are equal and the sum of any two sides is greater than the third
side, so it's an equilateral triangle.
• Case 3: No sides are equal and the sum of any two sides is greater than the third
side, so it's a scalene triangle.
• Case 4: Two sides are equal and the sum of any two sides is greater than the
third side, so it's an isosceles triangle.
• Case 5: Two sides are equal and the sum of any two sides is greater than the
third side, so it's an isosceles triangle.

You can add more rows to the table to cover other possible cases. For example,
you may want to test the behavior when one or more of the inputs are zero or
negative. These test cases would help you to more thoroughly verify the
correctness of the program.
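For instance, a validity check that also covers the zero and negative cases suggested above might look like this (a sketch; the function name is invented for the example):

```python
def is_valid_triangle(a, b, c):
    # Sides must be positive numbers...
    if a <= 0 or b <= 0 or c <= 0:
        return False
    # ...and every pair of sides must sum to more than the third.
    return a + b > c and b + c > a and a + c > b
```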

19.a

Grey box testing, also known as gray box testing or translucent testing, is a
software testing technique that combines elements of both black box testing and
white box testing. In grey box testing, the tester has partial knowledge of the
internal workings of the system, allowing for a more comprehensive and
effective testing approach.

Importance of Grey Box Testing:

1. Better Test Coverage: Grey box testing provides a middle ground between
black box and white box testing. It allows testers to gain a deeper understanding
of the system, enabling them to create test cases that cover critical paths and
scenarios that may be missed in black box testing.
2. Effective Bug Detection: Grey box testing helps in identifying defects or bugs
that may not be easily detectable through black box testing alone. With partial
knowledge of the internal structure, testers can focus their efforts on areas that
are more likely to have issues.
3. Validation of Internal Logic: Grey box testing helps validate the internal logic
and algorithms of the system. Testers can design tests that specifically target
complex decision-making processes or data manipulations within the system.
4. Improved Test Efficiency: By having limited knowledge of the internal
workings, testers can optimize their testing efforts. They can prioritize test cases
based on critical areas or modules of the system, making the testing process
more efficient.

Advantages of Grey Box Testing:

1. Comprehensive Testing: Grey box testing combines the strengths of both
black box and white box testing, allowing for a more thorough examination of
the system's functionality and internal structure.
2. Early Detection of Issues: Grey box testing can help identify defects at an
early stage of the development cycle, leading to better bug fixing and reducing
the cost of fixing issues later on.
3. Realistic Testing Environment: Grey box testing is closer to real-world
scenarios as it takes into account both user perspective and internal system
operations, leading to more realistic testing.

Disadvantages of Grey Box Testing:


1. Partial Visibility: Testers may not have complete knowledge of the system's
internal structure, which can limit the scope of testing and potentially overlook
certain critical areas.
2. Dependency on Documentation: Grey box testing often relies on available
documentation or specifications to gain insight into the system. Inadequate or
outdated documentation can impact the effectiveness of the testing process.
3. Time and Resource Constraints: Grey box testing can require additional time
and resources compared to black box testing due to the need for analysis of the
internal workings and the design of specific test cases.

Overall, grey box testing strikes a balance between the external behavior of the
system (black box testing) and the internal logic (white box testing). It can
provide valuable insights and improve the quality and reliability of the software
by targeting specific areas that may have been missed in black box testing
alone.

19.b

Symbolic execution is a technique used in software testing and analysis to
reason about the potential execution paths of a program using symbolic values
instead of concrete inputs. A symbolic execution tree (also known as a symbolic
execution graph) is a representation of the different paths and decisions taken
during the symbolic execution process.

Let's consider a simple example to understand the concept of a symbolic
execution tree:

def max_of_three(a, b, c):
    if a > b:
        if a > c:
            return a
        else:
            return c
    else:
        if b > c:
            return b
        else:
            return c

In symbolic execution, instead of using concrete values for a, b, and c, we use
symbolic values that can take any valid value. Let's symbolically execute the
above code with symbolic values for a, b, and c.

1. Start at the root of the symbolic execution tree with the first branch
condition: a > b.

        (a > b)
        /     \

2. On the true branch, the next condition tested is a > c. On the false branch,
the next condition tested is b > c.

        (a > b)
        /     \
   (a > c)   (b > c)

3. On the left subtree, if a > c holds, the path condition is (a > b) ∧ (a > c)
and the program returns a. Otherwise, the path condition is
(a > b) ∧ ¬(a > c) and the program returns c.

4. Similarly, on the right subtree, if b > c holds, the path condition is
¬(a > b) ∧ (b > c) and the program returns b. Otherwise, the path condition is
¬(a > b) ∧ ¬(b > c) and the program returns c.

The complete tree has four leaves, one per execution path:

              (a > b)
             /       \
        (a > c)     (b > c)
        /     \     /     \
   return a  return c  return b  return c

The resulting symbolic execution tree represents all possible execution paths
through the program. Each node in the tree represents a decision point or branch
in the code, and the edges represent the different outcomes of those decisions.

Symbolic execution trees are useful for test case generation, code coverage
analysis, and identifying potential issues in the program. By exploring the
different paths and decision points in the program, symbolic execution can help
identify code vulnerabilities, unreachable code, and inadequate test coverage.
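As an illustrative sketch of how those paths drive test generation, the four path conditions of max_of_three can be listed by hand, and concrete witness inputs found with a tiny brute-force search (a stand-in for the constraint solver a real symbolic executor would use):

```python
# Path condition -> symbolic return value, one entry per leaf of the tree.
paths = [
    ("a > b and a > c", "a"),
    ("a > b and not (a > c)", "c"),
    ("not (a > b) and b > c", "b"),
    ("not (a > b) and not (b > c)", "c"),
]

def find_witness(path_condition, candidates=range(3)):
    """Brute-force a concrete (a, b, c) satisfying the path condition."""
    for a in candidates:
        for b in candidates:
            for c in candidates:
                if eval(path_condition):
                    return (a, b, c)
    return None

# One concrete test input per execution path: together they give full
# path coverage of max_of_three.
witnesses = [find_witness(cond) for cond, _ in paths]
```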

20.a

To explain the symbolic execution of the POWER procedure with symbolic
values for X and Y (denoted a1 and a2), we will go through the code fragment
step by step, considering the symbolic inputs.

1. POWER: PROCEDURE(X, Y);: This line defines the POWER procedure that
takes two parameters X and Y.
2. Z ← 1;: Z is initialized to 1.
3. J ← 1;: J is initialized to 1.
4. LAB: IF Y > J THEN: We compare the symbolic value of Y with the current
value of J.
5. DO; Z ← Z * X;: If the condition Y > J is true, we enter the loop body and
update Z by multiplying it with the symbolic value of X.
6. J ← J + 1;: We increment J by 1.
7. GO TO LAB; END;: We jump back to the LAB label, re-evaluating the
condition Y > J. If the condition is still true, we continue the loop. Otherwise,
we proceed to the next line.
8. RETURN (Z);: We return the symbolic value of Z.
9. END;: End of the POWER procedure.
The symbolic execution of POWER(a1, a2) generates a symbolic execution
path through the code, accounting for different possible values of X and Y.
Since X and Y are symbolic, the execution is symbolic as well and represents
multiple potential execution paths rather than a single concrete one.

For example, if a1 represents a symbolic value for X and a2 represents a
symbolic value for Y, the symbolic execution of POWER(a1, a2) will consider
all possible combinations of a1 and a2, exploring different execution paths
based on the conditions and loops in the code.

By performing symbolic execution, we can reason about the behavior of the
POWER procedure for various input combinations without needing concrete
values. This allows us to analyze the program's behavior and identify potential
issues or explore different scenarios without explicitly executing the code with
specific inputs.
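To make the behavior concrete, the PL/I-style fragment can be transcribed into Python, with the label/GOTO loop rewritten as a while loop (a sketch of the fragment as described above, not the original code):

```python
def power(x, y):
    z = 1
    j = 1
    while y > j:      # LAB: IF Y > J THEN DO;
        z = z * x     # Z <- Z * X
        j = j + 1     # J <- J + 1
    return z          # RETURN (Z);

# For an integer y >= 1 this computes x**(y - 1): the loop body runs
# once for each J in 1 .. y-1.
```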

20.b

To explain the symbolic execution of the POWER procedure with symbolic
values a1 and a2, let's go through the code fragment step by step and construct
the execution tree for the call POWER(a1, a2).

Execution Tree for POWER(a1, a2):

POWER(a1, a2)
Z ← 1, J ← 1
      |
  (a2 > 1)?
   /      \
 false    true
   |        |
return 1  Z ← a1, J ← 2
              |
          (a2 > 2)?
           /      \
         false    true
           |        |
      return a1   Z ← a1*a1, J ← 3
                      |
                  (a2 > 3)?
                   /      \
                 false    true
                   |        |
            return a1^2    ...

In general, the leaf reached under the path condition
a2 > 1 ∧ a2 > 2 ∧ ... ∧ a2 > n-1 ∧ ¬(a2 > n) returns a1^(n-1).

Explanation:

• The execution tree starts with the root node representing the initial call
POWER(a1, a2).
• At each level of the tree, we encounter nodes representing the values of
variables (Z and J) and the condition Y > J.
• If the condition Y > J holds (at the first test, a2 > 1), we follow the branch
that enters the DO loop.
• Inside the loop, the value of Z is updated by multiplying it with X (a1). The
value of J is incremented by 1.
• After the loop, we go back to the LAB label and re-evaluate the condition Y >
J. If the condition is true, we continue with the loop. Otherwise, we proceed to
the RETURN statement.
• The RETURN statement returns the final value of Z.
• The execution tree continues to expand and explore all possible paths based on
the symbolic values of a1 and a2.
• The tree grows one level deeper for each additional loop iteration, since each
test of Y > J adds a new branch point based on the value of a2.

The execution tree visually represents the various execution paths and decisions
taken during the symbolic execution of the POWER procedure with the
symbolic inputs a1 and a2. Each node represents a state in the execution
process, and the edges represent the flow of control. This tree allows us to
reason about the behavior of the code for different input combinations and
analyze its coverage and potential issues.
