
Background

BS7925-2 Standard for Software Component Testing

The objective of this Standard is to enable the measurement and comparison of testing performed on software components. This will enable users of this Standard to directly improve the quality of their software testing, and improve the quality of their software products.

BS 7925 - 2

Standard for Software Component Testing


Working Draft 3.3
Date: 28 April 1997

produced by the

British Computer Society
Specialist Interest Group in Software Testing
(BCS SIGIST)


Copyright Notice

This document may be copied in its entirety or
extracts made if the source is acknowledged.

 

 

This standard is available from the British Standards Institution (BS 7925-2)



       Contents

Foreword
Introduction
1          Scope
2          Process
3          Test Case Design Techniques
           3.1        Equivalence Partitioning
           3.2        Boundary Value Analysis
           3.3        State Transition Testing
           3.4        Cause-Effect Graphing
           3.5        Syntax Testing
           3.6        Statement Testing
           3.7        Branch/Decision Testing
           3.8        Data Flow Testing
           3.9        Branch Condition Testing
           3.10       Branch Condition Combination Testing
           3.11       Modified Condition Decision Testing
           3.12       LCSAJ Testing
           3.13       Random Testing
           3.14       Other Testing Techniques
4          Test Measurement Techniques
           4.1        Equivalence Partition Coverage
           4.2        Boundary Value Coverage
           4.3        State Transition Coverage
           4.4        Cause-Effect Coverage
           4.5        Syntax Coverage
           4.6        Statement Coverage
           4.7        Branch And Decision Coverage
           4.8        Data Flow Coverage
           4.9        Branch Condition Coverage
           4.10       Branch Condition Combination Coverage
           4.11       Modified Condition Decision Coverage
           4.12       LCSAJ Coverage
           4.13       Random Testing
           4.14       Other Test Measurement Techniques
Annex A    Process Guidelines
Annex B    Guidelines For Testing Techniques And Test Measurement
           B.1        Equivalence Partitioning
           B.2        Boundary Value Analysis
           B.3        State Transition Testing
           B.4        Cause Effect Graphing
           B.5        Syntax Testing
           B.6        Statement Testing And Coverage
           B.7        Branch/Decision Testing
           B.8        Data Flow Testing
           B.9 / B.10 / B.11      Condition Testing
           B.12       LCSAJ Testing
           B.13       Random Testing
           B.14       Other Testing Techniques
Annex C    Test Technique Effectiveness
Annex D    Bibliography
Annex E    Document Details

        



       Foreword


This working draft of the Standard replaces all previous versions.  The previous edition was working draft 3.2, dated 6 Jan 1997.



       Introduction


The history of the standard

A meeting of the Specialist Interest Group in Software Testing was held in January 1989 (this group later affiliated with the British Computer Society).  The meeting agreed that existing testing standards are generally good within the scope they cover, but that they stress the importance of good test case selection without being specific about how to choose and develop test cases.

The SIG formed a subgroup to develop a standard which addresses the quality of testing performed.  Draft 1.2 was completed by November 1990 and this was made a semi-public release for comment.  A few members of the subgroup trialled this draft of the standard within their own organisations.  Draft 1.3 was circulated in July 1992 (it contained only the main clauses) to about 20 reviewers outside of the subgroup.  Much of the feedback from this review suggested that the approach to the standard needed re-consideration.

A working party was formed in January 1993 with a more formal constitution.  This has resulted in Working Draft 3.3.

Aims of the standard

The most important attribute of this Standard is that it must be possible to say whether or not it has been followed in a particular case (i.e. it must be auditable).  The Standard therefore also includes the concept of measuring testing which has been done for a component as well as the assessment of whether testing met defined targets.

There are many challenges in software testing, and it would be easy to try and address too many areas, so the standard is deliberately limited in scope to cover only the lowest level of independently testable software.  Because the interpretation of and name for the lowest level is imprecise, the term "component" has been chosen rather than other common synonyms such as "unit", "module", or "program" to avoid confusion with these more common terms and remain compatible with them.

 



1     Scope


1.1     Objective

The objective of this Standard is to enable the measurement and comparison of testing performed on software components.  This will enable users of this Standard to directly improve the quality of their software testing, and improve the quality of their software products.

1.2     Intended audience

The target audience for this Standard includes:

-  testers and software developers;

-  managers of testers and software developers;

-  procurers of software products or products containing software;

-  quality assurance managers and personnel;

-  academic researchers, lecturers, and students;

-  developers of related standards.

 

1.3     Approach

This Standard prescribes characteristics of the test process.

The Standard describes a number of techniques for test case design and measurement, which support the test process.

1.4     What this Standard covers

1.4.1     Specified components.  A software component must have a specification in order to be tested according to this Standard.  Given any initial state of the component, in a defined environment, for any fully-defined sequence of inputs and any observed outcome, it shall be possible to establish whether or not the component conforms to the specification.

1.4.2     Dynamic execution.  This Standard addresses dynamic execution and analysis of the results of execution.

1.4.3     Techniques and measures.  This Standard defines test case design techniques and test measurement techniques.  The techniques are defined to help users of this Standard design test cases and to quantify the testing performed.  The definition of test case design techniques and measures provides for common understanding in both the specification and comparison of software testing.

1.4.4     Test process attributes.  This Standard describes attributes of the test process that indicate the quality of the testing performed.  These attributes are selected to provide the means of assessing, comparing and improving test quality.

1.4.5     Generic test process.  This Standard defines a generic test process. A generic process is chosen to ensure that this Standard is applicable to the diverse requirements of the software industry.

1.5     What this Standard does not cover

1.5.1     Types of testing.  This Standard excludes a number of areas of software testing, for example:

-  integration testing;

-  system testing;

-  user acceptance testing;

-  statistical testing;

-  testing of non-functional attributes such as performance;

-  testing of real-time aspects;

-  testing of concurrency;

-  static analysis such as data flow or control flow analysis;

-  reviews and inspections (even as applied to components and their tests).

 

A complete strategy for all software testing would cover these and other aspects.

1.5.2     Test completion criteria.  This Standard does not prescribe test completion criteria as it is designed to be used in a variety of software development environments and application domains.  Test completion criteria will vary according to the business risks and benefits of the application under test.

1.5.3     Selection of test case design techniques.  This Standard does not prescribe which test case design techniques are to be used.  Only appropriate techniques should be chosen and these will vary according to the software development environments and application domains.

1.5.4     Selection of test measurement techniques.  This Standard does not prescribe which test measurement techniques are to be used.  Only appropriate techniques should be chosen and these will vary according to the software development environments and application domains.

1.5.5     Personnel selection.  This Standard does not prescribe who does the testing.

1.5.6     Implementation.  This Standard does not prescribe how required attributes of the test process are to be achieved, for example, by manual or automated methods.

1.5.7     Fault removal.  This Standard does not address fault removal.  Fault removal is a separate process to fault detection.

1.6     Conformity

Conformity to this Standard shall be by following the testing process defined in clause 2.

1.7        Normative reference

The following standard contains provisions which, through reference in this text, constitute provisions of this Standard.  At the time of publication, the edition indicated was valid.  All standards are subject to revision, and parties to agreements based on this Standard are encouraged to investigate the possibility of applying the most recent edition of the standard listed below.  Members of IEC and ISO maintain registers of currently valid International Standards.

ISO 9001:1994, Quality systems - Model for quality assurance in design, development, production, installation and servicing.



2     Process


2.1        Pre-requisites

Before component testing may begin the component test strategy (2.1.1) and project component test plan (2.1.2) shall be specified.

2.1.1     Component test strategy

2.1.1.1  The component test strategy shall specify the techniques to be employed in the design of test cases and the rationale for their choice.  Selection of techniques shall be according to clause 3.  If techniques not described explicitly in this clause are used they shall comply with the 'Other Testing Techniques' clause (3.14).

2.1.1.2  The component test strategy shall specify criteria for test completion and the rationale for their choice.  These test completion criteria should be test coverage levels whose measurement shall be achieved by using the test measurement techniques defined in clause 4.  If measures not described explicitly in this clause are used they shall comply with the 'Other Test Measurement Techniques' clause (4.14).

2.1.1.3  The component test strategy shall document the degree of independence required of personnel designing test cases from the design process, such as:

a)         the test cases are designed by the person(s) who writes the component under test;

b)         the test cases are designed by another person(s);

c)         the test cases are designed by a person(s) from a different section;

d)         the test cases are designed by a person(s) from a different organisation;

e)         the test cases are not chosen by a person.

2.1.1.4  The component test strategy shall document whether the component testing is carried out using isolation, bottom-up or top-down approaches, or some mixture of these.

2.1.1.5  The component test strategy shall document the environment in which component tests will be executed.  This shall include a description of the hardware and software environment in which all component tests will be run, and any other software with which the component interacts when under test, including drivers, stubs and testing tools.

2.1.1.6  The component test strategy shall document the test process that shall be used for component testing.

2.1.1.7  The test process documentation shall define the testing activities to be performed and the inputs and outputs of each activity.

2.1.1.8  For any given test case, the test process documentation shall require that the following activities occur in the following sequence:

a)         Component Test Planning;

b)         Component Test Specification;

c)         Component Test Execution;

d)         Component Test Recording;

e)         Checking for Component Test Completion.

2.1.1.9  Figure 2.1 illustrates the generic test process described in clause 2.1.1.8.  Component Test Planning shall begin the test process and Checking for Component Test Completion shall end it; these activities are carried out for the whole component.  Component Test Specification, Component Test Execution, and Component Test Recording may however, on any one iteration, be carried out for a subset of the test cases associated with a component.  Later activities for one test case may occur before earlier activities for another.

2.1.1.10  Whenever an error is corrected by making a change or changes to test materials or the component under test, the affected activities shall be repeated.

 

 

Figure 2.1 Generic Component Test Process

2.1.2     Project component test plan

2.1.2.1  The project component test plan shall specify the dependencies between component tests and their sequence.  These shall be derived from the chosen approach to component testing (2.1.1.4), but may also be influenced by overall project management and work scheduling considerations.

2.2        Component test planning

2.2.1     The component test plan shall specify how the component test strategy (2.1.1) and project component test plan (2.1.2) apply to the given component under test.  This shall include specific identification of all exceptions to the component test strategy and all software with which the component under test will interact during test execution, such as drivers and stubs.

2.3        Component test specification

2.3.1     Test cases shall be designed using the test case design techniques selected in the test planning activity.

2.3.2     The specific test specification requirements for each test case design technique are defined in clause 3.  Each test case shall be specified by defining its objective, the initial state of the component, its input, and the expected outcome. The objective shall be stated in terms of the test case design technique being used, such as the partition boundaries exercised.

2.3.3     The execution of each test case shall be repeatable.

2.4        Component test execution

2.4.1     Each test case shall be executed and shall be repeatable.

2.5        Component test recording

2.5.1     The test records for each test case shall unambiguously record the identities and versions of the component under test and the test specification.  The actual outcome shall be recorded.  It shall be possible to establish that all specified testing activities have been carried out by reference to the test records.

2.5.2     The actual outcome shall be compared against the expected outcome.  Any discrepancy found shall be logged and analysed in order to establish where the error lies and the earliest test activity that should be repeated in order to remove the discrepancy in the test specification or verify the removal of the fault in the component.

2.5.3     The test coverage levels achieved for those measures specified as test completion criteria shall be recorded.

2.6        Checking for component test completion

2.6.1     The test records shall be checked against the previously specified test completion criteria.  If these criteria are not met, the earliest test activity that must be repeated in order to meet the criteria shall be identified and the test process shall be restarted from that point.

2.6.2     It may be necessary to repeat the Test Specification activity to design further test cases to meet a test coverage target.



3     Test Case Design Techniques


3.1     Equivalence Partitioning

3.1.1     Analysis.  Equivalence partitioning uses a model of the component that partitions the input and output values of the component.  The input and output values are derived from the specification of the component's behaviour.

The model shall comprise partitions of input and output values.  Each partition shall contain a set or range of values, chosen such that all the values can reasonably be expected to be treated by the component in the same way (i.e. they may be considered 'equivalent').  Both valid and invalid values are partitioned in this way.

3.1.2     Design.  Test cases shall be designed to exercise partitions.  A test case may exercise any number of partitions.  A test case shall comprise the following:

-  the input(s) to the component;

-  the partitions exercised;

-  the expected outcome of the test case.

 

Test cases are designed to exercise partitions of valid values, and invalid input values.  Test cases may also be designed to test that invalid output values cannot be induced.
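
By way of illustration only (not part of this Standard), the following Python sketch applies the technique to a hypothetical component grade(), assuming a specification that maps integer scores 0..49 to "fail" and 50..100 to "pass", and rejects all other values.  Each test case records its input, the partition exercised, and the expected outcome, as required above:

    # Illustrative sketch: hypothetical component and assumed specification.
    # 0..49 -> "fail", 50..100 -> "pass", any other integer -> ValueError.
    def grade(score: int) -> str:
        if not 0 <= score <= 100:
            raise ValueError("score out of range")
        return "pass" if score >= 50 else "fail"

    # One test case per partition: input, partition exercised, expected outcome.
    test_cases = [
        {"input": 30,  "partition": "valid 0..49",   "expected": "fail"},
        {"input": 75,  "partition": "valid 50..100", "expected": "pass"},
        {"input": -5,  "partition": "invalid < 0",   "expected": ValueError},
        {"input": 120, "partition": "invalid > 100", "expected": ValueError},
    ]

    for tc in test_cases:
        try:
            outcome = grade(tc["input"])
        except ValueError:
            outcome = ValueError
        assert outcome == tc["expected"], tc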

3.2     Boundary Value Analysis

3.2.1     Analysis.  Boundary Value Analysis uses a model of the component that partitions the input and output values of the component into a number of ordered sets with identifiable boundaries.  These input and output values are derived from the specification of the component's behaviour.

The model shall comprise bounded partitions of ordered input and output values.  Each partition shall contain a set or range of values, chosen such that all the values can reasonably be expected to be treated by the component in the same way (i.e. they may be considered 'equivalent').  Both valid and invalid values are partitioned in this way.  A partition's boundaries are normally defined by the values of the boundaries between partitions, however where partitions are disjoint the minimum and maximum values in the range which makes up the partition are used.  The boundaries of both valid and invalid partitions are considered.

3.2.2     Design.  Test cases shall be designed to exercise values both on and next to the boundaries of the partitions.  For each identified boundary three test cases shall be produced corresponding to values on the boundary and an incremental distance either side of it.  This incremental distance is defined as the smallest significant value for the data type under consideration.  A test case shall comprise the following:

-  the input(s) to the component;

-  the partition boundaries exercised;

-  the expected outcome of the test case.

 

Test cases are designed to exercise valid boundary values, and invalid input boundary values.  Test cases may also be designed to test that invalid output boundary values cannot be induced.
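
Continuing the illustrative grade() example (again, not part of this Standard): its partitions 0..49 and 50..100 give identified boundaries at 0, 49, 50 and 100, and for integer data the smallest significant increment is 1, so each boundary yields three test values:

    # Illustrative sketch: assumed integer boundaries for the
    # hypothetical grade() component described under 3.1.
    boundaries = [0, 49, 50, 100]

    def boundary_values(b: int, increment: int = 1):
        """The value on the boundary and one increment either side (3.2.2)."""
        return [b - increment, b, b + increment]

    for b in boundaries:
        print(b, "->", boundary_values(b))
    # 0   -> [-1, 0, 1]      (-1 lies in an invalid partition)
    # 49  -> [48, 49, 50]
    # 50  -> [49, 50, 51]
    # 100 -> [99, 100, 101]  (101 lies in an invalid partition)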

3.3     State Transition Testing

3.3.1     Analysis.  State transition testing uses a model of the states the component may occupy, the transitions between those states, the events which cause those transitions, and the actions which may result from those transitions.

The model shall comprise states, transitions, events, actions and their relationships.  The states of the model shall be disjoint, identifiable and finite in number.  Events cause transitions between states, and transitions can return to the same state where they began.  Events will be caused by inputs to the component, and actions in the state transition model may cause outputs from the component.

The model will typically be represented as a state transition diagram or a state table.

3.3.2     Design.  Test cases shall be designed to exercise transitions between states.  A test case may exercise any number of transitions.  For each test case, the following shall be specified:

-  the starting state of the component;

-  the input(s) to the component;

-  the expected outputs from the component;

-  the expected final state.

 

For each expected transition within a test case, the following shall be specified:

-  the starting state;

-  the event which causes transition to the next state;

-  the expected action caused by the transition;

-  the expected next state.

 

Test cases are designed to exercise valid transitions between states.  Test cases may also be designed to test that unspecified transitions cannot be induced.
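
By way of illustration only (not part of this Standard), the following sketch models a hypothetical door component as a state table and executes one test case comprising a sequence of transitions, specified as required above:

    # Illustrative sketch: state table {(state, event): (next state, action)}
    # for a hypothetical door component.
    STATE_TABLE = {
        ("closed", "open_cmd"):   ("open",   "motor_on"),
        ("open",   "close_cmd"):  ("closed", "motor_on"),
        ("closed", "lock_cmd"):   ("locked", "bolt_out"),
        ("locked", "unlock_cmd"): ("closed", "bolt_in"),
    }

    def step(state: str, event: str):
        """Apply one event; unspecified transitions cannot be induced."""
        try:
            return STATE_TABLE[(state, event)]
        except KeyError:
            raise ValueError(f"unspecified transition: {state} / {event}")

    # Test case: starting state, inputs, expected outputs, expected final state.
    test_case = {
        "start": "closed",
        "events": ["open_cmd", "close_cmd", "lock_cmd"],
        "expected_actions": ["motor_on", "motor_on", "bolt_out"],
        "expected_final": "locked",
    }

    state, actions = test_case["start"], []
    for event in test_case["events"]:
        state, action = step(state, event)
        actions.append(action)
    assert state == test_case["expected_final"]
    assert actions == test_case["expected_actions"]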

3.4     Cause-Effect Graphing

3.4.1     Analysis.  Cause-Effect Graphing uses a model of the logical relationships between causes and effects for the component.  Each cause is expressed as a condition, which is either true or false (i.e. a Boolean) on an input, or combination of inputs, to the component.  Each effect is expressed as a Boolean expression representing an outcome, or a combination of outcomes, for the component having occurred.

The model is typically represented as a Boolean graph relating the derived input and output Boolean expressions using the Boolean operators: AND, OR, NAND, NOR, NOT.  From this graph, or otherwise, a decision (binary truth) table representing the logical relationships between causes and effects is produced.

3.4.2     Design.  Test cases shall be designed to exercise rules, which define the relationship between the component's inputs and outputs, where each rule corresponds to a unique possible combination of inputs to the component that have been expressed as Booleans.  For each test case the following shall be identified:

-  Boolean state (i.e. true or false) for each cause;

-  Boolean state for each effect.
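
By way of illustration only (not part of this Standard), the following sketch enumerates the rules of a decision table for a hypothetical component with two Boolean causes and a single effect defined as cause1 AND NOT cause2; each rule is one unique combination of cause values, identifying the Boolean state of every cause and effect:

    # Illustrative sketch: decision table derived from a hypothetical
    # cause-effect graph with effect = cause1 AND NOT cause2.
    from itertools import product

    def effect(c1: bool, c2: bool) -> bool:
        return c1 and not c2      # the modelled cause-effect relationship

    rules = [
        {"cause1": c1, "cause2": c2, "effect": effect(c1, c2)}
        for c1, c2 in product([True, False], repeat=2)
    ]
    for rule in rules:            # one test case per rule
        print(rule)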

 

3.5     Syntax Testing

3.5.1     Analysis.  Syntax Testing uses a model of the formally-defined syntax of the inputs to a component.

The syntax is represented as a number of rules each of which defines the possible means of production of a symbol in terms of sequences of, iterations of, or selections between other symbols.

3.5.2     Design.  Test cases with valid and invalid syntax are designed from the formally defined syntax of the inputs to the component.

Test cases with valid syntax shall be designed to execute options which are derived from rules which shall include those that follow, although additional rules may also be applied where appropriate:

-  whenever a selection is used, an option is derived for each alternative by replacing the selection with that alternative;

-  whenever an iteration is used, at least two options are derived, one with the minimum number of iterated symbols and the other with more than the minimum number of repetitions.

 

A test case may exercise any number of options.  For each test case the following shall be identified:

-  the input(s) to the component;

-  option(s) exercised;

-  the expected outcome of the test case.

 

Test cases with invalid syntax shall be designed as follows:

-  a checklist of generic mutations shall be documented which can be applied to rules or parts of rules in order to generate a part of the input which is invalid;

-  this checklist shall be applied to the syntax to identify specific mutations of the valid input, each of which employs at least one generic mutation;

-  test cases shall be designed to execute specific mutations.

 

For each test case the following shall be identified:

-  the input(s) to the component;

-  the generic mutation(s) used;

-  the syntax element(s) to which the mutation or mutations are applied;

-  the expected outcome of the test case.
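
By way of illustration only (not part of this Standard), consider a hypothetical input syntax list := number { "," number }.  A minimal sketch of valid options covering the iteration rule, and of invalid cases generated from a checklist of generic mutations:

    # Illustrative sketch: valid options derived from the iteration rule.
    valid_options = [
        "7",        # minimum number of iterated symbols
        "7,42,5",   # more than the minimum number of repetitions
    ]

    # Hypothetical checklist of generic mutations; each application to
    # the valid input "7,42,5" yields a specific (invalid) mutation.
    mutations = {
        "substitute invalid symbol": lambda s: "x" + s[1:],  # digit -> letter
        "add extra separator":       lambda s: s + ",",      # trailing comma
        "omit mandatory symbol":     lambda s: "",           # empty input
    }

    invalid_cases = [
        {"input": mutate("7,42,5"), "mutation": name, "expected": "reject"}
        for name, mutate in mutations.items()
    ]
    for tc in invalid_cases:
        print(tc)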

 

3.6     Statement Testing

3.6.1     Analysis.  Statement testing uses a model of the source code which identifies statements as either executable or non-executable.

3.6.2     Design.  Test cases shall be designed to exercise executable statements.

For each test case, the following shall be specified:

-  the input(s) to the component;

-  identification of statement(s) to be executed by the test case;

-  the expected outcome of the test case.

 

3.7     Branch/Decision Testing

3.7.1     Analysis.  Branch testing requires a model of the source code which identifies decisions and decision outcomes.  A decision is an executable statement which may transfer control to another statement depending upon the logic of the decision statement.  Typical decisions are found in loops and selections.  Each possible transfer of control is a decision outcome.

3.7.2     Design.  Test cases shall be designed to exercise decision outcomes.

For each test case, the following shall be specified:

-  the input(s) to the component;

-  identification of decision outcome(s) to be executed by the test case;

-  the expected outcome of the test case.
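
By way of illustration only (not part of this Standard), a hypothetical component with a single decision has two decision outcomes, each exercised by one test case:

    # Illustrative sketch: one decision, two decision outcomes.
    def abs_value(x: int) -> int:
        if x < 0:          # decision
            return -x      # outcome taken when x < 0 is true
        return x           # outcome taken when x < 0 is false

    test_cases = [
        {"input": -3, "decision_outcome": "x < 0 true",  "expected": 3},
        {"input":  5, "decision_outcome": "x < 0 false", "expected": 5},
    ]
    for tc in test_cases:
        assert abs_value(tc["input"]) == tc["expected"]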

 

3.8     Data Flow Testing

3.8.1     Analysis.  Data Flow Testing uses a model of the interactions between parts of a component connected by the flow of data as well as the flow of control.

Categories are assigned to variable occurrences in the component, where the category identifies the definition or the use of the variable at that point.  Definitions are variable occurrences where a variable is given a new value, and uses are variable occurrences where a variable is not given a new value.  Uses can be further distinguished as either predicate uses (P-uses) or computation uses (C-uses).  P-uses occur in the predicate portion of a decision statement such as while..do, if..then..else, etc.  C-uses are all others, including variable occurrences on the right-hand side of an assignment statement, or in an output statement.

The control flow model for the component is derived and the location and category of variable occurrences on it identified.

3.8.2     Design.  Test cases shall be designed to execute control flow paths between definitions and uses of variables in the component.

Each test case shall include:

-   the input(s) to the component;

-   locations of relevant variable definition and use pair(s);

-   control flow subpath(s) to be exercised;

-   the expected outcome of the test case.
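
By way of illustration only (not part of this Standard), the sketch below categorises the variable occurrences in a small hypothetical component and executes one test case for a chosen definition-use pair:

    # Illustrative sketch: definitions, P-uses and C-uses in a
    # hypothetical component.
    def total_positive(values):
        total = 0                  # definition of total
        for v in values:           # definition of v
            if v > 0:              # P-use of v (predicate of a decision)
                total = total + v  # C-use of v and total; definition of total
        return total               # C-use of total

    # Test case for the pair: definition of total at initialisation,
    # C-use of total at the return, via the subpath that skips the loop.
    test_case = {
        "input": [],
        "du_pair": "total: definition (initialisation) -> C-use (return)",
        "subpath": "loop body not entered",
        "expected": 0,
    }
    assert total_positive(test_case["input"]) == test_case["expected"]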

 

3.9     Branch Condition Testing

3.9.1 Analysis.  Branch Condition Testing requires a model of the source code which identifies decisions and the individual Boolean operands within the decision conditions. A decision is an executable statement which may transfer control to another statement depending upon the logic of the decision statement. A decision condition is a Boolean expression which is evaluated to determine the outcome of a decision. Typical decisions are found in loops and selections.

3.9.2 Design.    Test cases shall be designed to exercise individual Boolean operand values within decision conditions.

For each test case, the following shall be specified:

-   the input(s) to the component;

-   for each decision evaluated by the test case, identification of the Boolean operand to be exercised by the test case and its value;

-   the expected outcome of the test case.

 

3.10   Branch Condition Combination Testing

3.10.1   Analysis.  Branch Condition Combination Testing requires a model of the source code which identifies decisions and the individual Boolean operands within the decision conditions. A decision is an executable statement which may transfer control to another statement depending upon the logic of the decision statement. A decision condition is a Boolean expression which is evaluated to determine the outcome of a decision. Typical decisions are found in loops and selections.

3.10.2   Design. Test cases shall be designed to exercise combinations of Boolean operand values within decision conditions.

For each test case, the following shall be specified:

-   the input(s) to the component;

-   for each decision evaluated by the test case, identification of the combination of Boolean operands to be exercised by the test case and their values;

-   the expected outcome of the test case.

 

3.11   Modified Condition Decision Testing

3.11.1   Analysis.  Modified Condition Decision Testing requires a model of the source code which identifies decisions, outcomes, and the individual Boolean operands within the decision conditions. A decision is an executable statement which may transfer control to another statement depending upon the logic of the decision statement. A decision condition is a Boolean expression which is evaluated to determine the outcome of a decision. Typical decisions are found in loops and selections.

3.11.2   Design. Test cases shall be designed to demonstrate that Boolean operands within a decision condition can independently affect the outcome of the decision.

For each test case, the following shall be specified:

-   the input(s) to the component;

-   for each decision evaluated by the test case, identification of the combination of Boolean operands to be exercised by the test case, their values, and the outcome of the decision;

-   the expected outcome of the test case.
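
By way of illustration only (not part of this Standard), the following sketch gives a test set for the hypothetical decision A and (B or C) in which each Boolean operand is shown to independently affect the decision outcome: the two test cases in each demonstrating pair differ only in the value of the operand concerned.

    # Illustrative sketch: MCDC test set for outcome = A and (B or C).
    def decision(a: bool, b: bool, c: bool) -> bool:
        return a and (b or c)

    # (A, B, C, expected outcome)
    mcdc_cases = [
        (True,  True,  False, True),   # with case 2 shows A independent
        (False, True,  False, False),
        (True,  False, True,  True),   # with case 4 shows C independent
        (True,  False, False, False),  # with case 1 shows B independent
    ]
    for a, b, c, expected in mcdc_cases:
        assert decision(a, b, c) == expected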

 

3.12   LCSAJ Testing

3.12.1   Analysis.  LCSAJ testing requires a model of the source code which identifies control flow jumps (where control flow does not pass to a sequential statement). An LCSAJ (Linear Code Sequence and Jump) is defined by a triple, conventionally identified by line numbers in a source code listing: the start of the linear code sequence, the end of the linear code sequence, and the target line to which control flow is transferred.

3.12.2   Design.  Test cases shall be designed to exercise LCSAJs.

For each test case, the following shall be specified:

- the input(s) to the component;

- identification of the LCSAJ(s) to be executed by the test case;

- the expected outcome of the test case.
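
By way of illustration only (not part of this Standard), the LCSAJs of the hypothetical four-line listing below can be enumerated as (start, end, target) triples:

    # Illustrative sketch: LCSAJ triples for the listing
    #   1  x = read()
    #   2  while x > 0:
    #   3      x = x - 1
    #   4  print(x)
    lcsajs = [
        (1, 2, 4),  # enter; loop condition false immediately; jump to 4
        (1, 3, 2),  # enter; condition true; body; jump back to 2
        (2, 2, 4),  # re-test condition; false; jump out to 4
        (2, 3, 2),  # re-test condition; true; body; jump back to 2
    ]
    # Input 0 exercises (1,2,4); input 1 exercises (1,3,2) then (2,2,4);
    # input 2 additionally exercises (2,3,2).
    print(lcsajs)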

 

3.13   Random Testing

3.13.1   Analysis.  Random Testing uses a model of the input domain of the component that defines the set of all possible input values.  The input distribution (normal, uniform, etc.) to be used in the generation of random input values shall be based on the expected operational distribution of inputs.  Where no knowledge of this operational distribution is available then a uniform input distribution shall be used.

3.13.2   Design.  Test cases shall be chosen randomly from the input domain of the component according to the input distribution.

A test case shall comprise the following:

-     the input(s) to the component;

-     the expected outcome of the test case.

The input distribution used for the test case suite shall also be recorded.
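
By way of illustration only (not part of this Standard), the sketch below draws test inputs from a uniform distribution over an assumed integer input domain; recording the seed with the suite keeps execution repeatable, as clause 2.3.3 requires:

    # Illustrative sketch: uniform random test case selection.
    import random

    INPUT_DOMAIN = range(-1000, 1001)  # assumed input domain
    SEED = 42                          # recorded with the test case suite

    def expected_outcome(x: int) -> int:
        """Assumed specification of the hypothetical component:
        clamp the input to the range 0..100."""
        return max(0, min(100, x))

    rng = random.Random(SEED)
    suite = [rng.choice(INPUT_DOMAIN) for _ in range(20)]

    for x in suite:
        # actual = component_under_test(x)   # hypothetical component call
        # assert actual == expected_outcome(x)
        print(x, "->", expected_outcome(x))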

 

3.14   Other Testing Techniques

Other test case design techniques may be used that are not listed in this clause.  Any alternative techniques used shall satisfy these criteria:

a)  The technique shall be available in the public domain and shall be referenced.

b)  The test case design technique shall be documented in the same manner as the other test case design techniques in clause 3.

c)  Associated test measurement techniques may be defined as described in clause 4.13.

 



4     Test Measurement Techniques


In each coverage calculation, a number of coverage items may be infeasible.  A coverage item is defined to be infeasible if it can be demonstrated to be not executable.  The coverage calculation shall be defined as either counting or discounting infeasible items - this choice shall be documented in the test plan.  If a coverage item is discounted justification for its infeasibility shall be documented in the test records.

In each coverage calculation, if there are no coverage items in the component under test, 100% coverage is defined to be achieved by one test case.
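
By way of illustration only (not part of this Standard), the coverage calculations in this clause all take the form Coverage = (N/T) * 100%; a minimal sketch incorporating the two rules above (documented discounting of infeasible items, and the no-coverage-items case):

    # Illustrative sketch of the generic coverage calculation.
    def coverage(exercised: int, total: int, infeasible: int = 0,
                 discount_infeasible: bool = False) -> float:
        """Coverage = (N/T) * 100%.  Discounting infeasible items is
        permitted only if documented in the test plan and records."""
        if discount_infeasible:
            total -= infeasible
        if total == 0:
            return 100.0  # no coverage items: achieved by one test case
        return exercised / total * 100.0

    print(coverage(7, 10))                       # 70.0
    print(coverage(7, 10, infeasible=3,
                   discount_infeasible=True))    # 100.0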

4.1     Equivalence Partition Coverage

4.1.1     Coverage Items.  Coverage items are the partitions described by the model (see 3.1.1).

4.1.2     Coverage Calculation.  Coverage is calculated as follows:

Equivalence Partition Coverage = (number of distinct partitions exercised / total number of partitions) * 100%

4.2     Boundary Value Coverage

4.2.1     Coverage Items.  The coverage items are the boundaries of partitions described by the model (see 3.2.1).  Some partitions may not have an identified boundary, for example, if a numerical partition has a lower but not an upper bound.

4.2.2     Coverage Calculation.  Coverage is calculated as follows:

Boundary Value Coverage = (number of distinct boundary values exercised / total number of boundary values) * 100%

where a boundary value corresponds to a test case on a boundary or an incremental distance either side of it (see 3.2.2).

4.3     State Transition Coverage

4.3.1     Coverage Items.  Coverage items are sequences of one or more transitions between states on the model (see 3.3.1).

4.3.2     Coverage Calculation.  For single transitions, the coverage metric is the percentage of all valid transitions exercised during test.  This is known as 0-switch coverage.  For sequences of n transitions, the coverage measure is the percentage of all valid sequences of n transitions exercised during test.  This is known as (n-1)-switch coverage.

4.4     Cause-Effect Coverage

4.4.1     Coverage Items.  Coverage items are rules, where each rule represents a unique possible combination of inputs to the component that have been expressed as Booleans.

4.4.2     Coverage Calculation.  Coverage is calculated as follows:

Cause-Effect Coverage = (number of rules exercised / total number of rules) * 100%

4.5     Syntax Coverage

No coverage measure is defined for syntax testing.

4.6     Statement Coverage

4.6.1     Coverage Items.  Coverage items are executable statements in the source code.

4.6.2     Coverage Calculation.  Coverage is calculated as follows:

Statement Coverage = (number of executable statements exercised / total number of executable statements) * 100%

4.7     Branch and Decision Coverage

4.7.1     Branch Coverage Items.  A branch is:

.  a conditional transfer of control from any statement to any other statement in the component;

.  an unconditional transfer of control from any statement to any other statement in the component except the next statement;

.  when a component has more than one entry point, a transfer of control to an entry point of the component.

 

An entry point is either the first statement of the component or any other statement which may be branched to from outside the component.

4.7.2     Branch Coverage Calculation.  Coverage is calculated as follows:

Branch Coverage = (number of branches exercised / total number of branches) * 100%

4.7.3     Decision Coverage Items.  Decision Coverage uses the model of the component described for Branch Testing in clause 3.7.1.  Coverage items are decision outcomes.

Decision Coverage is only defined for components with one entry point.

4.7.4     Decision Coverage Calculation.  Coverage is calculated as follows:

Decision Coverage = (number of decision outcomes exercised / total number of decision outcomes) * 100%

4.8     Data Flow Coverage

4.8.1     Coverage Items.  The coverage items are the control flow subpaths from a variable definition to the variable's corresponding p-uses, c-uses, or their combination.

4.8.2     Coverage Calculation.  For the purposes of these coverage calculations a definition-use pair is defined as a simple subpath between a definition of a variable and a use of that variable and coverage is calculated using the formula:

Coverage = (N/T) * 100%, where N and T are defined in the subsequent subclauses.

A simple subpath is a subpath through a component's control flow graph where no parts of the subpath are visited more than necessary.

4.8.2.1  All-definitions.  This measure is defined with respect to the traversal of the set of subpaths from each variable definition to some use (either p-use or c-use) of that definition.

N = Number of exercised definition-use pairs from distinct variable definitions
T = Number of variable definitions

4.8.2.2  All-c-uses.  This measure is defined with respect to the traversal of the set of subpaths from each variable definition to every c-use of that definition.

N = Number of exercised definition-c-use pairs
T = Number of definition-c-use pairs

4.8.2.3  All-p-uses.  This measure is defined with respect to the traversal of the set of subpaths from each variable definition to every p-use of that definition.

N = Number of exercised definition-p-use pairs
T = Number of definition-p-use pairs

4.8.2.4  All-uses.  This measure is defined with respect to the traversal of the set of subpaths from each variable definition to every use (both p-use and c-use) of that definition.

N = Number of exercised definition-use pairs
T = Number of definition-use pairs

4.8.2.5  All-du-paths.  This measure is defined with respect to the traversal of the set of subpaths from each variable definition to every use (both p-use and c-use) of that definition.

N = Number of exercised simple subpaths between definition-use pairs
T = Number of simple subpaths between definition-use pairs

4.9     Branch Condition Coverage

4.9.1     Coverage Items.  Branch Condition Coverage uses a model of the component described in clause 3.9.1. Coverage items are Boolean operand values within decision conditions.

Branch Condition Coverage is only defined for components with one entry point.

4.9.2     Coverage Calculation.  Coverage is calculated as follows:

Branch Condition Coverage = (number of Boolean operand values exercised / total number of Boolean operand values) * 100%

4.10   Branch Condition Combination Coverage

4.10.1   Coverage Items.  Branch Condition Combination Coverage uses a model of the component described in clause 3.10.1. Coverage items are unique combinations of the set of Boolean operand values within each decision condition.

Branch Condition Combination Coverage is only defined for components with one entry point.

4.10.2   Coverage Calculation.  Coverage is calculated as follows:

Branch Condition Combination Coverage = (number of Boolean operand value combinations exercised / total number of Boolean operand value combinations) * 100%

4.11   Modified Condition Decision Coverage

4.11.1   Coverage Items.  Modified Condition Decision Coverage uses a model of the component described in clause 3.11.1.  Coverage items are Boolean operand values within decision conditions.

Modified Condition Decision Coverage is only defined for components with one entry point.

4.11.2   Coverage Calculation.  Coverage is calculated as follows:

Modified Condition Decision Coverage = (number of Boolean operand values shown to independently affect their decision's outcome / total number of Boolean operands in decision conditions) * 100%

4.12   LCSAJ Coverage

4.12.1   Coverage Items.  Coverage items are LCSAJs for the component (see 3.12.1).

4.12.2   Coverage Calculation.  Coverage is calculated as follows:

LCSAJ Coverage = (number of LCSAJs exercised / total number of LCSAJs) * 100%

4.13   Random Testing

No coverage measure is defined for random testing.

 

4.14   Other Test Measurement Techniques

Other test measurement techniques may be used that are not listed in this clause.  Any alternative techniques used shall satisfy these criteria:

a)  The technique shall be available in the public domain and shall be referenced.

b)  The test measurement technique shall be documented in the same manner as the other test measurement techniques in clause 4.

c)  Associated test case design techniques may be defined as described in clause 3.13.


