
BS7925-2 Annex A - Process Guidelines


Annex A
(informative)

Process Guidelines

The purpose of this annex is to provide guidance on the requirements specified in clause 2 of this Standard.

These guidelines are supported by the use of example documentation from a fictitious project, named EX. The example documentation provides a basic sample of the full set of documentation which conforms to this Standard. The technical content of the example documentation merely serves to illustrate one interpretation of this Standard and does not suggest that this interpretation is to be preferred over any other. Any set of documentation which meets the requirements of clause 2 of this Standard is acceptable.

The test documentation examples included are:

Component Test Strategy;

Project Component Test Plan;

Component Test Plan;

Component Test Specification;

Component Test Report.

The relationships between the different forms of example test documentation can be seen in the following Document Hierarchy:

    Component Test Strategy (in the project's Quality Plan)
                            |
                Project Component Test Plan
                            |
          Component Test Plan (one per component)
                            |
               Component Test Specification
                            |
                  Component Test Report

The Document Hierarchy shows that there is a single Component Test Strategy for Project EX (in this example documented within the project's Quality Plan) and a single Project Component Test Plan for the project. For each component there is a Component Test Plan and corresponding Component Test Specification and Component Test Report.

A.1 Pre-Requisites

Clause 2 of the Standard specifies requirements for the overall test process including component testing. The prerequisites for component testing are an overall Component Test Strategy and a project specific Project Component Test Plan.

A.1.1 Component Test Strategy

A.1.1.1 To comply with this Standard, component testing must be carried out within a pre-defined documented strategy. Many organisations will achieve this through a corporate or divisional Testing Manual or Quality Manual. In other cases, the Component Test Strategy could be defined for one project only and may be presented as part of a specific project Quality Plan.

A.1.1.2 This Standard does not prescribe that the component test strategy documentation need be a single document or a whole document. In the case where a Component Test Strategy is tailored to a specific project the component test strategy documentation could be incorporated into the project component test planning documentation.

A.1.1.3 This Standard requires that the Component Test Strategy is defined in advance but does not prescribe the format of component test strategy documentation.

A.1.1.4 As part of the test strategy, this Standard defines a generic test process comprising a series of test activities and requires this sequence of activities to be followed for a particular test case. This allows, for instance, incremental approaches such as planning, executing and recording the tests for one set of components before another; or planning, executing and recording the application of one test case design technique before another.

A.1.1.5 Further requirements on the management of the testing are given in ISO-9001. Guidance on setting up a test process is available in [IEEE 1008].

A.1.1.6 The form of the test completion criterion will typically be a target for test coverage.
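A coverage-based completion criterion can be checked mechanically. The following is a minimal sketch; the decision counts and the 90% target are illustrative values (the target matches the example strategy later in this annex), not requirements of the Standard:

```python
# Illustrative check of a coverage-based test completion criterion.
# The decision counts and the 90% target are hypothetical examples.

def decision_coverage(decisions_exercised: int, decisions_total: int) -> float:
    """Return decision coverage as a percentage."""
    if decisions_total == 0:
        return 100.0  # nothing to cover
    return 100.0 * decisions_exercised / decisions_total

def completion_criterion_met(coverage: float, target: float = 90.0) -> bool:
    """True if the measured coverage meets or exceeds the target."""
    return coverage >= target

coverage = decision_coverage(decisions_exercised=10, decisions_total=12)
print(f"Decision coverage: {coverage:.0f}%")                  # 83%
print("Criterion met:", completion_criterion_met(coverage))   # False
```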

A.1.1.7 The following example provides a Component Test Strategy for Project EX developing the EX system. For the purposes of the example, the architectural design process decomposes the system into functional areas each containing a number of components. The example Component Test Strategy is provided as a section in the project's Quality Plan.

 


(Example)

 

Project EX Quality Plan

Section 6

Component Test Strategy

This section provides the component test strategy for Project EX. Any exceptions to this strategy (for example: omitted or additional design or measurement techniques) must be documented in the Component Test Plan.

Exceptions to this strategy must be approved by the Project Manager and the Quality Manager. The use of any additional design or measurement techniques, where appropriate, is encouraged and can be done without specific approval.

1. Design Techniques: Component tests shall be designed using the following techniques:

- Equivalence Partitioning (EP, Std 3.1);

- Boundary Value Analysis (BVA, Std 3.2);

- Decision Testing (DT, Std 3.7).

Rationale: EP and BVA have proven cost-effective on previous releases. DT shall be used to complete coverage should EP and BVA fail to meet the coverage completion criteria.

2. Completion criteria: A minimum of 100% Equivalence Partition Coverage (EPC, Std 4.1), 100% Boundary Value Coverage (BVC, Std 4.2) and 90% Decision Coverage (DC, Std 4.7, 4.7.3 & 4.7.4) shall be achieved for each component test.

Rationale: Equivalence Partition and Boundary Value Coverage are used to ensure full application of the corresponding test case design techniques, while Decision coverage provides a useful white box check on the overall set of test cases.

3. Independence: No independence is required in the production of test plans and test specifications, or in the implementation of tests. All test documentation, including test records, must be reviewed by the member of the project to whom the originator of the work reports.

4. Approach: Components shall be tested in isolation using stubs and drivers in place of interfacing components.

5. Environment: Component tests shall be run in the developer's own workspace. All output shall be logged to file. The Coverage Tool(1) shall be used to analyse the test coverage of each test run.

6. Error Correction: Wherever possible, test cases for a component should be executed together in a single test run. The outcome of all test cases should then be analysed. If corrections are needed then the entire component test should be repeated.

7. Project Component Test Plan: A Project Component Test Plan shall be produced following completion of the architectural design of the EX system (Std 2.1.2).

8. Component Test Process: Items 9 to 13 below detail the component test process.

9. Component Test Plan: Individual component tests will be planned following completion of the design for the functional area which a related group of components comprises.

A Component Test Plan can be for a single component or a group of related components which comprise a functional area of the system. The Component Test Plan shall list the test case design techniques to be used when specifying tests for the component, the measurement techniques to be used when executing tests, stubs, drivers and specific test completion criteria.

Exceptions to this Component Test Strategy shall be explicitly identified and approved where approval is required by this Component Test Strategy.

Inputs: Architectural Design, Detailed Design, Project Component Test Plan, Quality Plan;

Outputs: Component Test Plan.

10. Component Test Specification: The component test specification shall provide a list of test cases annotated with the associated design elements which each test case exercises. This will help the related measurement criteria to be assessed.

Prior to test execution, the test specification shall be reviewed for completeness.

Inputs: Architectural Design, Detailed Design, Component Test Plan;

Outputs: Component Test Specification.

11. Component Test Execution: Test execution shall be prepared by coding the drivers and stubs specified in the test plan, then compiling and linking them with the component under test. The test driver shall include code to control tests and create a log file. Tests are then run.

The Coverage Tool(1) shall be used during test execution to analyse test coverage.

The objective of test execution shall be to execute all specified test cases. Test execution shall complete when either all test cases have been executed or an error is encountered which prevents continuation. Test execution should not be halted for minor errors (the error should be recorded and test execution continued).

Inputs: Component Test Plan, Component Test Specification, Component Code;

Outputs: Test outcomes, Test log file, Coverage analysis report, stub and driver code.

12. Component Test Recording: The test log file shall be examined by the tester to compare actual test outcomes with expected test outcomes. Differences shall be investigated; any due to software or test errors shall raise a fault report. Where a test is incomplete, this shall cause a regression in the test process. Where applicable, corrections shall be made to the component design and/or code as necessary to address fault reports raised during the Component Test Recording activity. The test process shall then be repeated.

A Component Test Record shall be produced each time a test is run, containing the version of component under test, version of the Component Test Specification, date and time of test, the number of test cases run, number of test discrepancies, coverage measurements and cross-references to any fault reports raised.

Inputs: Test Plan, Test Specification, Test log file, Coverage analysis report, Component Code;

Outputs: Component Test Record, Fault Reports.

13. Checking for Component Test Completion: The Component Test Record shall be marked to show whether the overall test has passed and completion criteria have been met.

Inputs: Component Test Plan (specifying completion criteria), Component Test Record, Fault Reports;

Outputs: Component Test Record (pass/fail).

Where coverage has not been achieved, the Component Test Specification will normally be extended with further test cases until the required coverage level is achieved. Exceptionally, with the approval of the Project Manager and the Quality Manager, the completion criteria in the Component Test Plan may be changed to the achieved level.

 

 

A.1.1.9 Notes:

(1) For the purposes of this example, a generic test coverage measurement tool has been used called "Coverage Tool". In practice, a test strategy would explicitly give the name of any tool used.
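The design techniques mandated by the strategy above can be illustrated in outline. This sketch assumes a single integer input whose valid partition is 1 to 7 (the same range used by the LOG 3 example later in this annex); the partitions and boundary values are derived mechanically from that range:

```python
# Sketch of Equivalence Partitioning and Boundary Value Analysis for a
# single integer input with valid partition 1..7 (illustrative only).

VALID_MIN, VALID_MAX = 1, 7

# Equivalence partitions: one valid class and two invalid classes.
partitions = {
    "valid": list(range(VALID_MIN, VALID_MAX + 1)),  # 1-7
    "below": f"< {VALID_MIN}",                       # invalid class
    "above": f"> {VALID_MAX}",                       # invalid class
}

# Boundary values: the edges of the valid partition, plus the first
# value on the invalid side of each edge.
valid_boundaries = [VALID_MIN, VALID_MAX]            # 1 and 7
invalid_boundaries = [VALID_MIN - 1, VALID_MAX + 1]  # 0 and 8

print("Valid boundaries:  ", valid_boundaries)
print("Invalid boundaries:", invalid_boundaries)
```

Covering every partition and boundary with at least one test case is what gives the 100% EPC and BVC figures required by the completion criteria.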

A.1.2 Project Component Test Planning

A.1.2.1 This Standard does not prescribe that the Project Component Test Plan documentation need be a single document or a whole document.

A.1.2.2 As a minimum, the Project Component Test Plan should identify any specific adaptations of the component test strategy and specify any dependencies between component tests.

A.1.2.3 Further guidance on test planning is available in [IEEE 829].

A.1.2.4 The following example provides a Project Component Test Plan for project EX. Strictly speaking, there are no dependencies between component tests because all components are tested in isolation. The hypothetical project EX desires to begin the integration of tested components before all component testing is complete. Thus the sequence of component testing is driven by the requirements of integration testing.

(Example)

Project EX

Project Component Test Plan

1. Dependencies: Strict adherence to the Component Test Strategy (isolation testing) removes any dependencies between component tests. Nevertheless, consideration for subsequent integration means that it will be most convenient to complete the component testing of some parts of the system before that of other parts.

The approach selected is to implement the kernel of each functional area of the system so that a minimal working thread can be established as early as possible.

The component test sequence will be:

- LOG 1 - LOG 6;

- REM 1 - REM 6 (timing critical);

- RES 1 - RES 4;

- MIS 1 - MIS 5.

Integration testing can now start in parallel with the remaining component tests. Remaining components will be tested in the sequence:

- RES 5 - RES 8;

- MIS 6 - MIS 10;

- LOG 7 - LOG 12;

- MIS 11 - MIS 20.

 

 

A.2 Component Test Planning

A.2.1 This Standard does not prescribe that the Component Test Plan documentation need be a single document or a whole document.

A.2.2 The form of the test completion criterion is not mandated. This will typically be a test coverage target which has been mandated by the Component Test Strategy.

A.2.3 Further guidance on test planning is available in [IEEE 829].

A.2.4 An example test plan for component LOG 3 of project EX is shown below. This could be contained in the same physical document as the previous document, the Project Component Test Plan for all components.

 

(Example)

Project EX

Component Test Plan

for

Component LOG 3

1. Design techniques: EP (Std 3.1), BVA (Std 3.2), DT (Std 3.7).

2. Measurement Techniques: EPC (Std 4.1), BVC (Std 4.2) and DC (Std 4.7.3, 4.7.4).

3. Completion Criteria:

- EPC 100% (Std 4.1);

- BVC 100% (Std 4.2);

- DC 100% (Std 4.7.3, 4.7.4).

Note: the Component Test Strategy requires only 90% Decision coverage, but this is a critical component(1).

4. Stubs: for components LOG 4 and LOG 5 (which are called by LOG 3)(2).

5. Driver: in place of component LOG 1 (which calls LOG 3)(2).

6. Exceptions: The raised completion criterion of 100% decision coverage does not require approval(1).

 

Exception approval - not applicable.

 

 

A.2.5 Notes:

(1) The example Test Strategy has stated that any additional coverage does not require explicit approval.

(2) The calling tree for functional area "LOG" is that "LOG 1" calls "LOG 2" and "LOG 3". "LOG 3" calls "LOG 4" and "LOG 5". As "LOG 2" is not called by "LOG 3", it is not required to test "LOG 3".
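The stub-and-driver arrangement in items 4 and 5 of the plan can be sketched as follows. The LOG components are hypothetical, so stand-in functions are used; the point is that LOG 4 and LOG 5 are replaced by stubs returning fixed, known values, and the test itself acts as the driver in place of LOG 1:

```python
# Sketch of isolation testing with stubs and a driver. All components
# here are hypothetical stand-ins; the real LOG components are not
# defined in the Standard.

def stub_log4(value):
    """Stub for LOG 4: returns a fixed, predictable result."""
    return value * 2

def stub_log5(value):
    """Stub for LOG 5: returns a fixed, predictable result."""
    return value + 1

def log3(value, log4=stub_log4, log5=stub_log5):
    """Stand-in for component LOG 3, which calls LOG 4 and LOG 5.
    The dependencies are injected so stubs can replace the real code."""
    return log4(value) + log5(value)

def driver_test():
    """Acts as the driver in place of LOG 1, which would call LOG 3."""
    outcome = log3(3)  # with the stubs: 3*2 + (3+1) = 10
    assert outcome == 10, "unexpected outcome from LOG 3 under test"
    return outcome

print("Driver result:", driver_test())  # Driver result: 10
```

Because the stubs' behaviour is fixed, any test discrepancy can be attributed to LOG 3 itself rather than to its collaborators.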

A.3 Test Specification

A.3.1 The test specification for a component may be in machine-readable form; some test automation tools employ scripting languages that may meet the requirements of this Standard.

A.3.2 An example Test Specification for component LOG 3 in project EX is given below. Note that the specification has been extended (version 2) to cater for gaps in the coverage of the initial specification and errors as reported in the test record (A.5).

(Example)

 

Project EX

Component Test Specification

for

Component LOG 3 (Issue A)(1)

 

 

Test design table

Test Objective: Input Validation
Initial state of component for each test case: start of new entry

Conditions | Valid Classes  | Tag | Invalid Classes | Tag | Valid Boundaries | Tag | Invalid Boundaries | Tag
-----------|----------------|-----|-----------------|-----|------------------|-----|--------------------|----
Inputs     | 1-7            | V1  | < 1             | X1  | 1                | VB1 | 0                  | XB1
           |                |     | > 7             | X2  | 7                | VB2 | 8                  | XB2
           |                |     | non-digit       | X3  |                  |     |                    |
Outcomes   | Beginner (1-5) | V2  |                 |     | 5                | VB3 |                    |
           | Advanced (6-7) | V3  |                 |     | 6                | VB4 |                    |

 

 

 

 

 

 

Test cases (2)

Test Case | Input Values | Expected Outcome | New Conditions Covered (cross-referenced to "Tag" in Test Design Table)
----------|--------------|------------------|------------------------
1         | 3            | Beginner         | V1, V2
2         | 7            | Advanced         | V3, VB2
3         | 1            | Beginner         | VB1
4         | 0            | Error message    | X1, XB1
5         | 9            | Error message    | X2
6         | 8            | Error message    | XB2
7         | A            | Error message    | X3
8         | 5            | Beginner         | VB3
9         | 6            | Advanced         | VB4

 

 

 

Additional/Altered Test cases (Test Specification Version 2) (3)

Test Case | Input Values        | Expected Outcome        | Reason
----------|---------------------|-------------------------|------------------
2         | 7                   | Experienced             | Test fault
9         | 6                   | Experienced             | Test fault
10        | <return> (no input) | Re-display entry screen | Decision Coverage

 

 

 

 

 

A.3.3 Notes:

(1) The tests are for Issue A of component LOG 3, a component which has not yet been issued, but will be given Issue A status when it has passed its component test.

(2) Test cases have covered all tags in the Test Design Table with at least one test case, so we have 100% coverage of partitions and boundaries.

(3) When the test cases were executed (see example Component Test Record in A.5.3), two test cases failed due to errors in the test cases (2, 9), where the wrong term for the expected outcome ("Advanced" instead of "Experienced") was used. Test specification version 2 corrected these test cases and added one further test case to achieve 100% decision coverage.
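The corrected (version 2) test cases above can be expressed as executable checks. The `validate` function below is a hypothetical implementation written to be consistent with the test design table (inputs 1 to 7 valid, Beginner for 1-5, Experienced for 6-7, an error message otherwise, and a re-displayed entry screen on empty input); it is not part of the Standard:

```python
# Hypothetical implementation of the LOG 3 input validation, written to
# match the corrected (version 2) test specification in this annex.

def validate(entry: str) -> str:
    if entry == "":                # <return> with no input (test case 10)
        return "Re-display entry screen"
    if not entry.isdigit():        # non-digit input (X3)
        return "Error message"
    value = int(entry)
    if value < 1 or value > 7:     # outside valid partition (X1, X2)
        return "Error message"
    return "Beginner" if value <= 5 else "Experienced"

# Version 2 test cases, as (input, expected outcome) pairs.
test_cases = [
    ("3", "Beginner"), ("7", "Experienced"), ("1", "Beginner"),
    ("0", "Error message"), ("9", "Error message"), ("8", "Error message"),
    ("A", "Error message"), ("5", "Beginner"), ("6", "Experienced"),
    ("", "Re-display entry screen"),
]

for entry, expected in test_cases:
    actual = validate(entry)
    assert actual == expected, f"case {entry!r}: expected {expected}, got {actual}"
print("All 10 test cases passed")
```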

A.4 Test Execution

A.4.1 Test execution and coverage measurement are often performed by tools. This Standard does not prescribe how such tools work.

A.5 Test Recording

A.5.1 In the following example Test Report, component LOG 3 of project EX fails the first execution of tests 1 to 9. Test cases 2 and 9 fail due to errors in the test specification. These are corrected and a further test case is specified to complete decision coverage.

A.5.2 After recording results each outcome is classified. Typically, one of the following situations will be recorded:

a) A test has been incorrectly performed and must be repeated;

b) A test has detected a fault in the component; the component must be changed, and the test repeated;

c) A test has detected a fault in the specification of the component; the specification of the component, the component and the test specification (possibly) must be changed, and the test repeated;

d) A test has detected a fault in the test specification; the test specification must be changed, and the test repeated;

e) A test executes and produces the expected outcome; further test cases are performed, if any remain.
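The five classifications above can be captured as an enumeration that drives the follow-up action. The names mirror list items a) to e); the structure itself is an illustrative assumption, not prescribed by the Standard:

```python
from enum import Enum

# The five outcome classifications a)-e) from A.5.2, with the follow-up
# action each implies (structure is illustrative, not mandated).

class Outcome(Enum):
    TEST_PERFORMED_INCORRECTLY = "a"  # repeat the test
    COMPONENT_FAULT = "b"             # change the component, repeat the test
    COMPONENT_SPEC_FAULT = "c"        # change spec, component, maybe tests
    TEST_SPEC_FAULT = "d"             # change the test specification, repeat
    EXPECTED_OUTCOME = "e"            # continue with any remaining cases

REPEAT_REQUIRED = {
    Outcome.TEST_PERFORMED_INCORRECTLY,
    Outcome.COMPONENT_FAULT,
    Outcome.COMPONENT_SPEC_FAULT,
    Outcome.TEST_SPEC_FAULT,
}

def must_repeat(outcome: Outcome) -> bool:
    """True if this classification forces the test to be repeated."""
    return outcome in REPEAT_REQUIRED

print(must_repeat(Outcome.TEST_SPEC_FAULT))   # True
print(must_repeat(Outcome.EXPECTED_OUTCOME))  # False
```

In the LOG 3 example that follows, test cases 2 and 9 fall under classification d): the fault lay in the test specification, which was corrected in version 2.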

 

A.5.3 It is possible that there is insufficient information to make the classification at this point in the process. In this instance, it may be necessary to extend the test specification and prepare and execute more test cases before deciding what to do. A record of all decisions made should be documented as part of the Test Report.

 

(Example)

Project EX

Component Test Report

for

Component LOG 3 (Issue A)(1)

 

Date: 17/1/1997 by D. Tester

Component: LOG 3, Issue A(1)

Component Test Specification LOG 3 Version 1

 

Test cases executed: 1, 2, 3, 4, 5, 6, 7, 8, 9.

Failed Test cases

Test Case | Expected Outcome | Actual Outcome
----------|------------------|---------------
2         | Advanced         | Experienced
9         | Advanced         | Experienced

 

Fault report TFR23 raised. Test cases 2 and 9 state incorrect expected outcome.

Decision Coverage achieved: 83%

 

 

Date: 20/1/1997 by D. Tester

Component: LOG 3, Issue A(1)

Component Test Specification LOG 3 Version 2(2)

Test cases executed: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.

All test cases matched expected outcome

Decision Coverage achieved: 100%

Test passed

 

 

A.5.4 Notes:

(1) The tests are for Issue A of component "LOG 3", a component which has not yet been issued, but will be given Issue A status when it has passed its component test.

(2) When the test cases were executed, two test cases failed due to errors in the test cases (2, 9). Test specification version 2 corrected these test cases (see example Test Specification in A.3.2) and added one further test case to achieve 100% decision coverage.

 

A.6 Checking for Test Completion

A.6.1 A check against the test completion criteria is mandatory at this point. If the criteria are not met, additional tests will normally be required; alternatively, by allowing iteration of Test Planning, the Standard implicitly permits the relaxation (or strengthening) of test completion criteria. Any changes to the test completion criteria should be documented.

A.6.2 If all test completion criteria are met then the component is released for integration.




© RuleWorks - All Rights Reserved