


How To Calculate Defect Removal Efficiency

In my last post, Essential testing metrics, "Defect Removal Efficiency (DRE)" was identified as the most important measure of testing quality. Defect Removal Efficiency relates to the ability to remove defects introduced to an organisation by a project during the project life cycle.

At its simplest, DRE can be expressed as a percentage, where DRE = (total defects found during the project / total defects introduced by the project) x 100
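As a quick illustration, here is a minimal sketch of that calculation in Python (the function name and example figures are mine, purely illustrative):

```python
def dre_percentage(defects_found_during_project: int,
                   defects_introduced_by_project: int) -> float:
    """Defect Removal Efficiency expressed as a percentage."""
    if defects_introduced_by_project == 0:
        raise ValueError("Total defects introduced must be greater than zero")
    return (defects_found_during_project / defects_introduced_by_project) * 100

# Example: the project caught 170 of the 200 defects it introduced -> 85.0
print(dre_percentage(170, 200))
```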

How do we make up one's mind the total number of defects within a solution?

One of the most significant challenges in calculating a DRE percentage is determining the total number of defects introduced to the system by the project. There are a number of ways this can be determined –

  1. Defect Seeding – This is where defects are deliberately placed into code in order to determine the effectiveness of a testing programme. Defect Seeding has been used in a number of academic studies but is rarely used in real world testing, where timeframes and budgets are often already stretched; creating representative defects is a difficult and time consuming activity.

    The total number of defects in an application can then be extrapolated as: total defects found during testing x (total defects seeded / total seeded defects detected) – see the sketch after this list.

  2. Defect Estimation – This involves estimating the number of defects within a system based on previous deliverables and industry experience.  This is a technique that is unlikely to give a truly accurate defect count, and it will be of more value as an input into the initial Test Planning estimates.
  3. Defect Counting – This involves combining the "number of defects found during testing" with the "number of defects identified in Production traced back to the project".  This method will always give a lower value than the true number of defects introduced by the project as –
    1. Not all defects manifest themselves as failures in Production
    2. Cosmetic and Usability related defects are often less likely to be raised in Production.
    3. In order for the DRE metrics to be useful they should be derived as close as possible to when the system exited test, however many functions may not be executed in Production until the system has been live for a number of years.
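For the seeding approach, the extrapolation above can be sketched in a few lines of Python (function and variable names are my own, illustrative only):

```python
def estimate_total_defects(found_during_testing: int,
                           seeded_total: int,
                           seeded_detected: int) -> float:
    """Extrapolate the total defect count from a defect seeding exercise."""
    if seeded_detected == 0:
        raise ValueError("At least one seeded defect must be detected to extrapolate")
    return found_during_testing * (seeded_total / seeded_detected)

# Example: 90 real defects found, 20 defects seeded, 15 of the seeded defects detected
print(estimate_total_defects(90, 20, 15))  # -> 120.0 estimated defects in total
```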

What I would recommend in most instances is using "Defect Counting" and cutting off the Production defect count after the system has been live for three months. This should be sufficient for the majority of significant issues to be identified, while still providing the data within a relevant timeframe.
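A minimal sketch of the Defect Counting approach with that three-month cut-off, assuming each Production defect record simply carries the date it was raised (the field names are hypothetical):

```python
from datetime import date, timedelta

def dre_by_counting(test_defects: int,
                    production_defects: list[dict],
                    go_live: date,
                    cutoff_days: int = 90) -> float:
    """DRE using Defect Counting with a cut-off on the Production defect count."""
    cutoff = go_live + timedelta(days=cutoff_days)
    counted = sum(1 for d in production_defects if d["raised_on"] <= cutoff)
    total_introduced = test_defects + counted
    return (test_defects / total_introduced) * 100 if total_introduced else 0.0

# Example: 180 defects found in test, three raised in Production, one outside the window
production = [{"raised_on": date(2023, 2, 1)},
              {"raised_on": date(2023, 3, 15)},
              {"raised_on": date(2023, 9, 1)}]
print(round(dre_by_counting(180, production, go_live=date(2023, 1, 1)), 1))  # -> 98.9
```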

Normalising the defect information

In order to be effective it is important that defects are consistently raised and classified across the testing life cycle and production, this means –

  • Ensuring that defect severities across test are applied according to the definitions specified in the test strategy
  • If necessary, reclassifying Production defect priorities to ensure that they are consistent with defects raised in test (a simple mapping sketch follows this list)
  • Analysing any Production Change Requests to see if they are actually addressing defects
  • Often there can be a delay in raising Production defects, so there is value in talking to some of the key System Users to identify any problems that they have observed but not yet formally logged.
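As a minimal sketch of the reclassification step, assuming hypothetical priority labels and a dict-based defect record (the real mapping would come from your test strategy):

```python
# Illustrative mapping only - the actual values come from the test strategy definitions
PRODUCTION_PRIORITY_TO_TEST_SEVERITY = {
    "P1": "Critical",
    "P2": "High",
    "P3": "Medium",
    "P4": "Low",
}

def normalise_production_defect(defect: dict) -> dict:
    """Reclassify a Production defect so its severity matches the scale used in test."""
    defect["severity"] = PRODUCTION_PRIORITY_TO_TEST_SEVERITY.get(defect["priority"], "Medium")
    return defect

print(normalise_production_defect({"id": "PRD-101", "priority": "P2"}))
```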

Where were the defects introduced?

In order to measure the effectiveness of specific test teams or test phases it is necessary to determine where within the project life cycle defects were introduced; this requires a level of root cause analysis as to the likely cause of each defect. Defects are usually classified as being introduced in the following areas –

  • Requirements
  • Design
  • Build
  • Deployment to Production

For an iterative project it is also good practice to record the iteration in which the defects were introduced.

Where should defects be identified?

The V-model provides a good guide as to where within the project life cycle different classes of defects should be identified. I would normally use the following criteria:

| Defect introduced | Defect characteristics | Phase where defect should be identified |
| Requirements Phase | Requirements related | Requirements Inspection |
| Design Phase | Design related | Design Inspection |
| Build Phase | Functional defect - within a code component or between related code components | Unit Test |
| Build Phase | Integration between components within an application | Integration Test |
| Build Phase | Functional defects / standards / usability | System Test |
| Build Phase | Non-functional defects | Non-Functional Test Phases |
| Requirements and Design Phase | Business process defects | Acceptance Testing |
| Deployment to Production | n/a | Post-Deployment Testing |
For an iterative project, defects should be identified during the iteration in which they were introduced.
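The guide above can also be captured as a simple lookup, for example when tagging defects during root cause analysis (the labels are illustrative only, not from any particular tool):

```python
# Illustrative lookup from where a defect class was introduced to the phase
# that should have caught it, following the V-model table above
EXPECTED_DETECTION_PHASE = {
    "requirements": "Requirements Inspection",
    "design": "Design Inspection",
    "build-component": "Unit Test",
    "build-integration": "Integration Test",
    "build-functional": "System Test",
    "build-non-functional": "Non-Functional Test Phases",
    "business-process": "Acceptance Testing",
    "deployment": "Post-Deployment Testing",
}

def escaped_expected_phase(defect: dict) -> bool:
    """True if the defect was not found in the phase that should have caught it."""
    return defect["found_in"] != EXPECTED_DETECTION_PHASE[defect["classified_as"]]
```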

Calculating DRE for specific Testing and Inspection Phases

A "stage specific adding of DRE" can exist documented as "total number of defects found during a item phase / full of number of defects in the application at the start of the stage".

Some basic rules that should be applied –

  1. Most projects are delivered in an iterative fashion
    1. A test phase can only find defects that are actually in the solution when executed
    2. A test phase for a particular iteration however should still consider defects introduced by previous phases (Unit Test is an exception to this rule as it is usually phase specific)
  2. The non-functional test phases should only be expected to find non-functional defects within their area of focus (obviously knowledgeable non-functional testers may notice some functional defects, however this is not the prime purpose of their testing and should not count towards their DRE calculation)
  3. Functional test phases should not be expected to find non-functional defects.
  4. Functional test phases follow a solution maturity level as implied by the V-model; less mature test phases should not be expected to find defects belonging to higher phases (i.e. unit test would not be expected to discover business process defects)

Example Formula

Phase Specific DRE

This measures how effective a test phase is at identifying the defects that it is designed to capture

  1. DRE Requirements Inspection = (number of requirements related defects identified during requirements inspection) / (total number of requirements defects identified within the solution)
  2. DRE Design Inspection = (number of design and requirement related defects identified during design inspection) / (total number of design and requirement defects identified within the solution)
  3. DRE Unit Test = (number of unit test defects identified during unit test) / (total number of unit test defects identified within the solution)*
  4. DRE Integration Test = (number of integration defects identified during integration test) / (total number of integration test defects identified within the solution post-unit test)
  5. DRE System Test = (number of system test defects identified during system test) / (total number of system test defects identified within the solution post-integration test)
  6. DRE Acceptance Test = (number of acceptance test defects identified during acceptance test) / (total number of acceptance test defects identified within the solution post-system test)

*As unit test is frequently an informally recorded testing activity this metric may not be able to be derived, in which case other development quality metrics such as "defects/line of code" could be applied.
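A minimal sketch of the phase-specific calculation, assuming each defect record carries the phase it was found in and the class it was traced back to (field names are hypothetical):

```python
def phase_specific_dre(defects: list[dict], phase: str, defect_class: str) -> float:
    """Phase-specific DRE: share of defects of a given class that were found
    in the phase designed to catch them."""
    in_class = [d for d in defects if d["classified_as"] == defect_class]
    if not in_class:
        return 0.0
    caught = sum(1 for d in in_class if d["found_in"] == phase)
    return caught / len(in_class) * 100

# Example: 8 of the 10 integration defects in the solution were found in Integration Test
defects = ([{"found_in": "Integration Test", "classified_as": "integration"}] * 8 +
           [{"found_in": "System Test", "classified_as": "integration"}] * 2)
print(phase_specific_dre(defects, "Integration Test", "integration"))  # -> 80.0
```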

Overall DRE

This measures how effective a test phase is at capturing any residual defects within the application, irrespective of the phase that should have caught them. (As an example, Acceptance Testing is not specifically trying to find Unit Test defects, however a thorough testing plan will cover many paths through the functionality and should identify missed defects from other phases.)

  1. Overall DRE System Test = (number of defects identified during system test) / (total number of functional defects identified within the solution post-integration test)
  2. Overall DRE Acceptance Test = (number of defects identified during acceptance test) / (total number of functional defects identified within the solution post-system test)
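And a corresponding sketch for the overall figure, again with illustrative field and phase names:

```python
def overall_dre(defects: list[dict], phase: str, later_phases: list[str]) -> float:
    """Overall DRE for a phase: defects it found as a share of all functional defects
    still in the solution when it started (i.e. found in this phase or any later one)."""
    remaining = [d for d in defects if d["found_in"] in [phase] + later_phases]
    if not remaining:
        return 0.0
    found_here = sum(1 for d in remaining if d["found_in"] == phase)
    return found_here / len(remaining) * 100

# Example: System Test found 45 defects; 5 more surfaced afterwards -> 90.0
defects = ([{"found_in": "System Test"}] * 45 +
           [{"found_in": "Acceptance Test"}] * 3 +
           [{"found_in": "Production"}] * 2)
print(overall_dre(defects, "System Test", ["Acceptance Test", "Production"]))
```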

What is a good DRE Score?

An average DRE score is usually around 85% across a full testing programme, however with a thorough and comprehensive requirements and design inspection process this can be expected to lift to around 95%.


Source: https://www.equinox.co.nz/blog/software-testing-metrics-defect-removal-efficiency
