Fundamental rule: do not complicate the numbers. The more you
complicate them, the more confusion you create. We need to start with a
few basic numbers that reflect the speed, coverage, and efficiency of
testing. If all of these indicators move up, we can be confident that
testing efficiency is genuinely improving.
Every number collected from a project helps to fine-tune the processes in that project and across the company itself.
Test planning rate (TPR).
TPR = Total number of test cases planned / total person-hours spent on
planning. This number indicates how fast the testing team thinks
through, articulates, and documents the tests.
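As a worked illustration, a team that plans 240 test cases over 16 person-hours has a TPR of 15 cases per person-hour. A minimal Python sketch; the function name and figures are invented for illustration:

    def test_planning_rate(cases_planned, planning_person_hours):
        """TPR = test cases planned / person-hours spent on planning."""
        return cases_planned / planning_person_hours

    # Example: 240 test cases planned in 16 person-hours.
    print(test_planning_rate(240, 16))  # 15.0 cases per person-hour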
Test execution rate (TER).
TER = Total number of test cases executed / total person-hours spent on
execution. This indicates how fast the testers execute the planned tests.
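Execution speed follows the same arithmetic; a small sketch with invented figures:

    def test_execution_rate(cases_executed, execution_person_hours):
        """TER = test cases executed / person-hours spent on execution."""
        return cases_executed / execution_person_hours

    # Example: 300 test cases executed in 20 person-hours.
    print(test_execution_rate(300, 20))  # 15.0 cases per person-hour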
Requirements coverage (RC).
The ideal goal is 100% coverage, but it is very tough to say how many
test cases will cover 100% of the requirements. There is, however, a
simple range you must assume. If we test each requirement in just 2
different ways - 1 positive and 1 negative - we need 2N test cases,
where N is the number of distinct requirements. On average, the
requirements of most commercial applications can be covered with about
8N test cases. So the chances of achieving 100% coverage are high if
you try to test every requirement in 8 different ways, though not every
requirement needs the full eight-way approach.
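To make the 2N-to-8N range concrete, here is a quick sketch that estimates the test case budget for a given requirement count; the requirement count of 50 is purely illustrative:

    def estimated_test_cases(num_requirements, ways_per_requirement):
        """Rule of thumb from the text: N requirements * k ways each.
        k=2 (one positive, one negative) is the bare minimum;
        k=8 is a typical budget for commercial applications."""
        return num_requirements * ways_per_requirement

    n = 50  # illustrative number of distinct requirements
    print(estimated_test_cases(n, 2))  # 100 test cases, bare minimum
    print(estimated_test_cases(n, 8))  # 400 test cases, typical budget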
Planning Miss (PM).
PM = Number of ad hoc test cases framed at the time of execution /
Number of test cases planned before execution. This indicates whether
the testers are able to plan the tests from the documentation and their
level of understanding. This number must be as low as possible, though
it is very difficult to drive it all the way to zero.
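A minimal sketch with invented counts, expressing PM as a ratio:

    def planning_miss(adhoc_cases, planned_cases):
        """PM = ad hoc cases framed during execution / cases planned."""
        return adhoc_cases / planned_cases

    # Example: 30 ad hoc cases against 300 planned.
    print(planning_miss(30, 300))  # 0.1, i.e. a 10% planning miss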
Bug Dispute Rate (BDR).
BDR = Number of bugs rejected by the development team / Total number of
bugs posted by the testing team. A high number here leads to unwanted
arguments between the two teams.
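The same one-line calculation, with figures invented for illustration:

    def bug_dispute_rate(bugs_rejected, bugs_posted):
        """BDR = bugs rejected by dev team / total bugs posted by testers."""
        return bugs_rejected / bugs_posted

    # Example: 12 bugs rejected out of 150 posted.
    print(bug_dispute_rate(12, 150))  # 0.08, i.e. an 8% dispute rate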
There is a set
of metrics that reflect the efficiency of the development team, based on
the bugs found by the testing team. These metrics do not really reflect
the efficiency of the testing team, but without the testing team they
cannot be calculated at all. Here are a few of them.
Bug Fix Rate (BFR).
BFR = Total number of hours spent on fixing bugs / total number of bugs
fixed by the dev team. This is the average time per fix, so it indicates
how fast the developers fix bugs: the lower the number, the faster the team.
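A minimal sketch of the average-hours-per-fix calculation, with invented figures:

    def bug_fix_rate(fix_hours, bugs_fixed):
        """BFR = total hours spent fixing bugs / bugs fixed."""
        return fix_hours / bugs_fixed

    # Example: 90 person-hours spent fixing 60 bugs.
    print(bug_fix_rate(90, 60))  # 1.5 hours per bug on average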
Number of re-opened bugs.
This absolute number indicates how many potential bad fixes or
regression effects the development team has injected into the
application. The ideal goal for this is zero.
Bug Bounce Chart (BBC).
BBC is not just a number, but a line chart. On the X axis, plot the
build numbers in sequence; the Y axis shows how many New+ReOpen bugs
are found in each build. Ideally this graph should keep dropping toward
zero as quickly as possible. A swinging pattern, like a sinusoidal
wave, indicates that new bugs are being injected build over build due
to regression effects. After code freeze, product companies must keep a
keen watch on this chart.
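Here is a minimal matplotlib sketch of such a chart. The bug counts are invented for illustration; a healthy line trends to zero, while the small upward swings mimic regression effects:

    import matplotlib.pyplot as plt

    # Illustrative data: New+ReOpen bug counts for ten sequential builds.
    builds = list(range(1, 11))
    new_plus_reopen = [42, 35, 28, 31, 22, 26, 15, 18, 9, 4]

    plt.plot(builds, new_plus_reopen, marker="o")
    plt.xlabel("Build number (in sequence)")
    plt.ylabel("New + ReOpen bugs per build")
    plt.title("Bug Bounce Chart (BBC)")
    plt.show()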
Resource: QAmonitor