Dec 14, 2012

Test Cases/Scenarios for Web Site Cookie Testing:


1) Verify that no sensitive or personal data is stored in cookies.

2) Verify that if any personal data must be stored in cookies, it is stored in encrypted format.

3) Verify that there is no overuse of cookies on the site under test. Overusing cookies annoys users when the browser prompts for cookies too often, which can cost the site traffic and, eventually, business.

4) If the site uses cookies, verify how its major functionality behaves when cookies are disabled. There should not be any page crash due to disabled cookies. (Please make sure that you close all browsers and delete all previously written cookies before performing this test.)

5) Verify that when cookies are disabled, an appropriate message is displayed to the user while navigating the site, such as “For smooth functioning of this site, make sure that cookies are enabled in your browser”.

6) Verify that there is no page crash due to disabling cookies.
Note: Please make sure that you close all browsers and delete all previously written cookies before performing this test.

7) Verify that the web application writes cookies properly on different browsers, as intended, and that the site works properly using these cookies. Test the web application on the major browsers, such as Internet Explorer (various versions), Mozilla Firefox, Netscape, and Opera.

8) Verify that cookies written by one domain cannot be accessed by another domain.

9) Verify that corrupted cookies cannot be accessed by another domain.

Note: Corrupting a cookie is easy. You know where cookies are stored: manually edit a cookie in Notepad and change its parameters to some vague values, for example by altering the cookie's content, name, or expiry date, and then check the site's functionality. In some cases a corrupted cookie allows its data to be read by another domain. This should not happen with your web site's cookies.

10) Accept/reject some cookies: The best way to check web site functionality is not to accept all cookies. If the web application writes 10 cookies, randomly accept some and reject the others, say accept 5 and reject 5. To execute this test case, set the browser options to prompt whenever a cookie is being written to disk; on the prompt you can either accept or reject each cookie. Then try to access the major functionality of the web site and see whether pages crash or data gets corrupted.

11) Delete cookies: Allow the site to write its cookies, close all browsers, and manually delete all cookies for the web site under test. Then access the web pages and check their behavior.

12) Checking the deletion of cookies from your web application page: Sometimes a cookie written by a domain, say rediff.com, may be deleted by the same domain but by a different page under that domain. This is the general case when testing an 'action tracking' web portal: an action tracking or purchase tracking pixel is placed on the action web page, and when the user performs the action or purchase, the cookie written to disk is deleted to avoid logging multiple actions from the same cookie. Check that reaching your action or purchase page deletes the cookie properly and that no further invalid actions or purchases are logged from the same user.

13) If the web application uses cookies to maintain a user's logged-in state, log in with some username and password. In many cases you can see the logged-in user ID parameter directly in the browser address bar. Change this parameter to a different value, say, if the previous user ID is 456, make it 452, and press Enter. A proper access-denied message should be displayed, and the user should not be able to see another user's account.

14) When testing an online shopping portal, verify that when the user reaches the final order summary page, the cookie of the previous page, i.e. the shopping cart page, is deleted properly.

15) Verify that credit card numbers are not stored in cookies, not even in encrypted form. (A small automation sketch for checks like this follows the list.)
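As an illustration of how some of these checks might be automated, here is a minimal Selenium WebDriver sketch in Java (the same tool discussed later in this post). The site URL and the keyword list are hypothetical assumptions for demonstration, not part of the original checklist; a real test would match against the actual sensitive fields of the application under test.

import org.openqa.selenium.Cookie;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CookieInspection {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://www.example.com"); // hypothetical site under test

            // Test cases 1, 2 and 15: no cookie value should contain
            // plain-text sensitive data such as passwords or card numbers.
            for (Cookie cookie : driver.manage().getCookies()) {
                String value = cookie.getValue().toLowerCase();
                if (value.contains("password") || value.contains("card")) {
                    System.out.println("Suspicious cookie: " + cookie.getName());
                }
            }

            // Test case 11: delete all cookies, then re-check page behavior.
            driver.manage().deleteAllCookies();
            driver.navigate().refresh();
        } finally {
            driver.quit();
        }
    }
}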

Resource: T4Testing

Different methods to locate UI Elements (WebElements) or Object Recognize Methods:


There are different methods to locate UI elements (WebElements) or to recognize objects, which are as follows:

· By ID
· By Tag Name
· By Class Name
· By Link Text
· By Name
· By Partial Link Text
· By XPATH

By ID:
This method is the most efficient and most commonly used way to locate elements, because an element's id, when present, must be unique on the page.
Let's take Google as an example: suppose we have to find/locate the Google search box. Use the Firebug add-on for Mozilla Firefox, with which you can easily inspect the element's HTML, which looks like this:

<td id="gs_tti0" class="gsib_a">
The id of the Google search box is gs_tti0.
Example of how to find an element using its id:
WebElement element = driver.findElement(By.id("gs_tti0"));

By Tag Name:
To find the tag name of an element, use Firebug.
Let's take the Google example again: suppose we have to find/locate the "Google Search" button via its tag name. Use Firebug to inspect the element's HTML, which looks like this:

<input type="submit" onclick="this.checked=1" name="btnK" value="Google Search"/>
The tag name of the "Google Search" button here is input.
Example of how to find an element using its tag name:
WebElement element = driver.findElement(By.tagName("input"));

By Class Name:

There may be many elements associated with the same class name, so finding multiple elements becomes the more practical option than finding just the first one.
Example of how to find elements that look like this:
<div class="tsf-p" style="position:relative">
Here tsf-p is the class name.
List<WebElement> elements = driver.findElements(By.className("tsf-p"));
Remember to use the findElements method instead of findElement when finding multiple elements.
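For completeness, a short usage sketch (assuming the same driver instance as in the other examples): findElements returns a java.util.List, so the matches can be counted and iterated.

List<WebElement> elements = driver.findElements(By.className("tsf-p"));
System.out.println("Matching elements: " + elements.size()); // may be zero
for (WebElement e : elements) {
    System.out.println(e.getTagName()); // print each match's tag
}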
By Link Text:
By using the By.linkText method of the By class, you can find a link element with matching visible text.
Let's take the Google example: suppose you want to find/locate the "Advertising Programs" link via its link text.
Example of how to find a link element using linkText:
WebElement element = driver.findElement(By.linkText("Advertising Programs"));

By Name:

By using the By.name method of the By class, you can find an element via the name attribute of the element.
Example of how to find an element that looks like this:
<input type="submit" onclick="this.checked=1" name="btnK" value="Google Search">
Here btnK is the name.
List<WebElement> elements = driver.findElements(By.name("btnK"));

By Partial Link Text:

By using the By.partialLinkText method of the By class, you can find a link element with partially matching visible text.
Let's take the Google example: suppose you want to find/locate the "Advertising Programs" link via partialLinkText. For this you can use:
WebElement element = driver.findElement(By.partialLinkText("Advertising"));
 
By XPATH:
XPath is a locator: a unique address that identifies each and every element. WebDriver uses a browser's native XPath capabilities wherever possible. To find the XPath of an element, use the Firebug and FirePath add-ons for Mozilla Firefox.
Let's take the Google example again: suppose we have to find/locate the Google search box using XPath. Inspect the element with Firebug and copy the XPath from the XPath bar of the FirePath tab.

Example of how to find an element using XPath:
WebElement element = driver.findElement(By.xpath("//*[@id='gs_tti0']"));
Note: Don't copy the dot from the XPath bar.
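To tie these locator examples together, here is a self-contained sketch of a complete WebDriver session. It assumes Firefox (the browser used with Firebug above), and the Google locators shown earlier are illustrative only, since Google's page structure may change over time.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LocatorDemo {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://www.google.com");

            // By ID: the search box container from the By ID example.
            WebElement byId = driver.findElement(By.id("gs_tti0"));

            // By Name: the "Google Search" button from the By Name example.
            WebElement byName = driver.findElement(By.name("btnK"));

            // By XPath: the same search box, located via its id attribute.
            WebElement byXpath = driver.findElement(By.xpath("//*[@id='gs_tti0']"));

            System.out.println(byId.getTagName() + " / "
                    + byName.getAttribute("value") + " / " + byXpath.getTagName());
        } finally {
            driver.quit();
        }
    }
}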

Dec 9, 2012

Metrics for Software Testing: Managing with Facts, Part 4: Product Metrics

Introduction

In the previous article in this series, we moved from a discussion of process metrics to a
discussion of how metrics can help you manage projects. I talked about the use of project metrics
to understand the progress of testing on a project, and how to use those metrics to respond and
guide the project to the best possible outcome. We looked at the way to use project metrics, and
how to avoid the misuse of these metrics.

In this final article in the series, we’ll look at one more type of metric. In this article, we examine
product metrics. Product metrics are often forgotten, but having good product metrics helps you
understand the quality status of the system under test. This article will help you understand how
to use product metrics properly. I’ll also offer some concluding thoughts on the proper use of
metrics in testing, as I wind up this series of articles.

The Uses of Product Metrics
As I wrote above, product metrics help us understand the current quality status of the system
under test. Good testing allows us to measure the quality and the quality risk in a system, but
we need proper product metrics to capture those measures. These product metrics provide the
insights to guide where product improvements should occur, if the quality is not where it should
be (e.g., given the current point on the schedule). As mentioned in the first article in this series,
we can talk about metrics as relevant to effectiveness, efficiency, and elegance.
Effectiveness product metrics measure the extent to which the product is achieving desired levels
of quality. Efficiency product metrics measure the extent to which a product achieves that desired
level of quality in an economical fashion. Elegance product metrics measure the extent to
which a product effectively and efficiently achieves those results in a graceful, well-executed
fashion.
In spite of the usefulness of product metrics, these are not always measured during testing. All
too often, test professionals rely on project metrics. As discussed in the previous article, project
metrics can tell us whether the testing is on target to achieve the plan. However, they cannot
reliably tell us about the quality of the system.
Suppose that I give you the following information:
· 95% of the tests have been run.
· 90% of the tests have passed.
· 5% of the tests have failed.
· 4% of the tests are ready to run.
· 1% of the tests are blocked.
Assume further that we are right on schedule in terms of the test execution schedule. Does this
tell us good news or bad news? It’s not possible to say. If the 10% of the tests which are failed,
queued up for execution, or blocked are relatively unimportant, and the defects associated with
the 5% of failed tests are also unimportant, this could be good news. However, if some of those
tests relate to important test conditions, or if the defects associated with the failed tests relate to
key quality risks for the product, then we could be in serious trouble, especially given that very
little time remains.
So, we need product metrics to tell us, throughout test execution, whether the quality is on track
for a successful delivery. After release of the software, we need product metrics to verify the
conclusions we reached during test execution. We need to include those product metrics in our
testing dashboards. This allows us to balance our project metrics, as discussed in the previous
article, giving ourselves and other project participants and stakeholders a complete view of
project status.
Testing product metrics are typically focused on the quality of the system under test. So, it’s
important to remember—and to remind everyone who sees these metrics—that the role of testing
is to measure quality, not to directly improve quality. The product metrics reflect the totality of
the software process’ quality capabilities and the entire project team’s efforts towards quality.
The software process and its quality capabilities are largely determined by management, and the
test team cannot control the behavior of others on the team with respect to quality, so product
metrics measure attributes outside the control of testing. So, as I’ve mentioned throughout this
series, such metrics should not be used to reward or punish teams or individuals, especially the
testers. If these product metrics are misused, people will find ways to distort the metrics in order
to maximize rewards or minimize punishments. The product metrics will then give a distorted—
typically an overly optimistic—view of product quality status, robbing the project team of the
ability to manage the quality of the product. Disastrous release decisions may ensue.

Developing Good Product Metrics
In the first article in this series, I explained a process for defining good metrics. Let’s review that
process here, with an emphasis on product metrics:
1. Define test coverage and quality-related objectives for the product.
2. Consider questions about the effectiveness, efficiency, and elegance with which the
product is achieving those objectives.
3. Devise measurable metrics, either direct or surrogate, for each effectiveness, efficiency,
and elegance question.
4. Determine realistic goals for each metric, such that we can have a high level of
confidence in the quality and test coverage for the product prior to release.
5. Monitor progress towards those goals, determining product status, and making test and
project control decisions as needed to optimize product quality and test coverage
outcomes.
The typical objectives for test coverage and quality for a product vary, but often include ensuring
complete coverage of the requirements.1 A relevant effectiveness question here is: Have we built
a system that fulfills the specified requirements? Let's look at a metric related to requirements
coverage to illustrate the process.

1 Analytical requirements based test strategies have their strengths and weaknesses, and are best
used in a blend with other test strategies such as risk based and reactive. See my book Advanced
Software Testing: Volume 2 for more information on test strategies.
When using an analytical requirements based test strategy, every requirement should have one or
more tests created for it during test design and implementation, with bi-directional traceability
maintained between the tests and the requirements so that complete coverage can be assured.
During test execution, we run these tests, and we can report the results in terms of requirements
fulfilled (i.e., all the tests pass) and unfulfilled (i.e., one or more tests fail) using the traceability.
What are realistic goals for requirements testing that will ensure adequate quality and test
coverage? When following a requirements based test strategy, every requirement should be tested
prior to release. It’s up to business stakeholders to determine whether some of those requirements
can be unfulfilled (i.e., tested with known failures) when the product is released. Of course,
releasing a product with unfulfilled requirements, even the less important requirements,
compromises the quality of the product. I will return to the issue of measuring adequate quality
towards the end of this section.

Requirements Area     # Reqs   % Tested   % Pass   % Fail   % Block
Browsing                  57         56       49        7         2
Checkout                  28         96       96        0         0
Store Management          77         25       20        5         0
Performance               15        100      100        0         0
Reliability               12          0        0        0         8
Security                  22          0        0        0         0
Usability/UI              27        100       48       52         0
Table 1: Requirements Coverage Table
We can use a metrics analysis such as that shown in Table 1 to monitor progress towards these
goals of complete testing and fulfillment of the requirements. Based on such a table, we can
understand the quality of the product and make test and project control decisions to help us
achieve the best possible test coverage and quality outcomes.2
In Table 1, you see a detailed table that shows where quality stands on the major requirements
groups for an e-commerce site, and the status of each requirement in each group. Note that this is
not showing test counts; the status of each test has been traced back to the associated
requirement. Based on this analysis, the status of each requirement is determined as follows:
· If all of the tests associated with the requirement have been run, that requirement is
classified as tested.
· If all of the tests associated with the requirement have been run and have passed, that
requirement is also classified as passed.
· If all of the tests associated with the requirement have been run, but one or more have
failed, that requirement is instead classified as failed.
· If any of the tests associated with a requirement are blocked (e.g., by known defects, test
environment problems, staff shortages, etc.), that requirement is classified as blocked.

2 Attentive readers will notice that this table is a more detailed version of a requirements
coverage table shown in the first article in this series.
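As a minimal sketch (not from the article), these four classification rules can be expressed in code, assuming each requirement is traced to the statuses of its associated tests; the enum and class names here are illustrative.

import java.util.List;

enum TestStatus { PASSED, FAILED, BLOCKED, NOT_RUN }
enum RequirementStatus { PASSED, FAILED, BLOCKED, NOT_TESTED }

class RequirementClassifier {
    // Apply the classification rules from the bullet list above.
    static RequirementStatus classify(List<TestStatus> tracedTests) {
        if (tracedTests.contains(TestStatus.BLOCKED)) {
            return RequirementStatus.BLOCKED;    // any blocked test blocks the requirement
        }
        if (tracedTests.contains(TestStatus.NOT_RUN)) {
            return RequirementStatus.NOT_TESTED; // not all associated tests have been run
        }
        if (tracedTests.contains(TestStatus.FAILED)) {
            return RequirementStatus.FAILED;     // all run, but one or more failed
        }
        return RequirementStatus.PASSED;         // all run and all passed
    }
}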
Based on these classifications, what can we conclude from this table, and what control actions
might we need to take?
· Browsing has only been partially tested, and a significant percentage (about 10%) of the
requirements has either failed or is blocked from testing. Browsing being a major feature
for e-commerce sites, we can conclude that many problems remain with the quality of our
site. We will need a much higher level of confidence in this feature before we release, so
more testing and defect repair is required.
· Checkout, another major quality attribute and typically a more complex feature to
implement, is almost completely tested, and all of the requirements tested so far work.
We can be quite confident that this feature will be ready for release soon.
· Store Management, like Browsing and Checkout, is a major feature, because without it
staff cannot put items in the store, offer discounts, manage inventory, and the like. For
this feature, a limited amount of testing has been completed, and one-fifth of the
requirements that have been tested do not work properly. This feature area needs a lot
more attention, both from the testers and from developers.
· Performance is an important non-functional attribute for e-commerce sites, and here we
can feel quite confident. Testing is complete, and there are no problems. We can feel
confident of good performance in the field.
· Reliability is another important non-functional attribute for e-commerce sites, but
unfortunately we have not completed any reliability testing. Most of the reliability
requirements cannot be tested, due to blocking issues. These blocking issues probably
need management attention, as well as individual contributor attention, to resolve.
· Security is likewise an important attribute for e-commerce sites, and again we have cause
for concern here. No requirements have yet been tested. The test manager will need to
have a good explanation for why this is the case.
· Usability and the User Interface, another important non-functional attribute for e-commerce
sites, are likewise in a troubled state. We have tested every requirement, but
over half of them don't work properly. Obviously, at this time we cannot have any
confidence that users can understand, operate, or enjoy our site.
Notice that, because this table is reporting on requirements status, we can translate these metrics
directly into the level of confidence we can have in the system. When test counts are reported—
e.g., how many test cases have been run, passed, and failed—the relationship to the actual
requirements is unclear, and therefore it’s hard to know what level of confidence we should have.
Are these surrogate metrics or direct metrics? Well, both. From a verification perspective—i.e.,
does the system satisfy specified requirements?—they are direct metrics, and very good ones.
From a validation perspective—i.e., will the system satisfy customers, users, and other
stakeholders in actual field conditions when used to solve real-world problems?—they are
indirect metrics. What does this mean?
On the one hand, if the requirements are good, they should reflect the needs and desires of the
stakeholders. Complete fulfillment of these requirements says something important and positive
about the quality of the product. On the other hand, as discussed in the first article in this series,
we are talking ultimately about measuring our level of confidence in the system. I wrote in that
article that confidence is often measured through surrogate metrics of coverage, ideally multidimensional
metrics that include code coverage, design coverage, configuration coverage, test
design technique coverage, requirements coverage, and more.
The metrics in Table 1 are very useful, especially if they highlight problems. In other words, we
can justly and confidently feel quite concerned about the prospects for our product and the
quality of it if the requirements are known to be unfulfilled. However, the level of confidence we
can have in positive measurements of requirements coverage alone is more limited. You cannot
directly predict the satisfaction of customers, users, and other stakeholders based only on
requirements coverage. In practice, it has proven difficult to develop direct, objective metrics
that will predict satisfaction for software and systems. However, with a good set of multidimensional
coverage metrics, we can have a higher level of confidence in eventual satisfaction.

Product Risk Metrics
We’ve considered product metrics that apply for requirements based testing. How about other
testing strategies?
In the case of risk based testing, the objective is typically to reduce product quality risk to an
acceptable level. Here are two questions we might consider:
· How effectively are we reducing quality risk overall?
· For each quality risk category, how effectively are we reducing quality risk?
The first question is the broader perspective, a question that senior managers and executives
might ask to decide whether to classify the product as on track for a successful release or headed
for trouble. The second question is the deeper perspective, a question that the project
management team might ask to determine which risk categories are progressing properly and
which need project control actions. Let’s start with the first question.
When using a risk based test strategy, each quality risk item deemed sufficiently important for
testing should have one or more tests created for it during test design and implementation. Bidirectional
traceability is maintained between the tests and the risk items so that coverage can be
assured and measured. Furthermore, the number of tests created for each risk item is determined
by the level of risk associated with that risk item.
During test execution, the tests are run and defects are reported. We need bi-directional
traceability between not only the test results and the risk items, but also between the defects and
the risk items. In this way, we can report which risks are fully mitigated (i.e., all tests are run and
passed), which risks are partially mitigated (i.e., some or all tests are run, and some tests fail or
some defects are known), and which risks are unmitigated (i.e., no failed tests or defects are
known, but tests remain to be run).3
3 For more information on risk based testing, how to perform risk based testing, and
how to report results for risk based testing, see my books Managing the Testing Process,
3rd ed. and Advanced Software Testing: Volume 2.
Figure 1: Quality Risk Status (Early Test Execution)
Figure 1 shows a way of graphically displaying all this information, across all the risk items. The
region in green represents risks for which all tests were run and passed and no must-fix bugs
were found. The region in red represents risks for which at least one test has failed or at least one
must-fix bug is known. The region in black represents other risks, which have no known must-fix
bugs but still have tests pending to run.
It’s important to mention that the area associated with a given risk is determined by the level of
risk; i.e., the risks are weighted by level of risk. In other words, a risk item with a higher level of
risk will show more pixels in Figure 1 (in whichever region it is classified), while a risk item with
a lower level of risk will show fewer pixels. This property of risk weighting is critical, because it
means that the pie chart gives a true representation of the amount of residual risk, across all the
risk items. Without such risk weighting, the low-level risks would be exaggerated relative to the
high-level risks.
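A brief illustrative sketch (not code from the article) of this risk weighting: each risk item contributes its level of risk as a weight, so each region's share of the pie reflects residual risk rather than a simple count of items.

import java.util.List;

enum RiskState { MITIGATED, FAILED, PENDING } // the green, red, and black regions

class RiskItem {
    final int riskLevel;   // higher number = higher level of risk
    final RiskState state;
    RiskItem(int riskLevel, RiskState state) { this.riskLevel = riskLevel; this.state = state; }
}

class RiskWeighting {
    // Weighted share (0.0 to 1.0) of one region of the pie chart.
    static double share(List<RiskItem> items, RiskState region) {
        double total = 0, regionWeight = 0;
        for (RiskItem item : items) {
            total += item.riskLevel;
            if (item.state == region) {
                regionWeight += item.riskLevel;
            }
        }
        return total == 0 ? 0 : regionWeight / total;
    }
}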
As noted in the caption for Figure 1, this figure shows the residual level of risk early in the test
execution. What happens when we monitor the progress of quality risk mitigation throughout test
execution?
Figure 2 shows what we would expect during a successful project's test execution, right around
the late-beginning or middle of that period. The red region—i.e., the risks with known problems
—is growing disproportionately fast as we discover the problem areas of the product. (An
objective of risk based testing is to discover these problem areas as early as possible during test
execution.) The green region is also growing, though, and it will soon start to out-pace the
growth of the red region. Both regions will grow rapidly in the middle of the test execution
period, pushing out the black region which represents unmitigated risks.
Figure 2: Quality Risk Status (Middle of Test Execution)
Towards the end of a successful project, we see a picture like that shown in Figure 3. During the
second half of the test execution period, the green region starts to grow very quickly. We have
run all the tests which are likely to find important bugs early in the test execution period (this is a
property of risk based testing), so that few bugs are found in the last half of test execution, and
those bugs which are found are not as important. Testers focus on running confidence-building
tests that are lower risk (turning black pixels into green ones), while developers fix the bugs that
were found (turning red pixels into green ones).
Figure 3: Quality Risk Status (Late Test Execution)
So, a natural question to ask, looking at Figure 3, is whether we are done with testing. Must the
pie chart be entirely green? What are realistic goals for risk testing that will ensure adequate
quality and test coverage?
When following a risk based test strategy, the question of adequate test coverage is decided by
the project management team. If the project management team believes that the quality risks
posed by known defects, known test failures, and yet-unrun tests are acceptable, at least as
compared to the schedule and budget risks associated with continuing to test, then we can say
that the risks are in balance and quality risk mitigation has been optimized.
Does this mean product quality is adequate? Maybe. If we have done a good job of identifying
quality risks and assessing their levels, then that reflects the impact (on the customer, user, and
other stakeholders) associated with failures related to each risk item. But also maybe not. Even if
our quality risk analysis is correct, what constitutes an acceptable level of quality risk is
subjective. There’s no guarantee that the project management team’s opinion matches the
opinions of the customers, users, and other stakeholders. However, it’s better to have an
informed opinion on, and to make an educated decision about, product quality before releasing
software. Product metrics allow us to do so.
Now let’s move into the detailed question of quality risk, category by category. This more
detailed view is what project participants typically need. If the higher-level information shown in
the figures above shows that project adjustments are needed, this detailed information will allow the
project management team to make decisions about test and project control activities needed to
optimize the level of quality and minimize the residual level of quality risk.
Table 2 shows a tabular presentation of the defects and tests, broken down by risk category. Let's
assume again that this table reports status for a large and complex e-commerce application
(though not the same project as Table 1). Notice how Table 2 ties together important project metrics
for tests and defects with the quality risks. This bridge between the product metric of quality risk
and these two project metrics is what makes this table an effective tool for making test and
project control decisions. Note also that the quality risk categories are sorted by the number of
defects found.

                           Defects            Tests
Quality Risk Category      #       %          Planned    Executed    %
Performance                304     27         3,843      1,512       39
Security                   234     21         1,032      432         42
Functionality              224     20         4,744      2,043       43
Usability                  160     14         498        318         64
Interfaces                 93      8          193        153         79
Compatibility              71      6          1,787      939         53
Other                      21      2          0          0           0
Total                      1,107   100        12,857     5,703       44
Table 2: Risk Coverage Table

Let's analyze what Table 2 is telling us:
· Performance accounts for the largest percentage of defects reported. While we have run a
large number of tests for performance, over 60 percent of the tests remain to be executed.
Therefore, we can expect to discover more defects in this risk category. We need to make
sure that these defects are being fixed promptly, and that any obstacles to running the
remaining tests are removed quickly.

· Security is the second-leading risk category in terms of defects and also has a significant
number of tests remaining to run. As with performance, we need fast defect resolution
and a speedy conclusion to this set of tests.

· The status of Functionality is similar to that of performance and security, but the number
of tests remaining to run is much larger than for those two categories. Depending on
where we are in terms of test progress (i.e., on track or behind schedule), we might need
to consider ways to accelerate the execution of the remaining tests.

· Usability is in better shape than these first three categories. While a significant number of
defects have been found, the testing is mostly complete. Since we are following a risk
based testing strategy, the most important usability tests have already been run. This risk
category is probably proceeding acceptably, with no adjustments required.
· For the Interfaces category, relatively few defects have been found, and most of our
testing is complete. Assuming timely defect repair and no blocked tests for this category,
we are almost done with interface testing.

· For the Compatibility category, the defect count is low, which is reassuring. However, a
significant number of tests remain to be executed. Some investigation of what is needed
to get these tests completed soon is in order.

· Finally, the Other category is for those defects reported during testing (including
especially reactive forms of testing such as exploratory testing) that do not relate to any
specific quality risk item identified during the quality risk analysis. (Because there were
no specific risk items, there are no planned tests for this category.) If the risk analysis is
accurate, then the number of defects in the Other category will be relatively low, as it is
here. If more than 5% of defects are found by such tests, then the test manager should
determine which risk items may have been missed during the quality risk analysis, and
adjust testing accordingly.

I’ve talked about using this table for monitoring and control. What are some realistic goals?
· The distribution of defects should approximately match the expected distribution, as
predicted during the quality risk analysis. If a particular quality risk category contained
few high-likelihood risk items, then it should be at the bottom of the table, and vice versa.
Unexpectedly high or low numbers of defects indicate a problem with the quality risk
analysis.

· As mentioned above, the number of defects found which do not relate to an identified
quality risk item should be low.

· Most if not all of the planned tests should be executed by the end of the project. I say
“most if not all” rather than “all” because some adjustments to the quality risk analysis
might result in the deliberate skipping of some tests in order to add tests in other areas.
If problems are found with the quality risk analysis, then, as part of a test project retrospective,
an investigation of why the quality risk analysis was inaccurate is in order.
You probably noticed that this table looks very much like Table 1. However, the difference is
visible if you drill down into the detailed data underneath each row. In Table 1, the details are tests
and their results, and the relationship between these items and the individual requirements
elements. In Table 2, the details are tests, their results, and defect reports, and the relationship
between these items and the individual quality risk items identified and assessed during the risk
analysis.

Conclusions
Let me close this series of articles by summarizing the key ideas presented in the series.
We have looked at three types of testing metrics: process metrics, project metrics, and product
metrics. Some testing metrics fit cleanly into one type, while others can span types. The type
of metric you’re looking at can depend on your intended purpose for the metric. You can use
metrics to measure, understand, and manage process, project, and product attributes.
In each of the four articles, I outlined a process for metrics, and illustrated that process with
examples. The process, a variant on Victor Basili’s Goal-Question-Metric process, consists of:
defining the process, product, or project objectives; developing questions about how effectively,
efficiently, or elegantly we achieve those objectives; creating metrics that allow us to answer
those questions; and, setting goals (explicit target measures) for those metrics that indicate
success. When these goals do not indicate success, management awareness of the shortcomings
should trigger adjustments and improvements in the testing, the broader software process, the
project, or the product.

In some cases, the objectives you need to achieve are hard to measure directly. When you find
yourself confronted with such a situation, you can use surrogate metrics to provide indirect
measures, such as using coverage metrics like the ones shown in this article to measure product
quality.

A main theme in this series of articles is that these testing metrics are not only nice to have, they
are truly essential for a complete and accurate understanding of the facts and realities with which
you are dealing. Managing without metrics means managing without facts, managing only with
opinion. While certainly expert opinion is essential for management, opinion is not enough by
itself. Metrics help us recognize which of our opinions are accurate and which are misguided.
While metrics are essential, you should remember that the point is quality, not quantity. You
should use a small number of metrics to accomplish what you need. Too many test managers
make the mistake of subjecting their colleagues to a fire-hose of data, which tends to hide the
information within it. Just because something is easy to measure doesn’t mean it matters, and
just because something is hard to measure doesn’t mean it doesn’t matter.

Another essential element of successful metrics is consistent stakeholder understanding and
support of the objectives, the metrics, and the goals for testing. A successful metrics program is
built in collaboration with testing stakeholders, not as a defensive reaction against them. In
addition, all stakeholders, including the test manager, must commit to the proper use of metrics
information, avoiding misuse such as using process or project metrics to reward or punish
individual contributors who are generally not in control of the primary factors that influence the
results of those metrics.

Can you put a successful test metrics program in place? Yes, you can. As you have seen in this
series of articles, doing so is challenging, but also achievable. Manage with data, and you will be
a better manager. I wish you the best of success in instituting your own testing metrics program.

Automated Regression Testing Challenges in Agile Environment

Abstract

Recently, when I wanted to start my new automated testing project with four resources, I thought of applying one of the Agile methodologies. But I was not able to proceed, because a series of questions arose in my mind: "Is it possible to use Agile methodologies in automated testing?", "Can I use traditional tools?", "Should I go for open-source tools?", "What challenges will I face if I implement automation in an Agile environment?". In this article, let us analyze some of the challenges we face while implementing automation with Agile methodologies. Automated testing in an Agile environment runs the risk of becoming chaotic, unstructured, and uncontrolled.

Agile projects present their own challenges to the automation team: unclear project scope, multiple iterations, minimal documentation, early and frequent automation needs, and active stakeholder involvement all place heavy demands on the automation team.
Some of these challenges are:

Challenge 1: Requirement Phase
The test automation developer captures requirements in the form of "user stories", which are brief descriptions of customer-relevant functionality.
Each requirement has to be prioritized as follows:

High: These are mission-critical requirements that absolutely have to be done in the first release.
Medium: These are requirements that are important but can be worked around until implemented.
Low: These are requirements that are nice to have but not critical to the operation of the software.
Once priorities are established, the release "iterations" are planned. Normally, each Agile release iteration takes between one and three months to deliver. Customers and software folks take the liberty of making many changes to the requirements. Sometimes these changes are so volatile that iterations get bumped off. Such changes pose great challenges to implementing an Agile automation testing process.

Challenge 2: Selecting the Right Tools
Traditional, test-last tools with record-and-playback features force teams to wait until after the software is done. Moreover, traditional test automation tools don't work in an Agile context because they solve traditional problems, and those are different from the challenges facing Agile automation teams. Automation in the early stages of an Agile project is usually very tough, but as the system grows and evolves, some aspects settle and it becomes appropriate to deploy automation. So the choice of testing tools becomes critical for reaping the efficiency and quality benefits of Agile.

Challenge 3: Script Development Phase
The automation testers, developers, business analysts, and project stakeholders all contribute to kick-off meetings where "user stories" are selected for the next sprint. Once the user stories are selected for the sprint, they are used as the basis for a set of tests.
As functionality grows with each iteration, regression testing must be performed to ensure that existing functionality has not been impacted by the introduction of new functionality in each iteration cycle. The scale of the regression testing grows with each sprint; to ensure that it remains a manageable task, the test team uses test automation for the regression suite, as sketched below.
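As a hedged illustration of what one automated regression check might look like, here is a minimal JUnit 4 sketch; the cart example and its names are hypothetical, not from the article.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CartRegressionTest {
    // Trivial stand-in for production code delivered in an earlier sprint.
    static class Cart {
        private int total;
        void add(int unitPrice, int quantity) { total += unitPrice * quantity; }
        int total() { return total; }
    }

    @Test
    public void cartTotalStillCorrectAfterNewSprint() {
        // Re-run this check every sprint so that new functionality
        // cannot silently break what an earlier user story delivered.
        Cart cart = new Cart();
        cart.add(100, 2);
        assertEquals(200, cart.total());
    }
}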

Challenge 4: Resource Management
The Agile approach requires a mixture of testing skills; that is, test resources will be required to define unclear scenarios and test cases, conduct manual testing alongside developers, write automated regression tests, and execute the automated regression packages. As the project progresses, specialist skills will also be required to cover further test areas that might include integration and performance testing. There should be an appropriate mix of domain specialists who plan and gather requirements. The challenging part of resource management is finding test resources with multiple skills and allocating them.

Challenge 5: Communication
Good communication must exist among the automation testing team, developers, business analysts, and stakeholders. There must be highly collaborative interaction between the client and the delivery teams. More client involvement implies more suggestions or changes from the client, and it implies more bandwidth for communication. The key challenge is that the process should be able to capture and effectively implement all the changes while retaining data integrity. In traditional testing, developers and testers are like oil and water; in an Agile environment, the challenging task is that they must work together to achieve the target.

Challenge 6: Daily Scrum Meeting
The daily Scrum meeting is one of the key activities in the Agile process. Teams meet for 15-minute stand-up sessions. What is the effectiveness of these meetings? How far do these meetings help the automation developers?

Challenge 7: Release Phase
The aim of an Agile project is to deliver a basic working product as quickly as possible and then to go through a process of continual improvement. This means that there is no single release phase for a product. The challenging part lies in the integration testing and acceptance testing of the product.
If we can meet these challenges in a well-optimized manner, then automated regression testing in an Agile environment is an excellent opportunity for QA to take leadership of the Agile processes. QA is better placed to bridge the gap between users and developers, to understand both what is required and how it can be achieved, and to assure it prior to deployment. The automation practice should have a vested interest in both the approach and the result, as well as continuing to assure that the whole evolving system meets business objectives and is fit for purpose.

Resource: STH

Tips to be More Innovative in the Age of Agile Testing to Survive an Economic Crisis

What is Agile Testing?
"Agile testing involves testing from the customer perspective as early as possible, testing early and often as code becomes available and stable enough from module/unit level testing." – a Wikipedia definition.

Why Need of Innovations in the Age of Agile Testing?
Global recession / economic downturn effect
Current events are not current trends –
When global downturns hit, there is a certain inevitability to their impact on the information technology and finance sectors. Customers become more reluctant to give out software business. Some customers withdraw their long-term projects, and some use the opportunity to quote low prices. Many projects have dragged on much longer than expected and cost more than planned. So companies have started to explore how "Agile with different flavors" can help their enterprises deliver software more reliably, quickly, and iteratively. The roles and responsibilities of test managers/test architects become more important in implementing Agile projects. Innovations are increasingly being fueled by the needs of the testing society at large.

The Challenges in Agile Testing
Agile testers face a lot of challenges when working with an Agile development team. A tester should be able to apply root-cause analysis when finding severe bugs, so that they are unlikely to recur. While Agile has different flavors, Scrum is one process for implementing Agile. Some of the challenging Scrum rules to be followed by every individual are:
  •  Obtain Number of Hours Commitment Up Front
  •  Gather Requirements / Estimates Up Front
  •  Enter the Actual Hours and Estimated Hours Daily
  •  Daily Builds
  •  Keep the Daily Scrum meetings short
  •  Code Inspections are Paramount
So, in order to meet the above challenges, an Agile tester needs to be innovative with the tools they have. A great idea happens when what you have (tangible and intangible) meets the world's deepest hunger.

How Testers Can be More Innovative in the Age of Agile Testing?
Here are Important Keys to Innovation:

1. Creative
A good Agile tester needs to be extremely creative when trying to cope with the speed of development and releases. For a tester, being creative is more important than being critical.

2. Talented
He must be highly talented and strive to keep learning and innovating new ideas. Talented testers are never satisfied with what they have achieved and always strive to find unimaginable bugs of high value and priority.

3. Fearless
An Agile tester should not be afraid to look at a developer's code and, if need be (hopefully only in extreme cases), go in and correct it.

4. Visionary
He must have a comprehensive vision, which includes the client's expectations and delivery of a good product.

5. Empowered
He must be empowered to work in pairs. He will be involved in pair programming, which brings shorter scripts, better designs, and more bugs found.

6. Passionate
Passionate testers always have something unique to contribute, whether in their innovative ideas, the way they carry out day-to-day work, or their outputs, and they improve things around them tirelessly.

7. Multiple Disciplines
An Agile tester must have multiple skills: manual, functional, and performance testing skills, plus soft skills like leadership, communication, EI, etc., so that Agile testing becomes a cakewalk.

"Innovation is the process of turning ideas into manufacturable and marketable form." – Watts Humphrey

Resource: STH

Tips to Survive and Progress in the Field of Software Testing

These tips will not only help you survive but also advance in your software testing career. Make sure you follow them:

Tip #1) Written communication – I keep saying this on many occasions: keep all things in written communication. No verbal communication, please. This applies to all instructions or tasks given to you by your superiors. No matter how friendly your lead or manager is, keep things in emails or documents.

Tip #2) Try to automate daily routine tasks – Save time and energy by automating daily routine tasks, no matter how small those tasks are.
E.g. If you deploy daily project builds manually, write a batch script to perform the task in one click.
Tip #3) 360 degree testing approach – To hunt down software defects, think from all perspectives. Find all possible information related to the application under test, apart from your SRS documents. Use this information to understand the project completely and apply this knowledge while testing.
E.g. If you are testing a partner website's integration with your application, make sure to understand the partner's business fully before starting to test.

Tip #4) Continuous learning – Never stop learning. Explore better ways to test the application. Learn new automation tools like Selenium, QTP, or any performance testing tool. Nowadays performance testing is a hot career destination for software testers, so have this skill under your belt.

Tip #5) Admit mistakes, but be confident about whatever tasks you did – Avoid making the same mistake again. This is the best way to learn and adapt to new things.

Tip #6) Get involved from the beginning – Ask your lead or manager to get you (the QA team) involved in design discussions/meetings from the beginning. This is especially applicable for small teams without a QA lead or manager.

Tip #7) Keep notes on everything – Keep notes of the new things you learn on the project each day. These could be simple commands to be executed to complete a certain task, or complex testing steps, so that you don't need to ask the same things again and again of fellow testers or developers.

Tip #8) Improve your communication and interpersonal skills – Very important for periodic career growth at all stages.

Tip #9) Make sure you get noticed at work – Sometimes your lead may not present a true picture of you to your manager or company management. In such cases you should continuously watch for moments where you can show your performance to top management.
Warning – Don't play politics at work. If your lead or manager is kind enough to communicate your skills and progress to your manager or top management, then there is no need to follow this tip.

Tip #10) Software testing is fun, enjoy it – Stay calm, be focused, follow all processes and enjoy testing. See how interesting software testing is. I must say it’s addictive for some people.

Bonus tip
Read, read, and read – Keep reading books, white papers, and case studies related to software testing and quality assurance. Always stay on top of the news in the software testing and QA industry. Or keep reading this blog to keep yourself updated ;)

Have fun testing! Explore our other interesting posts on software testing tips and tricks. If you like these tips, kindly take a moment to share them with your friends. And don't forget to join the conversation below to share your best testing tips!

Resource: STH

Dec 5, 2012

Understanding Quality

Quality can be a tough concept to get a handle on, especially when you are dealing with a web site.

Many different roles and positions take part in running a typical large commerce web site, and they all have different agendas, so what they mean by "quality" is likely to differ.
Working from a quality assurance outlook, you must be clear on what you understand quality to mean. You may need to declare or defend your views on the quality of your site to people who see quality differently.

The following definitions reflect different ways of looking at what quality might mean. I find the last definition – that quality is meeting requirements – the most useful approach here.
You Know Quality When You See It

You probably know of some things – products, services, etc. – that you feel are excellent, because they meet your needs, do what they are supposed to do, make your life easier, taste great, whatever. These things are identified, by you, as being very positive, and if you stop to think about why they are positive, the word "quality" probably comes to mind. If you fire up your brand new computer and it won't boot, you're probably not thinking to yourself, "hmmm, that's a damned fine piece of machinery I have here." If you inherit your grandfather's pocketknife that's fifty years old and still holds a fine edge, you're probably thinking "that's quality".

This view of quality lacks any concrete way of measuring the quality of something, because it is based on a specific person's judgment. If you build a web site, and you think it is a quality site because you know quality when you see it, you have no assurance that anybody else will see the site the same way. When you build a site for an audience consisting of more than just you, you cannot rely on this definition of quality. People judge quality on many factors; most of these factors are idiosyncratic, and many of those are probably being used without the person ever being aware of it. Furthermore, advertising shapes perceptions and expectations to the extent that people may be taught the wrong signals of quality – these paper towels are decorated with detailed drawings of native wildflowers.

This definition doesn’t address issues like the suitability of the site for its purpose; for example, if your site provides information on a topic, doesn’t its ability to perform this duty count for the quality of the site? It also doesn’t address the service to the audience: if you create the site to meet the needs of a particular audience, doesn’t success in meeting these needs affect quality?
Quality is a Function of Brand

Brand names can be a sign of quality, either as a result of personal experience with the brand or from judgments driven by advertising. Most people have faith in some brands or companies: "The best truck is a Ford truck, and the best cookie is an Oreo cookie." A brand is a great way to set expectations; for example, you can go to any McDonald's restaurant in the United States and get a Big Mac that's going to taste just about exactly the same. Look for those Golden Arches, and you know with certainty the kind of experience you will receive.

The problem with this definition of quality is that you may have a hard time understanding just what it is about the brand that marks quality. If you don't love Oreo cookies, you will be hard-pressed to explain just what it is about them that makes Oreos such a good cookie… if it is a good cookie, in your view. And if you disagree with the quality signaled by a specific brand, perhaps because of bad experiences with the product or brand, then that brand sure won't signal quality to you.
If your web site acquires branded data from a data vendor, you cannot simply rely on the brand's quality and skip testing the data. Brands may be very important to typical consumers – and so are probably important to your site's customers – but you can never rely on a brand to establish the quality of what you serve to your customers.

Quality is a Passing Grade
Perhaps the most common understanding of quality is that quality corresponds to a passing grade. If you've ever bought a piece of clothing that had one of those little paper tags saying that this piece had passed inspector something-or-other, you know that your clothes were tested and received a passing grade. If you've ever taken home a school report that had a smiley-face sticker or maybe some gold stars, you know the utter pride of showing that quality report to your parents.
Your experience with class grades is similar. You spend a semester studying a topic, attending lectures, doing your reading, turning in papers, and taking tests, and your understanding of the topic is evaluated by the teacher. Your teacher assigns you a grade, and this grade is used to help determine whether you pass the class and move ahead.

This definition has several problems, though, when it comes to understanding the meaning of quality. First, you can't always be certain about what exactly the test – and the passing grade – actually measures. With the class example, the teacher could be measuring your comprehension of the class's topic OR your ability to master the class itself; haven't you ever taken a class where somebody who doesn't understand the material passes because they meet the minimum work requirement?
Second, the teacher is often using his/her subjective views to set the milestones measured by the tests, so the tests are not necessarily objective evaluations.

Third, this view doesn't help you understand the differences among grades that pass. Say you have two students who pass a class, one of whom gets a grade of "A" while the other gets a "C". Both pass the class, but one apparently did better than the other. If quality is passing this class, is there any difference between the performance and understanding of the C student and that of the A student?
From a testing point of view, passing a test would seem to imply quality, but you need to know a lot more about the big picture to get any value from a passing grade.

Quality is Perfection
Perfection is good; actually, perfection is better than good, it’s the best. If something is the best, then it must be overflowing with quality, right?
But who decides that something is perfect, and who decides what perfection means? Who or what is it perfect for? The problem with this definition is that it tells you where you want to go, but not how to get there.

Quality is the Absence of Problems
If something doesn't cause me any problems, and I have no complaints about it, it doesn't get in the way of work or play, it's always held up, never breaks, never dies, it's "Old Reliable" – that's a sure sign of quality, right?
Saying that quality is the absence of problems doesn’t go far enough, because it doesn’t address the big picture around problems: problems for whom? Take something like a software program – you can expect different people to be different kinds of users, with different needs that involve using the software with different goals in mind, and probably using different tools within the software. Say the software has a function that most people will never need to use, like the old “convert the file to Sanskrit” command; if that function does not work, but most users won’t come across that failure, does the problem nonetheless exist?

Quality is Zero Defect Code
Quality is code that has no bugs – that's a great goal to have, but it's usually impractical. Most software – and your web site is a kind of software product – must be developed and released quickly to meet marketplace demands. Covering every line of code is time consuming, and at some point the development team will have to weigh the benefits of releasing against the benefits of continuing to tweak the code.

Quality is Acceptable Performance
You have something that, overall, does what it's supposed to do, and failures are within acceptable limits. The majority of students pass the class, and the spread of grades looks like it's supposed to. It may not be perfect, but it's certainly good enough. Thinking of quality as acceptable performance is a qualitatively different definition from the previous ones, because you are looking at what something does rather than what it is. You may know quality when you see it, but that's a hard thing to share among multiple points of view short of persuading other people to agree to your understanding of quality, and using persuasion to shape others' perception of quality seems beside the point.
When you look at what something does, you have performance, which is measurable and trackable. Instead of saying that something is good because you like it, or because it’s perfect, or because it doesn’t have any problems, you can say something is good because it does X, Y and Z, and it does them like this.

Quality is Meeting Goals
This definition also describes quality as reflecting what something does. You set goals for something, say a web site, and if the web site meets these goals then it has quality. But is there a scale, so that the amount or level of quality is a function of how much or how well the goals are met? If the goals are simply not meetable – every person who sees our ad will buy the product – then does the site lack quality? And what if the site performs superbly except when trying to meet one goal? Say your new commerce site is immensely popular, garnering millions of new users each day, racking up the "cool site" awards, even growing a large and active community of users, but people just are not buying your products. A big goal, if not the biggest goal, for your site is that it sells product, yet the site fails to meet this goal; is this a quality site?

Quality Is Meeting Requirements
If you define quality as meeting requirements, then you have specific indicators of quality. If the requirements are testable, then you can test the success of meeting the requirements. And you can verify the quality repeatedly, by testing at intervals. You can say that whatever you’re testing is good, and that it was good the last time you ran the tests, and the time before that, etc. You can measure quality, and measure it over time.

Requirements can describe the attributes of something, such as its dimensions and other simple characteristics, as well as how it should function, behave, perform, respond, and so on. These requirements can be determined in many different ways, but usually they are logically derived from the purpose or function of the subject and from the design intentions: if a television set is designed to receive certain frequency ranges, then clearly part of the television's measure of quality is its ability to correctly fulfill this requirement.

Individual quality requirements can take different forms (a small sketch of how such requirements can be checked automatically follows this list):
attributes – the subject is supposed to have measurements x, y and z: does the subject meet the specified measurements? Does it have all of its parts? Is it the correct color?
functionality – does the subject do what it's supposed to do?
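To make "testable requirements" concrete, here is a minimal sketch (my illustration, not from the original article, and assuming the requests library is installed) of how an attribute requirement and a functionality requirement for a hypothetical web site could be expressed as automated checks. The URL, expected text, and response-time budget are all assumptions:

import requests

URL = "https://example.com/"  # hypothetical site under test

def test_attribute_requirement():
    # Attribute requirement: the home page identifies the product by name.
    # The expected text is an assumed value for illustration.
    resp = requests.get(URL, timeout=10)
    assert "Example Domain" in resp.text

def test_functionality_requirement():
    # Functionality requirement: the page loads successfully and within
    # an agreed response-time budget (3 seconds is an assumed figure).
    resp = requests.get(URL, timeout=10)
    assert resp.status_code == 200
    assert resp.elapsed.total_seconds() < 3.0

Run under a test runner such as pytest, checks like these can be executed on a schedule, which is what lets you say the site was good the last time you ran the tests, and the time before that.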
Resource: Startyourtesting.wordpress.in

Dec 4, 2012

Tips to Write Effective Bug Report

Programmers: It is obvious to most testers that their target audience is developers, and yet they appear to pay little attention to understanding what developers' needs are. Those who worked as programmers and then turned to testing might understand what a developer looks for when reading a bug report, but that doesn't necessarily mean developers-turned-testers report bugs in a way that caters to programmers. It's a question of who has acquired and practices the skill.

A programmer might look for information:
- To reproduce the bug.
- To understand the bug in a different view than reported.
- To investigate what is causing it.
- To look for information about the bug and its correlation to other reported bugs.
- To understand which part of the code to touch to fix the bug.

Co-testers: This isn't obvious to some testers. Testers can be split across geographies or time zones. I often come across contexts where a tester other than the one who reported the bug has to investigate it or test the fix. A poorly written bug report could misguide a co-tester.
A co-tester might look for information:
- To reproduce the bug in the same or a different environment.
- To add more investigation notes to the bug.
- To provide additional information when the bug is deferred or rejected, or even otherwise.
- To respond to a developer's comment on the bug.
Test/Dev/Product Manager/Customers: This segment of the audience consists mostly of decision makers or those who influence the decision makers. These people usually don't have the luxury of reading an entire bug report before taking a decision. Finding and reporting a bug is one part; making it useful to this audience is another.
A test/dev/product manager might look for information:
- To take a decision on adding or not adding a specific module to the upcoming release.
- To take a ship/no-ship decision for the entire product.
- To understand how much more development or bug-fixing work is needed.
- To plan a future release.
- To help the customer plan releases to their customers.

Bug report elements & their significance
There are a lot of elements that constitute the completeness of a bug report; here we discuss the most important ones, and the ones that many testers get wrong quite frequently.
Summary: The significance of the summary is to give an idea of the bug as quickly as possible. Those who conduct bug triage might want to learn quickly whether they should pick up a bug for a specific meeting. A developer might want to learn whether it pertains to the code he has written. A manager might want to know whether the bug is in a specific module for which a release is being planned. Writing a concise yet meaningful summary is an important skill. How do you improve your summary-writing skill? nt lyk dis! It requires work on your English and an understanding of your audience, for a start.

Description: This shouldn't be a copy-paste of the test case (in case you have one). It could contain information on what the tester consciously did and what was observed. To observe more, a tester might want to use focusing and defocusing heuristics while trying to repeat the actions that matter to a specific test. Steps to reproduce aren't mandatory. I suggest that testers be context-driven, at least in this context.
For a bug that is likely to be obvious to your audience, you might not want to say, “Step 1: Open the application. Step 2: Click on the menu and then see a Boom!”
Some organizations use a bug reporting template that has limited flexibility in terms of the options and fields in the bug tracking system. However, a tester can report things relevant and important to a bug in the Description section. For instance, risk to the user, the cost of not fixing the bug, how the bug could impact the business, and other items related to bug advocacy can be provided within this section.

Test Setup & Test Environment: Most bug reporting templates and tracking systems do not have a section called "Test Setup" or "Test Environment", but this is a significantly important one, not just for you but for other audiences. Setup information can be provided as an additional note in the Description section. Information on setup and environment could involve configuring the system, making hardware-level changes, or disabling or enabling something relevant to the test or bug being reported. If a tester dumps a lot of unnecessary information in this section, it distracts the audience and makes the report less useful.
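As a hedged sketch (my illustration, not part of the original post), a tester could gather basic environment details with Python's standard library and paste the output as a setup note in the Description section:

import platform
import sys

# Collect a short environment note from the machine the test ran on.
env_note = (
    f"OS: {platform.system()} {platform.release()}\n"
    f"Machine: {platform.machine()}\n"
    f"Python: {sys.version.split()[0]}"
)
print(env_note)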


Severity: There must be at least half a million pages on the internet discussing Severity and Priority. We don't wish to add to them, but would just like to say: "Be reasonable". Leave priority to the business people and decision makers. Explain why something is of high or medium severity when it appears "not so obvious" to your audience. Be open to learning what others think about it.

Drafting and Publishing Bug Reports
There are a couple of things a tester might want to consider doing once a bug is found and investigated and a report is ready to be drafted.

Looking for Duplicates
Why spend time writing a bug report that is already written? If a tester finds that a colleague or a fellow tester on the project has already reported the bug, then the strategy changes from reporting to appending information to the already reported bug. This enhances the importance of the reported bug and ensures that management spends less time dealing with duplicates during triage. This is a way of respecting the time of your audience.

Use of auto spell and grammar check
Spell and grammar checks are extremely important to those testers for whom English is a second language. If you speak, write, and read good English, then no matter where you were born, English is effectively a first language for you.
Spell and grammar checks are not foolproof. One example I encountered on a real project was a report from a tester which read, "X module feature is miss spelled". Notice that misspelled and miss spelled are both meaningful words to the software, although miss spelled means something different to us than misspelled. F u use sms lngage n rportin bgs ur hyks wil b shrnkd 2.

Use of screenshots
Screenshots help your audience understand a bug faster. To help the audiences of a bug report, it is a good idea, and already widely practiced, to add screenshots for bugs that require them. Using the JPG format is wise. Some testers also point out what to look for and write short notes in the screenshot itself. When a tester needs a picture of a pencil, it is not wise to go to the moon and capture the whole earth to say there is a pencil in this place.
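As one illustration of that pencil-not-earth advice (mine, not from the original post, and assuming the Pillow library is installed), a full-screen capture can be cropped down to just the region that shows the defect; the file names and coordinates below are hypothetical:

from PIL import Image

# Hypothetical full-screen capture taken while reproducing the bug.
screenshot = Image.open("full_screen_capture.png")

# Keep only the dialog that shows the defect; the box is
# (left, upper, right, lower) in pixels and is an assumed region.
bug_region = screenshot.crop((400, 250, 900, 550))

# Convert to RGB (JPG has no alpha channel) and save a small attachment.
bug_region.convert("RGB").save("Bug_1234_checkout_price_rounding.jpg", quality=85)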

Use video for long steps
Some bugs are hard to describe, or there could be language barriers between the test and dev teams. Video recordings of bugs are getting increasingly popular. The only danger is that few testers know for which bugs a video will actually be helpful. It looks obvious, but it is not. When the steps to reproduce are lengthy and hard to explain, using a video is wise. If the dev team is from a different country and their English is as limited as that of the test team, then it is a wise idea to have a video for a bug that needs a lot of explanation. While videos can help the developer audience, they can be a pain for others. So a short description of the problem coupled with a video is the wisest choice we have found so far.

Use of log files
If the program logs the actions and interactions of the system, those logs are likely to be of help at bug-fixing time. The log file could be huge, and a tester who is context-driven would copy-paste only the relevant information into a text file and attach that to the bug report.
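A minimal sketch of that extraction step (my illustration, assuming a plain-text application log; the file names and keywords are hypothetical):

# Keywords that mark lines relevant to the bug; these are assumptions.
KEYWORDS = ("ERROR", "Traceback", "checkout")

with open("application.log", encoding="utf-8", errors="replace") as src, \
     open("Bug_1234_relevant_log_lines.txt", "w", encoding="utf-8") as dst:
    for line in src:
        # Copy only the lines that mention one of the keywords.
        if any(keyword in line for keyword in KEYWORDS):
            dst.write(line)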

Usage of Compression Tools
Screenshots, videos, log files, and other files relevant to a bug can eat up a lot of space, and it is a trouble for the audience to download each and every file attached to a specific bug. Compression tools such as WinZip or 7-Zip can help in such situations. The worst tester would copy a single line of log, make a txt file, and then zip that file. Well, it has happened.

Naming convention for screenshot or video or documents
While naming an image, video, or document file, it is wise to follow a naming convention that goes well with the audience. This helps in organizing and tracking these files easily. The worst tester would attach a file called crash.jpg; a good one would be likely to name it something like Bug_id_Product_module_feature_typeofproblem.jpg, as in the sketch below.
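A tiny sketch (my illustration; every field value here is hypothetical) of building an attachment name from the pattern above:

def attachment_name(bug_id, product, module, feature, problem, ext="jpg"):
    # Compose the name from Bug_id_Product_module_feature_typeofproblem.
    return f"Bug_{bug_id}_{product}_{module}_{feature}_{problem}.{ext}"

print(attachment_name(1234, "Shop", "Checkout", "Coupon", "crash"))
# -> Bug_1234_Shop_Checkout_Coupon_crash.jpg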

Checklist for publishing a bug report
- Look for duplicates.
- Check the meaningfulness of the entries in all elements of the bug report.
- Try modeling yourself as each different audience and ask whether the bug report is really useful to them.
- Make a list of your past bug-reporting mistakes and run the report against it.
- Check the attachments, their sizes, and their relevance to the context.
- Save a copy of your bug report in MS Word or another editor; as everyone knows, bug tracking systems crash too.
- Find bugs in the bug report and fix them.
“Those testers who write bad bug reports don't become successful testers. It wouldn't be completely wrong to say that when you write a bug report, you could be writing your own fate. That is why we hope you write it well.”

Resource: Tester Tested

Why good software testers should come out of the well?

One problem I constantly spot whenever I meet a good tester in India is that they don't blog or write and speak publicly. That's why some people continue to think of me as a good tester.

The recognition an organization gives may be fine as long as you stay in the organization, or as long as the organization is doing fine. That's why many people stick with an organization for quite some time: they start hating fresh water. They accept being in a well. With growing infrastructure needs and a changing economy, some wells have to be wiped out, and all the frogs in them have to hop to a different destination.

I was once surrounded by bad frogs who made me feel that I was a great tester. I would never want to be there again, because that hampers my learning, even though it pleases my ego as long as I am with them. A couple of months back I interviewed a tester who had been awarded as the best among the 1100 testers of his organization. He was pathetic, and I think the right person in the organization didn't get the award.

A tester from Mumbai who claimed to be superior to me in knowledge and skills wrote to me and said, "You are misguiding the community by giving wrong ideas". My reply to him was, "Well, if you are so concerned about the community, then you should write a blog and say to the world that Pradeep Soundararajan is giving wrong ideas, and the reasons why his ideas are wrong". He never got back to me with his blog link.

There are several testers whom I helped to start a blog, and only some of them are doing fine with it. Some people started a blog and sent me a link with a note saying "you inspired me". When I revisit their blogs, most of them ended up not continuing, because they realized it's hard to keep blogging. The other dimension is that it is easy to blog if you just cut, copy, paste, and plagiarize, without crediting the original authors, and expose yourself as a fool.

Two months back, I interviewed Harish, who had lost his job at a reputed organization that decided to shut down its operations in Bangalore as it faced the worst part of the recession at its US office.

Harish is the kind of tester I'd want to work with, for the way he challenged my arguments, his sharp eyes that observe the little things going on around the screen, and his good reporting and communication skills. But then my client had to postpone their recruitment plans, and I couldn't get an opportunity to work with him. If you are looking for one, I'd suggest you talk to Harish.

I wish I could have linked to his blog to get you curious about him and that's what is missing. I asked him:

AK: You don't blog?
Harish: Why should I?
AK: For the world to know about a good tester.
Harish: Why should the world know about me?
AK: Consider asking yourself why the world shouldn't know about you.
Harish: Let me think about it.

After a month, Harish called up: "Hey Anil, I haven't found a job yet. I realize this wouldn't have happened if the world knew about my testing skills. The interviews test something different than my skills."

So here is a post on my blog for all the Harishes of the world, to wake up and start blogging. A blog of your own serves a core purpose that surrounds us all - to learn - things, ways, people, testing, ideas, challenges, and more...
Myths that surround wannabe-tester-bloggers
Resource: TesterTested