Oct 30, 2012

Tips for Writing Test Cases

One of the most frequent and major activities of a Software Tester (SQA/SQC person) is writing Test Cases. First of all, kindly keep in mind that this discussion is about ‘writing’ Test Cases, not about designing/defining/identifying TCs.
There are some important and critical factors related to this major activity. Let us have a bird’s eye view of those factors first.
a. Test Cases are prone to regular revision and update:
We live in a continuously changing world, and software is not immune to change either. The same holds good for requirements, and this directly impacts the test cases. Whenever requirements are altered, TCs need to be updated. Yet it is not only a change in requirements that may cause revision and update of TCs.
During the execution of TCs, many ideas arise, and many sub-conditions of a single TC lead to updates and even to additional TCs. Moreover, during regression testing, several fixes and/or their ripple effects demand revised or new TCs.
b. Test Cases are prone to distribution among the testers who will execute these:
Of course, it is rarely the case that a single tester executes all the TCs. Normally several testers test different modules of a single application, so the TCs are divided among them according to the areas of the application under test that they own. Some TCs related to the integration of the application may be executed by multiple testers, while some may be executed by only a single tester.
c. Test Cases are prone to clustering and batching:
It is normal and common that TCs belonging to a single test scenario demand their execution in a specific sequence or as a group. Some TCs may be prerequisites of other TCs. Similarly, according to the business logic of the AUT, a single TC may contribute to several test conditions, and a single test condition may consist of multiple TCs.
d. Test Cases have tendency of inter-dependence:
This is another interesting and important behavior of TCs: they may be interdependent on each other. In medium to large applications with complex business logic, this tendency is more visible.
The clearest area of any application where this behavior can be observed is the interoperability between different modules of the same or even different applications. Simply speaking, wherever different modules or applications are interdependent, the same behavior is reflected in the TCs.
e. Test Cases are prone to distribution among developers (especially in TC driven development environment):
An important fact about TCs is that they are not utilized only by the testers. In the normal case, when a bug is being fixed by the developers, they are indirectly using the TCs to fix the issue. Similarly, where test-case-driven development is followed, TCs are directly used by the developers to build their logic and to cover, in their code, all the scenarios addressed by the TCs.


So, keeping the above 5 factors in mind, here are some tips to write test cases:
1. Keep it simple but not too simple; make it complex but not too complex:
This statement sounds like a paradox, but I promise it is not. Keep all the steps of the TC atomic and precise, with the correct sequence and a correct mapping to expected results; this is what I mean by making it simple.
Making it complex, in fact, means integrating it with the Test Plan and other TCs. Refer to other TCs, relevant artifacts, GUIs, etc. where and when required, but do this in a balanced way: do not make the tester move to and fro through a pile of documents to complete a single test scenario, but also do not be so verbose that the tester wishes you had documented these TCs in a more compact manner. While writing TCs, always remember that you or someone else will have to revise and update them.
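
For illustration, here is a minimal JUnit 4 sketch, with a hypothetical ShoppingCart class standing in for the AUT, of steps that are atomic, in the correct sequence and mapped one-to-one to expected results; the same idea applies whether the TC is documented for manual execution or automated:

import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

// Illustrative only: each step is atomic and each expected result is checked explicitly.
public class ShoppingCartTest {

    // Hypothetical stand-in for the application under test.
    static class ShoppingCart {
        private final List<Double> prices = new ArrayList<Double>();
        void add(double price) { prices.add(price); }
        int itemCount() { return prices.size(); }
        double total() {
            double t = 0;
            for (double p : prices) t += p;
            return t;
        }
    }

    @Test
    public void addingTwoItemsYieldsCorrectCountAndTotal() {
        ShoppingCart cart = new ShoppingCart();    // Step 1: start with an empty cart
        assertEquals(0, cart.itemCount());         // Expected: the cart is empty

        cart.add(10.00);                           // Step 2: add the first item
        cart.add(5.50);                            // Step 3: add the second item
        assertEquals(2, cart.itemCount());         // Expected: two items in the cart
        assertEquals(15.50, cart.total(), 0.001);  // Expected: total equals the sum of the prices
    }
}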

2. After documenting Test Cases, review them once as a Tester:
Never think that the job is done once you have written the last TC of the test scenario. Go back to the start and review all the TCs once, but not with the mindset of the TC writer or test planner. Review all the TCs with the mindset of a tester. Think rationally and try to dry-run your TCs. Evaluate whether all the steps you have mentioned are clearly understandable and whether the expected results are in harmony with those steps.
Verify that the test data specified in the TCs is feasible for the actual testers and realistic for the target environment. Ensure that there is no dependency conflict among TCs, and also verify that all references to other TCs/artifacts/GUIs are accurate, because testers may otherwise be in great trouble.

3. Bound as well as ease the testers:
Do not leave the test data up to the testers; give them ranges of inputs, especially where calculations are to be performed or the application’s behavior depends on the inputs. You may divide the test item values among them, but never give them the liberty to choose the test data items themselves, because, intentionally or unintentionally, they may use the same test data, and some important test data may be ignored during the execution of the TCs.
Ease the testers’ work by organizing TCs according to testing categories and related areas of the application. Clearly instruct and mention which TCs are inter-dependent and/or batched. Similarly, explicitly indicate which TCs are independent and isolated, so that the tester may manage the overall activity at his or her own will.

4. Be a Contributor:
Never accept the FS or design document as it is. Your job is not just to go through the FS and identify the test scenarios. Being a quality-related resource, never hesitate to contribute. Make suggestions to the developers too, especially in a TC-driven development environment. Suggest drop-down lists, calendar controls, selection lists, grouped radio buttons, more meaningful messages, cautions, prompts, usability improvements, etc.

5. Never Forget the End User
The most important stakeholder is the ‘End User’ who will actually use the AUT. So never forget the end user at any stage of TC writing. In fact, the end user should not be ignored at any stage throughout the SDLC, but my emphasis here is limited to my topic. During the identification of test scenarios, never overlook the cases that will be used most by the user, or that are business critical even if less frequently used. Put yourself in the end user’s place, go through all the TCs once, and judge the practical value of executing each of your documented TCs.

Conclusion:
Test case writing is an activity that has a solid impact on the whole testing phase. This fact makes the task of documenting TCs critical and delicate. So it should be properly planned first and carried out in a well-organized manner. The person documenting the TCs must keep in mind that this activity is not for him or her only; the whole team, including the other testers and developers, as well as the customer, will be directly and indirectly affected by this work.
So due attention must be paid during this activity. The “Test Case Document” must be unambiguous and understandable for all of its users, and should be easily maintainable. Moreover, the TC document must address all important features and should cover all the important logical flows of the AUT, with realistic and practically acceptable inputs.

What’s your test-case writing strategy? Share your tips with our readers and also post your queries in the comments below.
Resource: STH

7 Deadly Sins of Automated Software Testing

Senior management often views automated testing as a silver bullet for reducing testing effort/costs and increasing delivery speed. While it is true that automated tests can provide rapid feedback on the health of the system, not all approaches to automated testing are created equal, and there are some gotchas that should be avoided.

1. Envy

Flawed comparison between manual testing and automation

Automated tests are not a replacement for manual exploratory testing. A mixture of testing types and levels is needed to achieve the desired quality and mitigate the risk associated with defects. This is because testing is not merely a sequence of repeatable actions. The automated testing triangle originally described by Mike Cohn explains that the investment in tests should be concentrated at the unit level and then reduce up through the application layers.

2. Gluttony

Over-indulging in commercial testing tools

Many commercial testing tools provide simple features for automating the capture and replay of manual test cases. While this approach seems sound, it encourages testing through the user interface and results in inherently brittle tests that are difficult to maintain. Additionally, the cost and restrictions that licensed tools place on who can access the test cases are an overhead that tends to prevent collaboration and teamwork. Furthermore, storing test cases outside the version control system creates unnecessary complexity. As an alternative, open source test tools can usually solve most automated testing problems, and the test cases can easily be included in the version control system.

3. Lust

Loving the UI so much that all tests are executed through the UI

Although automated UI tests provide a high level of confidence, they are expensive to build, slow to execute and fragile to maintain. Testing at the lowest possible level is a practice that encourages collaboration between developers and testers, increases the execution speed of tests and reduces the test implementation costs. Automated unit tests should carry the majority of the test effort, followed by integration, functional, system and acceptance tests. UI-based tests should only be used when the UI is actually being tested or there is no practical alternative.
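
To make the cost difference concrete, here is a hedged sketch using JUnit 4 and the Selenium RC client; the discount rule, the page URL and the locators are illustrative assumptions. The first test exercises the rule at the unit level, the second exercises the same rule through the UI and needs a running Selenium server, a browser and a deployed site:

import static org.junit.Assert.assertEquals;
import org.junit.Test;
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class DiscountTests {

    // Hypothetical domain logic under test: 10% off for orders of 10 or more items.
    static double discountedPrice(double price, int quantity) {
        return quantity >= 10 ? price * quantity * 0.9 : price * quantity;
    }

    // Unit level: runs in milliseconds, no browser, no server, cheap to maintain.
    @Test
    public void bulkOrdersGetTenPercentOff() {
        assertEquals(90.0, discountedPrice(10.0, 10), 0.001);
    }

    // UI level: needs a Selenium server, a browser and a deployed site.
    // Reserve this kind of test for when the UI itself is what you want to verify.
    @Test
    public void bulkOrderDiscountShownInCart() {
        Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost:8080/");
        selenium.start();
        selenium.open("/cart?item=widget&qty=10");   // assumed URL
        selenium.waitForPageToLoad("30000");
        assertEquals("90.00", selenium.getText("id=cart-total")); // assumed locator
        selenium.stop();
    }
}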

4. Pride

Too proud to collaborate when creating tests

Test-driven development is an approach to development that is as much a design activity as it is a testing practice. The process of defining test cases (or executable specifications) is an excellent way of ensuring that there is a shared understanding among everyone involved of the actual requirement being developed and tested. The practice is often associated with unit testing but can be equally applied to other test types, including acceptance testing.
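
As a small illustration of the “executable specification” idea (JUnit 4; the shipping rule, names and numbers are invented for this sketch), the tests state the agreed behaviour and the production code is written to satisfy them:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Written as the shared specification: orders of 100.00 or more ship for free,
// anything below pays a flat 4.99 fee. (Illustrative rule only.)
public class ShippingFeeSpecification {

    // Minimal implementation written after the tests, just enough to satisfy them.
    static double shippingFee(double orderTotal) {
        return orderTotal >= 100.00 ? 0.0 : 4.99;
    }

    @Test
    public void ordersOfOneHundredOrMoreShipFree() {
        assertEquals(0.0, shippingFee(100.00), 0.001);
        assertEquals(0.0, shippingFee(250.00), 0.001);
    }

    @Test
    public void smallerOrdersPayFlatFee() {
        assertEquals(4.99, shippingFee(99.99), 0.001);
    }
}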

5. Sloth

Too lazy to maintain automated tests

The cost and rapid feedback benefits of automated tests are best realised when the tests are regularly executed. This has the effect of highlighting failures and providing continuous feedback about the health of the system. If your automated tests are initiated manually rather than through a continuous integration (CI) system, there is a significant risk that they are not being run regularly and therefore may in fact be failing. Make the effort to ensure automated tests are executed through the CI system.

6. Rage

Frustration with slow, brittle or unreliable tests

Unreliable tests are a major cause of teams ignoring or losing confidence in automated tests. Once confidence is lost, the value of the initial investment in automated tests is dramatically reduced. Fixing failing tests and resolving the issues associated with brittle tests should be a priority in order to eliminate false positives.

7. Avarice (Greed)

Trying to cut costs through automation

Testing tool vendors often try to calculate a return on investment based purely on labour savings. This analysis is unreliable and undervalues the importance of testing, the investment required to adopt automation practices and the ongoing maintenance costs.

Summary

Automated testing is not without pitfalls and some of these are identified here. There are of course many other anti-patterns for automated testing that justify further discussion.

Reference: Dr Adrian Smith, Agile Coach and Software Technologist

Manual and automated tests together: a challenge?

I have a simple question to ask those of you who have any kind of test automation in your teams:


“Do you have tests that you still run manually, even though
they are also running as part of your automated test sets?”

I am talking about the fact that even though you invested the time to automate something, you still choose to run it manually. Notice that I am asking about any tests, including the ones you run manually only once in a while, or those you give to your junior testers to be 100% sure all is OK; in fact, any tests you are still running both automatically and manually at the same time.

Surprisingly enough, even organizations with relatively mature automation processes still run a significant number of their automated scenarios as part of their manual tests on a regular basis, even when this doesn’t make any sense (at least at the theoretical level).
After realizing this was the case, I sat down with a number of QA Managers (many of them PractiTest users) and asked them about the reason for this seemingly illogical behavior.
They provided a number of interesting reasons, and I will go over some of them now:

We only run manually the tests that are really important

The answer that I got the most was that some teams choose to run tests both automatically and manually only when those tests are “really important or critical”.
This may sound logical at first, but on the other hand, when you ask what their criteria are for selecting the tests that should be automated, most companies say they select cases based on the number of times they will need to run them and on the criticality or importance of the business scenario. In plain English, they automated the important test cases.

So if you choose to automate the test cases that are important, then why do you still run them manually under the same excuse of them being really important…? Am I the only one confused here?

We don’t trust our automation 100%

The answer to the question I asked above, of why teams run the important tests even though they are already automated, comes in the form of an even more interesting (and simple) answer: “We don’t really trust our test automation.”
So this basically means they are investing 10 or even 50 man-months of work, and in most cases thousands of dollars on software and hardware, in order to automate something, and then they don’t really trust the results? Where is the logic in this?
OK, I’ve worked enough with tools such as QTP and Selenium to know that it is not trivial to write good and robust automation, but on the other hand, if you are going to invest in automation you might as well do it seriously and write scripts that you can trust. In the end it is a matter of deciding to invest in the platform and be serious about the work you are doing in order to get results you can trust (and I don’t mean buying expensive tools; Selenium will work fine if you have a good infrastructure and write your scripts professionally).
The alternative is really simple: if you have automated tests you can’t trust because they constantly give you wrong results (either false negatives or, even worse, false positives!), you will eventually stop using them and finally throw all the work and money out the window…
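
One concrete habit that makes scripts more trustworthy is to wait for explicit conditions with a timeout instead of relying on fixed sleeps. A minimal sketch using the Selenium RC client (the helper name and the polling interval are arbitrary choices for illustration, not a prescribed API):

import com.thoughtworks.selenium.Selenium;

// Waits for an element to appear, polling until a deadline, and fails with a
// clear message instead of failing intermittently on a "lucky" sleep.
public class RobustWaits {

    public static void waitForElement(Selenium selenium, String locator, long timeoutMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (selenium.isElementPresent(locator)) {
                return; // condition met: continue the test immediately
            }
            try {
                Thread.sleep(250); // short poll instead of one long blind sleep
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        throw new AssertionError("Timed out after " + timeoutMillis
                + " ms waiting for element: " + locator);
    }
}

With a helper like this, a flaky Thread.sleep(5000) followed by a click becomes waitForElement(selenium, "id=submit", 10000) followed by the click, which fails loudly with a meaningful message instead of failing at random.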

We don’t know what is covered and what is not covered by the automated tests

This is another big reason why people waste time running manual tests that are already automated: they are simply not aware of which scenarios are included in their automation suite and which aren’t. In this situation they decide, based on their best judgement, to assume that “nothing is automated” and so run their manual test cases as if there were no automation.
If this is the case, then why do these companies have automation teams in the first place?

The automated tests are the responsibility of another team

Now for the interesting question: how come a test team “doesn’t know” which scenarios are automated and which aren’t? The most common answer is that the tests are being written by a completely different team, a team of automation engineers, that is completely separate from the one running the manual tests.
Having two test teams, one manual and one automated, is not a bad thing, and in many cases it will be the best approach to achieve effective and trustworthy automation. The bad thing is that these teams can sometimes be completely disconnected, and so work on the same project without communicating and cooperating as they should.

I will talk about how to communicate and cooperate in a future post, but the point here is that when you have two teams (one automated and one manual) you need to make an extra effort to make sure both teams are coordinated, and, as a minimum, that each of them knows what the other is doing in order to plan accordingly.

We want to have all the results in a single place to give good reports

Finally, I wanted to mention a reason that was brought up by a number of test managers; even though they brought it up as a difficulty and not a show stopper, it came up many times and so it sounded interesting enough to mention: they needed to provide a unified testing report for their project, and to do this they either ran part of their tests manually or created manual tests to reflect the results of their automation.
Again, this looks like a simple and “relatively cheap” way of coordinating the process and even producing a unified report, but it suffers from being a repetitive manual job that needs to be done even after you already have an automation infrastructure. Slowly but surely (especially as more and more automation is added), it will run into coordination and maintenance issues that will make it more expensive and, in some cases, render it misleading or even obsolete.

What’s your take?

I am actively looking for more issues, experiences or comments like the ones above that revolve around the challenges of manual and automated testing. Do you have stuff you want to share? Please add it in the comments or mail me directly at joel-at-practitest-com.
We’ve been working on a solution for these types of issues, and we are looking for all the input we can get in order to make sure it provides an answer to as many of the existing challenges as possible. I will be grateful for any help you can provide!

Oct 16, 2012

Selenium RC: How to Upload and Submit Files Using Selenium and AutoIt

Many web applications contain HTML forms which interact with the user. The user is able to enter data into the form fields and submit the data to a server. Selenium is able to handle form inputs such as entering text in text fields, selecting radio buttons, selecting a value from a drop-down menu and so on. Things get a little tricky when dealing with input form fields of type “file”, which allow users to browse for a file, attach it to the HTML form and submit it.

So, what is the problem?
I can’t seem to use Selenium Core to upload a file; when I try to type in the file upload text field, nothing happens!

There seem to be two inter-related problems:

1 – Unfortunately, this is yet another JavaScript security restriction; JS is not allowed to modify the value of input type=”file” form fields. You can work around this by running your tests under Selenium IDE or under Selenium RC running in the experimental “*chrome” mode for Firefox, but at present there is no straightforward way to do this in Selenium Core.

2 – Handling of the “Choose File” dialog box with Selenium alone is not possible. We need to have another program running to select the path and file from the “Choose File” dialog box.

So, How can we upload files?
Fortunately, there is a workaround to the above problems. This is where the combination of AutoIt and Selenium can work wonders!

First we will write a simple script in AutoIt and save it as an executable (please read the documentation on the AutoIt website to learn how to compile scripts into an executable file):

WinWaitActive("Choose file")
Send("C:\attach\samplefile.txt") ; location of the file you want to attach to the form and submit
Send("{ENTER}")


We shall name the above executable attachFile.exe.

Now, from Java code, we can run a process which will execute the above program just before we want to upload and submit the file:

package com.company;

import java.io.IOException;
import com.thoughtworks.selenium.Selenium;

public class AddAttachment {
    public static void attach(Selenium selenium, String fileName) {
        try {
            // location of the AutoIt executable
            String[] commands = new String[]{"c:\\test\\attachFile.exe"};
            Runtime.getRuntime().exec(commands);
        } catch (IOException e) {
            e.printStackTrace();
        }

        // the AutoIt executable is now waiting for a "Choose file" dialog to pop up
        selenium.click("name=browseButton");
        // once the "Choose file" dialog is opened, AutoIt will input the path and file name
    }
}
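
For completeness, here is a hedged usage sketch; the server settings, application URL, submit-button locator and test wiring are assumptions for illustration. Note that the fileName argument is effectively ignored by the helper above, because the actual path is compiled into attachFile.exe:

import com.company.AddAttachment;
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class UploadExample {
    public static void main(String[] args) {
        // Assumed Selenium server, browser and application URL.
        Selenium selenium = new DefaultSelenium("localhost", 4444, "*chrome", "http://localhost:8080/");
        selenium.start();
        selenium.open("/upload-form");                 // assumed page containing the file input

        // Starts attachFile.exe and clicks the browse button; AutoIt fills in the dialog.
        AddAttachment.attach(selenium, "C:\\attach\\samplefile.txt");

        selenium.click("name=submitButton");           // assumed locator for the form's submit button
        selenium.waitForPageToLoad("30000");
        selenium.stop();
    }
}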


The above seems to be the easiest way to deal with file uploads and attachments using Selenium.

Requirements and Specifications

The Importance of Requirements and Specifications

Behind any concerted effort to build, launch, or maintain a web site is probably an idea or concept of what the site’s leadership or company executives want done. Behind any rational web effort should be a formal structure and methodology known as a project plan. Project planning is a technique now common to information technology and media work (I mention project plans and planning only in passing here — this topic deserves a deeper treatment that is beyond the scope of this particular essay).

Most web site projects include a body of information that describes the product or output of the project’s work effort; this information deals with the objectives of the final product, defined in the project requirements, and any rules for creating the product, defined in the project specifications.
Requirements Define Necessary Objectives

Any coherent and reasonable project must have requirements that define what the project is ultimately supposed to do. According to Kaner et al. in Testing Computer Software


A requirement is an objective that must be met. Planners cast most requirements in functional terms, leaving design and implementation details to the developers. They may specify price, performance, and reliability objectives in fine detail, along with some aspects of the user interface. Sometimes, they describe their objectives more precisely than realistically (p 32).

There are actually several kinds of requirements; the term requirement is awkward because it describes the concept of an objective or goal or necessary characteristic, but at the same time the term also describes a kind of formal documentation, namely the requirements document. Putting aside the particular document for now, requirements are instructions describing what functions the software is supposed to provide, what characteristics the software is supposed to have, and what goals the software is supposed to meet or to enable users to meet.

I prefer to use the term requirements to refer to the general set of documents that describe what a project is supposed to accomplish and how the project is supposed to be created and implemented. Such a general set of requirements would include documents spelling out the various requirements for the project — the “what” — as well as specifications documents spelling out the rules for creating and developing the project — the “how”.

Project requirements provide an obvious tool for evaluating the quality of a project, because a final review should examine whether each requirement has been met. Unfortunately, it’s never quite that easy. Requirements tend to change through the course of a project, with the result that the product as delivered may not adhere to the available requirements — this is a constant and annoying facet to the quality assurance process. Moreover, meeting all of the requirements doesn’t ensure a quality product, per se, since the requirements may not have been defined with an eye towards the quality of the end-user’s experience. A project’s specifications are more useful for determining the product’s quality.
Specifications Define How to Meet The Objectives

A specification is literally the discussion of a specific point or issue; it’s hard in this instance to avoid the circular reference. A project’s specifications consist of the body of information that should guide the project developers, engineers, and designers through the work of creating the software.

A specification document describes how something is supposed to be done. This document may be very detailed, defining the minutia of the implementation; for example, a specifications document may list out all of the possible error states for a certain form, along with all of the error messages that should be displayed to the user. The specifications may describe the steps of any functional interaction, and the order in which they should be followed by the user. A requirements document, on the other hand, would state that the software must handle error states reasonably and effectively, and provide explicit feedback to the users. The specifications show how to meet this requirement.

Specifications may take several forms. They can be a straightforward listing of functional attributes, they can be diagrams or schematics of functional relationships or flow logic, or they can occupy some middle ground. Specifications can also be in the form of prototypes, mockups, and models.

Project specifications are much more important for determining the quality of the product. Every rule and functional relationship provides a test point. Adherence to specification is not a perfect measure, however. Again according to Kaner et al.,


A mismatch between the program and its specification is an error in the program if and only if the specification exists and is correct. A program that follows a terrible specification perfectly is terrible, not perfect (p 60).

A critical part of the quality assurance role is proactive involvement during the project requirements analysis and specification phases, where the rational and customer-centered point of view of the QA analyst can be applied to the project’s rules before any code is written. The return on investment (ROI) of this up-front QA involvement has been shown to pay off: several studies have determined (and common sense supports) that companies pay less to fix problems that are found early in the project cycle. The time when the requirements and specifications are being hammered out is the ideal time to head off problems.

Kaner et al. list 6 test points to be covered when reviewing requirements and specifications, described here briefly:

Are these the “right” requirements?
Are they complete?
Are they compatible?
Are they achievable?
Are they reasonable?
Are they testable?
Examples of Requirements and Specifications Documentation

The following list describes the various kinds of formal documents that belong to the body of requirements and specifications documentation. These are not all mandatory for each and every software project, but they all provide important information to the developers, designers and engineers tasked with implementing a project, and to the quality assurance people and testers responsible for evaluating the implementation of the project. These topics may also be combined as sections of larger, inclusive requirements and specifications documents.
User Requirements

User requirements typically describe the needs, goals, and tasks of the user. I say “typically” here because often these user requirements don’t reflect the actual person who will be using the software; projects are often tailored to the needs of the project requestor, and not the end-user of the software. I strongly recommend that any user requirements document define and describe the end-user, and that any measurements of quality or success be taken with respect to that end-user.

User requirements are usually defined after the completion of task analysis, the examination of the tasks and goals of the end-user.
System Requirements

The term system requirements has two meanings. First, it can refer to the requirements that describe the capabilities of the system with which, through which, and on which the product will function. For example, the web site may need to run on a dual processor box, and may need to have the latest brandX database software.

Second, it can refer to the requirements that describe the product itself, with the meaning that the product is a system. This second meaning is used by the authors of Constructing Superior Software (part of the Software Quality Institute Series):


There are two categories of system requirements. Functional requirements specify what the system must do. User requirements specify the acceptable level of user performance and satisfaction with the system (p 64).

For this second meaning, I prefer to use the more general term “requirements and specifications” over the more opaque “system requirements”.
Functional Requirements

Functional requirements describe what the software or web site is supposed to do by defining functions and high-level logic.

In many cases, if the user requirements are written for the requestor and not the end-user, the user requirements are combined with the functional requirements; this is common within companies that have a strong Information Technology department that is tasked with doing the work.
Functional Specifications

Functional specifications describe the necessary functions at the level of units and components; these specifications are typically used to build the system exclusive of the user interface.

With respect to a web site, a unit is the design for a specific page or category of page, and the functional specification would detail the functional elements of that page or page type. For example, the design for the page may require the following functions: email submission form, search form, context-sensitive navigation elements, logic to drop and/or read a client-side cookie, etc. These aren’t “look” issues so much as they are “functionality” issues. A component is a set of page states or closely related forms of a page. For example, a component might include a page that has a submission form, the acknowledgement page (i.e., “thanks for submitting”), and the various error states (i.e., “you must include your email address”, “you must fill in all required fields”, etc.).

The functional specifications document might have implications for the design of the user interface, but these implications are typically superseded by a formal design specification and/or prototype.
Design Specifications

The design specifications address the “look and feel” of the interface, with rules for the display of global and particular elements.
Flow or Logic Diagram

Flow diagrams define the end-user’s paths through the site and site functionality. A flow diagram for a commerce site would detail the sequence of pages necessary to gather the information required by the commerce application in order to complete an order.

Logic diagrams describe the order that logic decisions are made during the transmission, gathering, or testing of data. So for example, upon submission of a form, information may be reviewed by the system for field completeness before being reviewed for algorithmic accuracy; in other words, the system may verify that required fields have in fact been completed before verifying that the format of the email address is correct or the credit card number is an algorithmically valid number. Another example would be the logic applied to a search query, detailing the steps involved in the query cleanup and expansion, and the application of Boolean operators.
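
To make the order of logic decisions concrete, here is a hedged sketch of the layered validation described above (the field names, messages and rules are invented; the standard Luhn checksum stands in for “algorithmic accuracy”):

// Illustrative only: validates a payment form in the order a logic diagram
// might specify - required fields first, then formats, then algorithmic checks.
public class PaymentFormValidator {

    // Returns an error message, or null when every layer of validation passes.
    public static String validate(String email, String cardNumber) {
        // 1. Completeness: are the required fields filled in at all?
        if (email == null || email.trim().isEmpty()) return "Email address is required.";
        if (cardNumber == null || cardNumber.trim().isEmpty()) return "Card number is required.";

        // 2. Format: does each field look right?
        if (!email.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+")) return "Email address format is invalid.";
        String digits = cardNumber.replaceAll("\\s", "");
        if (!digits.matches("\\d{13,19}")) return "Card number must be 13-19 digits.";

        // 3. Algorithmic validity: Luhn checksum of the card number.
        int sum = 0;
        boolean doubleIt = false;
        for (int i = digits.length() - 1; i >= 0; i--) {
            int d = digits.charAt(i) - '0';
            if (doubleIt) { d *= 2; if (d > 9) d -= 9; }
            sum += d;
            doubleIt = !doubleIt;
        }
        if (sum % 10 != 0) return "Card number is not valid.";

        return null;
    }
}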
System Architecture Diagram

A system architecture diagram illustrates the way the system hardware and software must be configured, and the way the database tables should be defined and laid out.
Prototypes and Mock-ups

A prototype is a model of the system delivered in the medium of the system. For example, a web site prototype would be delivered as a web site, using the standard web protocols, so that it could be interacted with in the same medium as the project’s product. Prototypes don’t have to be fully functioning; they merely have to be illustrative of what the product should look and feel like. In contrast, a mock-up is a representation in a different medium. A web site mock-up might be a paper representation of what the pages should look like.

The authors of Constructing Superior Software describe several categories of prototypes: low fidelity prototypes which correspond to what I’ve labeled “mock-ups”, and high fidelity prototypes.


Low fidelity prototypes are limited function and limited interaction prototypes. They are constructed to depict concepts, design alternatives, and screen layouts rather than to model the user interaction with the system….There are two forms of low fidelity prototype: abstract and concrete….The visual designer works from the abstract prototype and produces drawings of the interface as a concrete low fidelity prototype….High fidelity prototypes are fully interactive (p 70-71).

Prototypes and mock-ups are important tools for defining the visual design, but they can be problematic from a quality assurance and testing point of view because they are a representation of a designer’s idea of what the product should look and feel like. The issue is not that the designers may design incorrectly, but that the prototype or mock-up will become the de facto design by virtue of being a representation. The danger is that the design will become final before it has been approved; this is known as “premature concretization” or “premature crispness of representation”, where a sample becomes the final design without a formal decision. If you have ever tried to get a page element removed from a design, you have an idea what this problem is like. The value of prototypes is that they provide a visual dimension to the written requirements and specifications; they are both a proof of concept and the designers’ sketchpad wrapped up in one package.
Technical Specifications

Technical specifications are typically written by the developers and coders, and describe how they will implement the project. The developers work from the functional specifications and translate the functions into their actual coding practices and methodologies.

Building a Test Suite

A web site is designed and built for an audience, and much attention should be paid to the process of understanding that audience. Any testing of a web site should proceed from one obvious point: the people on the web site teams – whether they are designers, programmers, marketers, or testers – are not typical users.

Just as these folks aren’t typical users, any computer they use for their daily tasks is unlikely to represent the computers used by true end-users. Realistic testing of a web site requires test environments that match – as closely as reasonably possible – computers used by the site’s audience(s).
The Need for a Test Suite

A test suite is a set of machines configured as platforms for testing a web site; these machines should represent the client-side environments of the majority of the site’s audience. A test lab on the other hand would be a specially equipped and designed facility or space used for testing, specifically usability testing. Test labs allow for the monitoring of users acting as test subjects. Of course, with that distinction made, any computers set aside for testing are going to be used as an ad hoc test lab.

Testing a web site requires an understanding of what and how to test. You must know what you are going to test: what functionality, what logic, what scenario, what user conditions, etc. You must also know how you are going to test: which tests, which tools, what methodology. And just as important, you must know where you are going to run your tests: what machines, what configurations, what connectivity, etc. This last point is what the test suite addresses.

Any computer that is not dedicated solely to testing — like a designer’s Mac or a tester’s personal desktop — is going to be compromised as a test environment, unless your site’s targeted audience includes people exactly like you. This is a difficult path to follow, believing that a personal desktop is adequate for testing, because this path takes you away from what you can qualitatively show to be your audience’s typical system and configuration.
The Goals for a Test Suite

A test suite can yield several benefits to the production and testing process, stated here as a set of goals:
Provide “clean” environments for testing platform/browser compatibility for pages in development and in production, allowing a more objective view of what standard configurations would encounter.
Provide an increase in productivity by providing a means to rapidly test and review prototypes on all common browsers and platforms.
Provide environments for testing network connectivity and performance to the production servers over the open net (as opposed to testing over a “back door” LAN connection). This would duplicate the connections experienced by end-users.
Provide a “lab” for usability testing. This assumes that the test suite will be located within a space that allows for most of the machines to be in use at the same time, and in a way that allows for some level of observations of the users.
Designing the Test Suite

A well-designed test suite should address the following points:
The browsers most likely to be used by your audience. This information comes from research on commonly available browsers combined with analysis of your site’s logs.
The platforms most likely to be used by your audience. This information comes from research on commonly available platforms combined with analysis of your site’s logs.
The ways in which different browsers and platforms interact. Some browsers and versions of browsers don’t play well together; for example, different levels of Microsoft’s Internet Explorer won’t install correctly on the same Windows 95 environment. Browsers also render HTML differently and handle plug-ins differently depending on the operating system. Design a test suite that correctly handles these behaviors and interactions.
The relative importance of certain user profiles. Some sets of users may have platform/browsers needs that are especially important to your site’s business goals, so it makes sense to prioritize the accommodation of these users.
The budget for testing. Less money available to spend on test machines translates to less specialization of the test environments. It is possible to use a few machines with multiple, bootable operating system environments, but managing these environments requires time and attention resources. In addition, the more complex the maintenance of a test machine, the harder it is to create realistic usability test environments.
Basic Test Suite For a Mainstream Commerce Site

Most commerce sites will have approximately the same requirements for test suite environments, mostly because the range of client-side environments is dominated by a rather small set of machine types, operating systems, and browsers. As soon as a site starts drawing increased proportions of its user base from people using palm-top browsers as platforms (such as Palm Pilots), WebTVs, or browsing-enabled kitchen appliances, then this sample suite would need to be revised.

As the test suite machines get older, and as newer configurations and generations of hardware and software become commonly available, these boxes should shift downward and newer boxes should be added.

These machine configurations were chosen as being the best general match for a particular set of users; the identification of the set is based on a range of characteristics, extending from ubiquity of the machine to the sociological tendency for adopting new technology.

It is certainly possible to create dual-boot machines, allowing for multiple operating systems on one box and requiring less space for these machines. Using one box for more than one operating system means that in the event that any one machine dies, multiple test environments are lost. The added convenience of a space or hardware cost savings must be balanced against any potential chance of decreased stability on a test machine; in my experience, the test phase of any project often becomes fixed, so there is no time to repair a downed machine during critical test phases. Along these lines, every machine should have a CD-ROM drive and a SCSI card for facilitating restores from a CD or from some other SCSI peripheral.
1. Archaic PC

This PC reflects a 16-bit environment. As a test environment, this machine will only be useful for a few more years, as Windows 3.1 is steadily dropping in usage; I would expect this box to be re-born as a Linux box pretty soon.
Intel PC, can be 386 or 486 (486 would remain useful longer, but 386 is probably cheaper)
Windows 3.1 OS
Internet Explorer 16 bit engine
AOL 16 bit engine
Netscape Navigator 16 bit engine
Lynx 16 bit engine
Opera 16 bit engine
28K modem
analog line
14″ monitor (or, this box could share a monitor with the “Older Mainstream PC” by means of a switchbox)
2. Older Mainstream PC
Intel PC, can be a 486 or P166 Pentium
Windows 95 OS
Internet Explorer 3.* engine
AOL 3.*
Netscape Navigator 3.04
Lynx
Opera
56K modem
analog line
sound card + speakers
15″ monitor
3. Early Adopter Platform

The “Early Adopter Platform” and the “Corporate Desktop” environments can be based on the same hardware; the more machines based on the same model computer, the easier the maintenance tasks.
Intel PC Pentium II, at least 64 Megs RAM
Windows 98
Internet Explorer 4.* engine
AOL 4.*
Netscape Navigator 4.*
Opera
special use browsers (screenreader, TBD)
56k modem
analog line
sound card + speakers
17″ monitor
4. Corporate Desktop Platform

The “Early Adopter Platform” and the “Corporate Desktop” environments can be based on the same hardware; the more machines based on the same model computer, the easier the maintenance tasks.
Intel PC Pentium II
Windows NT 4.* Workstation
Internet Explorer 4.* engine
Netscape Navigator 4.*
56k modem
analog line
17″ monitor
5. Mainstream Mac Platform
Mac PowerPC 2-3 years old
Internet Explorer 3.* engine
AOL 3.*
Netscape Navigator 3.*
56k modem
analog line
sound card + speakers
6. New Mac User
iMac
Internet Explorer 4.* engine
AOL 4.*
Netscape Navigator 4.*
56K modem (standard)
analog line
sound card + speakers (standard)
7. New UNIX User

I’m still working out the specification for this environment, but it is basically supposed to represent the new converts to UNIX, those people who load Linux onto a PC.
Intel PC
Internet Explorer 4.* engine
AOL 4.*
Netscape Navigator 4.*
56K modem (standard)
analog line
8. Experienced UNIX User

In contrast to the new UNIX user, the experienced user is running some flavor of UNIX on a non-PC box, say a SunSparc or an SGI workstation. I’m still trying to figure this one out…
9. WebTV
television
cable connection (analog line?)
WebTV receiver

Testing Without a Formal Test Plan

A formal test plan is a document that provides and records important information about a test project, for example:
project and quality assumptions
project background information
resources
schedule & timeline
entry and exit criteria
test milestones
tests to be performed
use cases and/or test cases

For a range of reasons — both good and bad — many software and web development projects don’t budget enough time for complete and comprehensive testing. A quality test team must be able to test a product or system quickly and constructively in order to provide some value to the project. This essay describes how to test a web site or application in the absence of a detailed test plan and facing short or unreasonable deadlines.
Identify High-Level Functions First

High-level functions are those functions that are most important to the central purpose(s) of the site or application. A test plan would typically provide a breakdown of an application’s functional groups as defined by the developers; for example, the functional groups of a commerce web site might be defined as shopping cart application, address book, registration/user information, order submission, search, and online customer service chat. If this site’s purpose is to sell goods online, then you have a quick-and-dirty prioritization of:
shopping cart
registration/user information
order submission
address book
search
online customer service chat

I’ve prioritized these functions according to their significance to a user’s ability to complete a transaction. I’ve ignored some of the lower-level functions for now, such as the modify shopping cart quantity and edit saved address functions because they are a little less important than the higher-level functions from a test point-of-view at the beginning of testing.

Your opinion of the prioritization may disagree with mine, but the point here is that time is critical and in the absence of defined priorities in a test plan, you must test something now. You will make mistakes, and you will find yourself making changes once testing has started, but you need to determine your test direction as soon as possible.
Test Functions Before Display

Any web site should be tested for cross-browser and cross-platform compatibility — this is a primary rule of web site quality assurance. However, wait on the compatibility testing until after the site can be verified to just plain work. Test the site’s functionality using a browser/OS/platform that is expected to work correctly — use what the designers and coders use to review their work.

Running through the site or application first with known-good client configurations allows testers to focus on the way the site functions, and allows them to concentrate on the more important class of functional defects and problems early in the test project. Spend time up front identifying and reporting those functional-level defects, and the developers will have more time to effectively fix and iteratively deliver new code levels to QA.

If your test team will not be able to exhaustively test a site or application — and the premise of this essay is that your time is extremely short and you are testing without a formal plan — you must first identify whether the damned thing can work, and then move on from there.
Concentrate on Ideal Path Actions First

Ideal paths are those actions and steps most likely to be performed by users. For example, on a typical commerce site, a user is likely to
identify an item of interest
add that item to the shopping cart
buy it online with a credit card
ship it to himself/herself

Now, this describes what the user would want to do, but many sites require a few more functions, so the user must go through some more steps, for example:
login to an existing registration account (if one exists)
register as a user if no account exists
provide billing & bill-to address information
provide ship-to address information
provide shipping & shipping method information
provide payment information
agree or disagree to receiving site emails and newsletters

Most sites offer (or force) an even wider range of actions on the user:
change product quantity in the shopping cart
remove product from shopping cart
edit user information (or ship-to information or bill-to information)
save default information (like default shipping preferences or credit card information)

All of these actions and steps may be important to some users some of the time (and some developers and marketers all of the time), but the majority of users will not use every function every time. Focus on the ideal path and identify those factors most likely to be used in a majority of user interactions.

Assume a user who knows what s/he wants to do, and so is not going to choose the wrong action for the task they want to complete. Assume the user won’t make common data entry and interface control errors. Assume the user will accept any default form selections: if a checkbox is checked, the user will leave it checked; if a radio button defaults to a meaningful selection, the user will let that ride. This doesn’t mean that defaulted non-values (such as a drop-down menu showing a “select one” value) will be left as-is to force errors. The point here is to keep it simple and lowest-common-denominator and not to force errors. Test as though everything is right in the world, life is beautiful, and your project manager is Candide.

Once the ideal paths have been tested, focus on secondary paths involving the lower-level functions or actions and steps that are less frequent but still reasonable variations.

Forcing errors comes later, if you have time.
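
As a hedged sketch of what an ideal-path check might look like when automated with the Selenium RC client shown earlier in this compilation (the URLs, locators and test data are assumptions, not a description of any particular site):

import static org.junit.Assert.assertTrue;
import org.junit.Test;
import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

// Ideal path only: a registered user buys one item with a saved credit card
// and accepts every default along the way.
public class IdealPathCheckoutTest {

    @Test
    public void registeredUserCanBuyOneItem() {
        Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://localhost:8080/");
        selenium.start();

        selenium.open("/product/12345");                      // identify an item of interest
        selenium.click("id=add-to-cart");                     // add it to the shopping cart
        selenium.waitForPageToLoad("30000");

        selenium.type("id=login-email", "user@example.com");  // log in to an existing account
        selenium.type("id=login-password", "password1");
        selenium.click("id=login-submit");
        selenium.waitForPageToLoad("30000");

        selenium.click("id=checkout");                        // accept default addresses, shipping and saved card
        selenium.waitForPageToLoad("30000");
        selenium.click("id=place-order");
        selenium.waitForPageToLoad("30000");

        assertTrue(selenium.isTextPresent("Thank you for your order"));
        selenium.stop();
    }
}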
Concentrate on Intrinsic Factors First

Intrinsic factors are those factors or characteristics that are part of the system or product being tested. An intrinsic factor is an internal factor. So, for a typical commerce site, the HTML page code that the browser uses to display the shopping cart pages is intrinsic to the site: change the page code and the site itself is changed. The code logic called by a submit button is intrinsic to the site.

Extrinsic factors are external to the site or application. Your crappy computer with only 8 megs of RAM is extrinsic to the site, so your home computer can crash without affecting the commerce site, and adding more memory to your computer doesn’t mean a whit to the commerce site or its functioning.

Given a severe shortage of test time, focus first on factors intrinsic to the site:
does the site work?
do the functions work? (again with the functionality, because it is so basic)
do the links work?
are the files present and accounted for?
are the graphics MIME types correct? (I used to think that this couldn’t be screwed up)

Once the intrinsic factors are squared away, then start on the extrinsic points:
cross-browser and cross-platform compatibility
clients with cookies disabled
clients with javascript disabled
monitor resolution
browser sizing
connection speed differences

The point here is that with myriad possible client configurations and user-defined environmental factors to think about, think first about those that relate to the product or application itself. When you run out of time, better to know that the system works rather than that all monitor resolutions safely render the main pages.
Boundary Test From Reasonable to Extreme

You can’t just verify that an application works correctly if all input and all actions have been correct. People do make mistakes, so you must test error handling and error states. The systematic testing of error handling is called boundary testing (actually, boundary testing describes much more, but this is enough for this discussion).

During your pedal-to-the-floor, no-test-plan testing project, boundary testing refers to the testing of forms and data inputs, starting from known good values, and progressing through reasonable but invalid inputs all the way to known extreme and invalid values.

The logic for boundary testing forms is straightforward: start with known good and valid values because if the system chokes on that, it’s not ready for testing. Move through expected bad values because if those fail, the system isn’t ready for testing. Try reasonable and predictable mistakes because users are likely to make such mistakes — we all screw up on forms eventually. Then start hammering on the form logic with extreme errors and crazy inputs in order to catch problems that might affect the site’s functioning.
Good Values

Enter in data formatted as the interface requires. Include all required fields. Use valid and current information (what “valid and current” means will depend on the test system, so some systems will have a set of data points that are valid for the context of that test system). Do not try to cause errors.
Expected Bad Values

Some invalid data entries are intrinsic to the interface and concept domain. For example, any credit card information form will expect expired credit card dates — and should trap for them. Every form that specifies some fields as required should trap for those fields being left blank. Every form that has drop-down menus that default to an instruction (“select one”, etc.) should trap for that instruction. What about punctuation in name fields?
Reasonable and Predictable Mistakes

People will make some mistakes based on the design of the form, the implementation of the interface, or the interface’s interpretation of the relevant concept domain(s). For example, people will inadvertently enter in trailing or leading spaces into form fields. People might enter a first and middle name into a first name form field (“Mary Jane”).

Not a mistake, per se, but how does the form field handle case? Is the information case-sensitive? Or does the address form handle a PO address? Does the address form handle a business name?
Extreme Errors and Crazy Inputs

And finally, given time, try to kill the form by entering in extreme crap. Test the maximum size of inputs, test long strings of garbage, put numbers in text fields and text in numeric fields.

Everyone’s favorite: enter in HTML code. Put your name in BLINK tags, enter in an IMG tag for a graphic from a competitor’s site.

Enter in characters that have special meaning in a particular OS (I once crashed a server by using characters this way in a form field).

But remember, even if you kill the site with an extreme data input, the priority is handling errors that are more likely to occur. Use your time wisely and proceed from most likely to less likely.
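
A hedged sketch of how this progression can be captured in automated checks (JUnit 4; the EmailField class is a hypothetical stand-in for the form logic under test), moving from known good values through predictable mistakes to extreme inputs:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class EmailFieldBoundaryTest {

    // Hypothetical stand-in for the form's validation logic.
    static class EmailField {
        static boolean isValid(String value) {
            if (value == null) return false;
            String v = value.trim();
            return v.length() <= 254 && v.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+");
        }
    }

    @Test
    public void goodValues() {
        assertTrue(EmailField.isValid("mary.jane@example.com"));
    }

    @Test
    public void expectedBadValues() {
        assertFalse(EmailField.isValid(""));             // required field left blank
        assertFalse(EmailField.isValid("not-an-email")); // missing @ and domain
    }

    @Test
    public void reasonableMistakes() {
        assertTrue(EmailField.isValid("  mary@example.com  ")); // leading/trailing spaces tolerated
        assertTrue(EmailField.isValid("MARY@EXAMPLE.COM"));     // case should not matter
    }

    @Test
    public void extremeInputs() {
        assertFalse(EmailField.isValid("<script>alert('x')</script>")); // markup instead of an address
        StringBuilder longInput = new StringBuilder();
        for (int i = 0; i < 10000; i++) longInput.append('a');
        assertFalse(EmailField.isValid(longInput + "@example.com"));    // oversized input
    }
}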
Compatibility Test From Good to Bad

Once you get to cross-browser and cross-platform compatibility testing, follow the same philosophy of starting with the most important (as defined by prevalence among expected user base) or most common based on prior experience and working towards the less common and less important.

Do not make the assumption that because a site was designed for a previous version of a browser, OS, or platform it will also work on newer releases. Instead, make a list of the browsers and operating systems in order of popularity on the Internet in general, and then move those that are of special importance to your site (or your marketers and/or executives) to the top of the list.

The most important few configurations should be used for functional testing, then start looking for deviations in performance or behavior as you work down the list. When you run out of time, you want to have completed the more important configurations. You can always test those configurations that attract .01 percent of your user base after you launch.
The Drawbacks of This Testing Approach

Many projects are not mature and are not rational (at least from the point of view of the quality assurance team), and so the test team must scramble to test as effectively as possible within a very short time frame. I’ve spelled out how to test quickly without a structured test plan, and this method is much better than chaos and somewhat better than letting the developers tell you what and how to test.

This approach has definite quality implications:
Incomplete functional coverage — this is no way to exercise all of the software’s functions comprehensively.
No risk management — this is no way to measure overall risk issues regarding code coverage and quality metrics. Effective quality assurance measures quality over time and starting from a known base of evaluation.
Too little emphasis on user tasks — because testers will focus on ideal paths instead of real paths. With no time to prepare, ideal paths are defined according to best guesses or developer feedback rather than by careful consideration of how users will understand the system or how users understand real-world analogues to the application tasks. With no time to prepare, testers will be using a very restricted set of input data, rather than using real data (from user activity logs, from logical scenarios, from careful consideration of the concept domain).
Difficulty reproducing — because testers are making up the tests as they go along, reproducing the specific errors found can be difficult, but also reproducing the tests performed will be tough. This will cause problems when trying to measure quality over successive code cycles.
Project management may believe that this approach to testing is good enough — because you can do some good testing by following this process, management may assume that full and structured testing, along with careful test preparation and test results analysis, isn’t necessary. That misapprehension is a very bad sign for the continued quality of any product or web site.
Inefficient over the long term — quality assurance involves a range of tasks and foci. Effective quality assurance programs expand their base of documentation on the product and on the testing process over time, increasing the coverage and granularity of tests over time. Great testing requires good test setup and preparation, but success with the kind of test-plan-less approach described in this essay may reinforce bad project and test methodologies. A continued pattern of quick-and-dirty testing like this is a sign that the product or application is unsustainable in the long run.

Agile Test Plan – Do We Really Need One?

Test planning is an important activity of a testing process and one that requires careful thought and decisions not just from the test manager (who is usually responsible for creating the test plan) but from all members of the testing team and the product development manager. Some people believe that it is the most important part of the testing process (I personally think test designing and abstract thinking is the most important) and spend many hours and much effort coming up with a great test plan. Text books dedicate whole sections to test planning, how to write one and what to include in a test plan, while some governing bodies and regulatory organizations such as the FDA require a comprehensive test plan in order to approve a product.

In the real world, in a waterfall environment, quite often the test plan document is one that is hardly ever looked at during the lifecycle of the product. “Test planning and monitoring” should be an ongoing activity during the project lifecycle, and the plan should be updated as the project changes, but in most cases it is not; the test plan is either not updated or the changes are retrospective, making the test plan document the least valuable by-product.

Whilst a test plan is almost always considered a must-have product in a waterfall project, do we really need one in an agile project? In other words, does it really add any value to what the whole team is trying to achieve?

The agile manifesto clearly favours working software over comprehensive documentation and responding to change over following a plan.

In an agile environment, the contents of a release (the items) are discussed before the sprint, so the testing team knows in advance what the scope is and what should be tested.

In the “planning poker” game, the estimates are discussed, so the testing team knows how long it will take to test a feature (inclusive of environment setup, scenarios, automation, exploratory testing, performance, etc.).

In the “story writing session”, where the details of each feature are thought through, the test team is already beginning to write scenarios to cover the many ways the stories can be tested – this is the most valuable activity of the team.

During the sprint, QA is continuously testing new code and features. Test planning becomes a dynamic activity as the priorities for the day change. Testing is based on the activity for the day and the outcome of the day before.

It is clearly evident that a test plan doesn’t reveal defects, but test scenarios will. The effort needs to be shifted to creating better scenarios rather than to creating a test plan. What is really needed is a short test strategy outlining the processes applicable across sprints, i.e. sections about sprint planning, specification workshops, manual QA, automation, test coverage, test reporting, test environments, staging, etc. These are processes and activities applicable to every sprint, but of course derived from the company’s vision.

So, with all this in mind, is the Test Plan document or extensive Test Strategies really a thing of the past? Do we really need them for Agile Projects?

-- Amit, QA Excellence