Dec 4, 2013

10 best practices for an effective testing & QA implementation


After years of polishing and fine-tuning a division-wide testing & QA effort, we thought it would be a good idea to share the top ten lessons we have learned and now consider key to a successful outcome. We hope you find this short list useful as a source of validation or ideas.

1) Process: It is critical that the organization define a robust process, certified by experts, to initiate the software quality assurance culture. The process will serve as a guideline that may evolve over time. Most importantly, it should be made official and followed consistently. Improvements will be made until a mature process is established.

2) Managerial Commitment: Managerial commitment should stem from the CIO to ensure alignment from each of the development managers, as well as from the development areas of each country. Everyone must be aware of the value that testing & QA adds to the business, and the process must therefore make visible the value of the solutions it offers to the organization.

3) Personnel Experience: Hiring someone as a tester who lacks the necessary experience is a common mistake. It is vital to acknowledge that the position requires experience both in the business and in software development in general.

4) Deliverables: As part of the software development and testing processes, it is necessary to define deliverables such as requirements, a testing plan, and test cases. These ensure that testers can effectively follow up throughout the project from the software quality perspective.

5) Tool Usage: Tools for tracking and managing defects, as well as for creating and executing test cases, are essential for increasing the maturity of the testing & QA process. The process may begin without tools, but they are a prerequisite for increasing execution maturity.

6) Metrics: Developing metrics to track software quality in its current state, and to compare improvement against previous versions, will help increase the value and maturity of the testing process (e.g. the number of components with errors divided by the total number of components in the software, or the number of errors detected in the testing phase divided by the total number of errors detected). A small sketch of these two ratios appears after this list.

7) Testing Environment: Implementing appropriate testing environments that allow developers to reproduce the conditions of the production environment is crucial to creating and executing the corresponding test cases.

8) Test Data: The testing environment should provide, or ensure access to, the data needed for day-to-day test execution. Even with the appropriate testing environments in place, developers need access to the specific data required to execute the associated test cases.

9) Change Management: Like any production environment, the testing environment should properly track configuration changes, ensuring not only controlled results but also that tests are run in environments that closely resemble the real production environment.

10) Developer Awareness: It is critical to have an awareness process that includes management commitment at each and every business unit and for associated developers. The goal is to demonstrate that testing activities add value to their daily work.
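To make point 6 concrete, here is a minimal sketch in Python of the two example ratios; all of the counts are invented for illustration and would come from your defect tracker in practice.

```python
# Minimal sketch of the two ratios from point 6; the counts are illustrative only.
components_total = 120        # total number of components in the software
components_with_errors = 18   # components in which at least one error was found

errors_in_testing_phase = 45  # errors detected during the testing phase
errors_detected_total = 50    # all errors detected (testing phase + later)

component_error_ratio = components_with_errors / components_total
testing_detection_ratio = errors_in_testing_phase / errors_detected_total

print(f"Components with errors:       {component_error_ratio:.1%}")
print(f"Errors caught during testing: {testing_detection_ratio:.1%}")
```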

Nov 25, 2013

Stop Doing Too Much Automation

When researching my article on testing careers for the Testing Planet, it struck me how many respondents indicated that ‘test’ automation was one of their main learning goals. This made me think a little about how our craft appears to be going down a path where automation is treated as the magic bullet that can resolve all the issues we have in testing.
I have had the idea for this article floating around in my head for a while now, and the final push came when I saw the article by Alan Page (Tooth of the Angry Weasel), Last Word on the A Word, in which he said much of what I was thinking. So how can I expand on what I feel is a great article by Alan?
The part of the article that I found the most interesting was the following:

“..In fact, one of the touted benefits of automation is repeatability – but no user executes the same tasks over and over the exact same way, so writing a bunch of automated tasks to do the same is often silly.”

This is similar to what I want to write about in this article. I see, time and time again, dashboards and metrics being shared around stating that by running this automated ‘test’ 1 million times we have saved a tester from running it manually 1 million times; therefore, if the ‘test’ took 1 hour and 1 minute to run manually and 1 minute to run automated, we have saved 1 million hours of testing. This is very tempting to a business that speaks in terms of value, which in this context means cost. Saving 1 million hours of testing by automating is a significant cost saving, and this is the kind of tangible measure a business likes to see as ROI (Return on Investment) for doing ‘test’ automation. Worryingly, this is how some companies sell their ‘test’ automation tools.

If we step back for a minute and re-read the statement by Alan: the thing that most people who state we should automate all testing talk about is the repeatability factor. Now let us really think about this. When you run a test script manually you do more than what is written down in the script. You think both critically and creatively, and you observe things far from the beaten track of where the script was telling you to go. Computers see in assertions: true or false, black or white, 0 or 1; they cannot see what they are not told to see. Even with the advances in artificial intelligence it is very difficult for automation systems to ‘check’ more than they have been told to check. To really test, and test well, you need a human being with the ability to think and observe. Going back to our million-times example: if we ran the same test a million times on a piece of code that has not changed, the chances of finding NEW issues or problems remain very slim; running it manually with a different person each time, our chances of finding issues or problems increase. I am aware our costs also increase and that there is a point of diminishing returns. James Lyndsay has talked about this on his blog, in which he discusses the importance of diversity. The article also has a very clever visual aid to demonstrate why diversity is important and, as a side effect, it helps to highlight the points of diminishing return. This is the area the business needs to focus on rather than how many times you have run a test.

My other concern is the use of metrics in automation to indicate how many of your tests you have automated or could automate. How many of you have been asked this question? The problem I see is what people mean by “how many of your tests?” What is this question based upon? Is it...
  • all the tests that you know about now?
  • all possible tests you could run?
  • all tests you plan to run?
  • all your priority one tests?
The issue is that this is a number that will constantly change as you explore and test the system and learn more. Therefore, if you start reporting it as a metric, especially as a percentage, it soon becomes a non-valuable measure which costs more to collect and collate than any benefit it may try to imply. I like to use the following example as an extreme view.

Manager:  Can you provide me with the % of all the possible tests you could run for system X that you could automate?
Me:  Are you sure you mean all possible tests?
Manager: Yes
Me: Ok, easy it is 0%
Manager:  ?????

Most people are aware that testing can involve an infinite number of tests, even for the simplest of systems, so any number divided by infinity will be close to zero; hence the answer provided in the example scenario above. Others could argue that we only care about how much of what we have planned, or only the high-priority stuff, can be automated, and that is OK to a point, but be careful about measuring this percentage since it can and will vary up or down and this can cause confusion. As we test we find new stuff, and as we find new stuff our number of things to test increases.

My final worry with ‘test’ automation is the amount of ‘test’ automation we are doing (hence the title of this article). I have seen cases where people automate for the sake of automation, because that is what they have been told to do. This links in with the previous point about measuring the tests that can be automated. There needs to be some intelligence when deciding what to automate and, more importantly, what not to automate. The problem is that when we are measured by the number of ‘tests’ we can automate, human nature will start to act in a way that makes us look good against what we are being measured on. There are major problems with this: people stop thinking about what would be the best automation solution and concentrate on trying to automate as much as they can, regardless of cost.

What! You did not realise that automation has a cost? One of the common problems I see when people sell ‘test’ automation is that they conveniently or otherwise forget to include its hidden costs. We always see figures for the amount of testing time (and money) saved by running this set of ‘tests’ each time. What does not get reported, and is very rarely measured, is the amount of time spent maintaining and analysing the results from ‘test’ automation. This is important since this is time a tester could be spending doing some testing and finding new information rather than confirming existing expectations. This appears to be missing whenever I hear people talking about ‘test’ automation in a positive way. What I see is a race to automate all that can be automated, regardless of the cost to maintain it.

If you are looking at implementing test automation you seriously need to think about what the purpose of the automation is. I would suggest you do ‘just enough’ automation to give you confidence that the product appears to work in the way your customer expects. This level of automation then frees up your testers to do some actual testing, or to create automation tools that can aid testing. You need to stop doing too much automation and look at ways to make your ‘test’ automation effective and efficient without it becoming a bloated, cumbersome, hard-to-maintain monstrosity (does that describe some people's current automation system?). Also, automation is mainly code, so it should be treated the same as code and be regularly reviewed and refactored to reduce duplication and waste.

I am not against automation at all, and in my daily job I encourage and support people to use automation to help them do excellent testing. I feel it plays a vital role as a tool to SUPPORT testing; it should NOT be sold on the premise that it can replace testing or thinking testers.

Some observant readers may wonder why I write ‘test’ in this way when mentioning ‘test’ automation.  My reasons for this can be found in the article by James Bach on testing vs. checking refined.
 
Resource: Stevo

How to measure and analyze testing efficiency?

Measurements, metrics, and stats are common terms you will hear in every management meeting. What exactly do they mean? A bunch of numbers? Hyped-up colorful graphs? Why should I bother about those? Well, right from pre-KG education to K12 to college, we are used to something called "marks". These are a set of numbers that reflect the academic capabilities of students (note: marks are not the real indicators of IQ levels). These marks in turn help academic institutions select candidates, along with other parameters. In the same way, if one wants to measure the extent of testing, what are the numbers one must look at?

Fundamental rule: do not complicate the numbers. The more you complicate them, the more confusion there will be. We need to start with some basic numbers that reflect the speed, coverage, and efficiency of testing. If all these indicators move up, we can be confident that testing efficiency is getting better and better.

Every number collected from projects helps to fine-tune the processes in that project and across the company itself.

Test planning rate (TPR). TPR = total number of test cases planned / total person-hours spent on planning. This number indicates how fast the testing team thinks through, articulates, and documents the tests.

Test execution rate (TER). TER = total number of test cases executed / total person-hours spent on execution. This indicates the speed of the testers in executing those tests.

Requirements coverage (RC). The ideal goal is 100% coverage, but it is very tough to say how many test cases will cover 100% of the requirements. There is, however, a simple range you can assume. If we test each requirement in just 2 different ways, 1 positive and 1 negative, we need 2N test cases, where N is the number of distinct requirements. On average, most commercial application requirements can be covered with 8N test cases. So the chances of achieving 100% coverage are high if you try to test every requirement in 8 different ways; not all requirements may need an eight-way approach.
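As a rough illustration of the 2N / 8N rule of thumb, here is a tiny sketch assuming a hypothetical count of 50 distinct requirements:

```python
# Test-case budget from the 2N / 8N rule of thumb; N is an illustrative value.
N = 50                    # distinct requirements
minimum_cases = 2 * N     # 1 positive + 1 negative per requirement -> 100
typical_cases = 8 * N     # ~8 ways per requirement for commercial apps -> 400

print(f"Minimum test cases: {minimum_cases}")
print(f"Typical test cases: {typical_cases}")
```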

Planning Miss (PM). PM = number of ad hoc test cases framed at the time of execution / number of test cases planned before execution. This indicates whether the testers are able to plan the tests from the documentation and their level of understanding. This number must be as low as possible, though it is very difficult to reach zero.

Bug Dispute Rate (BDR). BDR = number of bugs rejected by the development team / total number of bugs posted by the testing team. A high number here leads to unwanted arguments between the two teams.
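Here is a minimal sketch of the four ratios defined above (TPR, TER, PM, BDR); the figures are invented and would normally come from your test management tool.

```python
# Illustrative project figures; substitute your own numbers.
planned_cases     = 240   # test cases planned before execution
planning_hours    = 60    # person-hours spent on planning
executed_cases    = 300   # test cases executed
execution_hours   = 100   # person-hours spent on execution
adhoc_cases       = 30    # ad hoc cases framed during execution
rejected_bugs     = 12    # bugs rejected by the development team
total_bugs_posted = 150   # total bugs posted by the testing team

tpr = planned_cases / planning_hours     # Test Planning Rate
ter = executed_cases / execution_hours   # Test Execution Rate
pm  = adhoc_cases / planned_cases        # Planning Miss
bdr = rejected_bugs / total_bugs_posted  # Bug Dispute Rate

print(f"TPR = {tpr:.1f} cases planned per person-hour")
print(f"TER = {ter:.1f} cases executed per person-hour")
print(f"PM  = {pm:.1%} ad hoc additions relative to the plan")
print(f"BDR = {bdr:.1%} of posted bugs disputed")
```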

There is a set of metrics that reflect the efficiency of the development team, based on the bugs found by the testing team. Those metrics do not really reflect the efficiency of the testing team, but without the testing team they cannot be calculated. Here are a few of them.

Bug Fix Rate (BFR). BFR = total number of hours spent on fixing bugs / total number of bugs fixed by the dev team. This indicates the average effort per fix, and hence how quickly developers fix bugs (lower is better).

Number of re-opened bugs. This absolute number indicates how many potential bad fixes or regression effects the development team has injected into the application. The ideal goal is zero.

Bug Bounce Chart (BBC). The BBC is not just a number but a line chart. On the X axis we plot the build numbers in sequence; the Y axis shows how many New + ReOpened bugs are found in each build. Ideally this graph keeps dropping towards zero as quickly as possible. If we instead see a swinging pattern, like a sinusoidal wave, it indicates that new bugs are being injected build over build due to regression effects. After code freeze, product companies must keep a keen watch on this chart.
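A minimal sketch of such a chart using matplotlib; the build numbers and bug counts are invented for illustration.

```python
# Bug Bounce Chart sketch: New + ReOpened bugs per build (figures are illustrative).
import matplotlib.pyplot as plt

builds = [1, 2, 3, 4, 5, 6, 7, 8]
new_plus_reopened = [42, 30, 35, 18, 22, 9, 12, 3]

plt.plot(builds, new_plus_reopened, marker="o")
plt.xlabel("Build number")
plt.ylabel("New + ReOpened bugs")
plt.title("Bug Bounce Chart")
plt.grid(True)
plt.show()
# A healthy trend keeps dropping towards zero; a sinusoidal swing suggests
# regression bugs are being injected build over build.
```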
 
Resource: QAmonitor

Performance Testing - Basics

Squeeze the app before release. If the app withstands that, it is fit for release. But how do we squeeze it? How do we determine the number of users, data volume, etc.? Let us take this step by step and learn. Performance tests are usually postponed until customers feel the pinch, primarily because of the cost of the tools and the capability needed to use them. If one wants to earn millions of dollars through a hosted app, a good, proven, and simple way is to increase users and reduce price. Doing this grows the business volume, but it brings performance issues along with it.

Most of the tools use the same concept of emulating requests from the client side of the application. This has to be done programmatically. Once one is able to generate requests, processing the responses is a relatively easier task. When you choose a tool, it is better to look first for the must-be-in features and then for the nice-if features.
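To illustrate the core idea of emulating client requests programmatically, here is a minimal sketch using only the Python standard library; the URL, user count, and pacing are placeholders, and a real tool adds parameterization, correlation, ramp-up, and reporting on top of this.

```python
# Minimal load-generation sketch: N virtual users, each sending a few timed requests.
# Placeholder URL and counts; real tools add correlation, ramp-up and reporting.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # placeholder target
VIRTUAL_USERS = 10
ITERATIONS = 5
PACING_SECONDS = 1.0             # pause between iterations

def one_user(user_id: int) -> list:
    timings = []
    for _ in range(ITERATIONS):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=10) as response:
            response.read()                  # consume the response body
        timings.append(time.perf_counter() - start)
        time.sleep(PACING_SECONDS)
    return timings

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = pool.map(one_user, range(VIRTUAL_USERS))
    all_timings = [t for user_timings in results for t in user_timings]

print(f"Requests sent:        {len(all_timings)}")
print(f"Average response (s): {sum(all_timings) / len(all_timings):.3f}")
print(f"Slowest response (s): {max(all_timings):.3f}")
```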

The must-be-in features are listed below.

  1. Select protocols (HTTP, FTP, SMTP etc.)
  2. Record the user sequence and generate script
  3. Parameterize the script to supply a variety of data
  4. Process dynamic data sent by server side (correlation)
  5. Configure user count, iterations and pacing between iterations
  6. Configure user ramp-up
  7. Process secondary requests
  8. Configure network speed, browser types
  9. Check for specific patterns in the response
  10. Execute multiple scripts in parallel
  11. Measure hits, throughput, response time for every page 
  12. Log important details and server response data for troubleshooting
  13. Provide custom coding facility to add additional logic
The nice-if features are listed below.
  1. Configure performance counters for OS, webserver, app server, database server. This way, you can get all results under one single tool
  2. Automatically correlate standard dynamic texts based on java or .net framework. This will reduce scripting time
  3. Provide a visual UI to script and build logic
  4. Generate data as needed - sequential, random and unique data
  5. Provide a flexible licensing model - permanent as well as pay-per-use will be great
  6. Integrate with profiling tools to pinpoint issues at code level
When one evaluates a performance testing tool, one must do a simple proof of concept on the above features to see how effectively the tool handles them. Needless to say, the tool must be simple to use.


Here are a few simple terms you need to be clear about, at least academically. There are many different definitions for the phrases given below, but we try to take the most widely accepted definitions from various project groups.

Load Testing - Test the app for an expected number of users. Usually customers know their current user base (for example, the total number of account holders in a bank). The number of online users will usually be between 5% and 10% of the customer base, but an online user may just be a logged-in user doing no transactions with the server. Our interest is always in the concurrent users, usually between 5% and 10% of the online users. So if 100,000 is the total customer base, then 10% of it, 10,000, will be online users, and 10% of that, 1,000, will be concurrent users.
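The rule of thumb above can be written down as a tiny sizing sketch (using the 10% upper bound from the example):

```python
# Concurrent-user estimate from the 5-10% rules of thumb (10% used here).
customer_base = 100_000
online_ratio = 0.10       # 5-10% of the customer base is online
concurrent_ratio = 0.10   # 5-10% of online users are concurrent

online_users = customer_base * online_ratio          # 10,000
concurrent_users = online_users * concurrent_ratio   # 1,000

print(f"Online users:     {online_users:,.0f}")
print(f"Concurrent users: {concurrent_users:,.0f}")
```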

Stress Testing - Overload the system by x%. That x% may be 10% more than normal load or even 300% more than the normal load. But usually load tests happen for a longer duration and stress tests happen for a shorter duration as spikes, with abnormally more users. Stress is like a flash flood. 

Scalability/Capacity Testing - See the level at which the system crashes. Keep increasing users and you will see a lot of failures and eventually a crash. Some companies use the term stress testing to include capacity testing as well.

Volume Testing - Keep increasing the data size of requests, and process requests while the application database has hundreds of millions of records. This usually checks the robustness and speed of data retrieval and processing.

Endurance/Availability Tests - Test the system over a very long period of time. Let the users keep sending requests 24x7, maybe even for a week or a month, and see if the system behaves consistently over that period.

Resource: softsmith

Sep 27, 2013

The Advantages of Unit Testing Early


Nowadays, when I talk with (read: rant at) anyone about why they should do test driven development or write unit tests, my spiel has gotten so similar and redundant that I don't have to think about it anymore. But even when I pair with skeptics, even as I cajole and coax testable code or some specific refactorings out of them, I wonder: why is it that I have to convince you of the worth of testing? Shouldn't it be obvious?

And sadly, it isn't. Not to many people. To some people, I come advocating the rise of the devil itself. To others, it is this redundant, totally useless thing that is covered by the manual testers anyway. The general opinion seems to be, "I'm a software engineer. It is my job to write software. Nowhere in the job description does it say that I have to write these unit tests." Well, to be fair, I haven't heard that too many times, but they might as well be thinking it, given their investment in writing unit tests. And last time I checked, an engineer's role is to deliver working software. How do you even prove that your software works without having some unit tests to back you up? Do you pull it up and go through it step by step, and start cursing when it breaks? Because without unit tests, the odds are that it will.

But writing unit tests as you develop isn't just to prove that your code works (though that is a great portion of it). There are many more benefits to writing unit tests. Let's talk in depth about a few of these below.

Instantaneous Gratification

The biggest and most obvious reason for writing unit tests (either as you go along, or before you even write code) is instantaneous gratification. When I write code (write, not spike; that is a whole different ball game that I won't get into now), I love to know that it works and does what it should do. If you are writing a smaller component of a bigger app (especially one that isn't complete yet), how are you even supposed to know whether what you just painstakingly wrote works or not? Even the best engineers make mistakes.

Whereas with unit tests, I can write my code, then just hit my shortcut keys to run my tests, and voila: within a second or two I have the results, telling me that everything passed (in the ideal case) or what failed and at which line, so I know exactly what I need to work on. It gives you a safety net to fall back on, so you don't have to remember all the ways the code is supposed to work; something tells you whether it does.

Also, doing Test Driven Development is one of the best ways to keep track of what you are working on. I have times when I am churning out code and tests, one after the other, before I need to take a break. The concept of TDD is that I write a failing test, and then I write just enough code to pass that test. So when I take a break, I make it a point to leave a failing test behind, so that when I come back I can jump right back into writing the code to get it to pass. I don't have to spend 15-20 minutes reading through the code to figure out where I left off; my asserts usually tell me exactly what I need to do.
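As an illustration of that rhythm, here is a minimal pytest-style sketch; the module and function names (discount, apply_discount) are invented for the example.

```python
# test_discount.py -- the failing test is written first (all names are invented).
import pytest
from discount import apply_discount

def test_ten_percent_discount_is_applied():
    assert apply_discount(price=100.0, rate=0.10) == pytest.approx(90.0)

# discount.py -- just enough code to make the test above pass, and no more.
def apply_discount(price: float, rate: float) -> float:
    return price * (1 - rate)
```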

Imposing Modularity / Reusability

The very first rule of reusable code is that you have to be able to instantiate an instance of the class before you can use it. And guess what? With unit tests, you almost always have to instantiate an instance of the class under test. Therefore, writing a unit test is always a great first step in making code reusable. And the minute you start writing unit tests, most likely you will start running into the common pain points of not having injectable dependencies (unless, of course, you are one of the converts, in which case, good for you!).

Which brings me to the next point. Once you start having to jump through fiery hoops to set up your class just right to test it, you will start to realize when a class is getting bloated, or when a certain component belongs in its own class. For instance, why test the House when what you really want to test is the Kitchen it contains? If the Kitchen class was initially part of the House, then once you start writing unit tests it becomes obvious that it belongs separately. Before long, you have modular classes which are small and self-contained and can be tested independently without effort. And it definitely helps keep the code base cleaner and more comprehensible.
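A minimal sketch of that House/Kitchen idea; the class names come from the example above and the methods are invented for illustration.

```python
# Kitchen extracted into its own class and injected into House, so it can be
# instantiated and tested on its own. Method names are illustrative.

class Kitchen:
    def __init__(self):
        self.appliances = []

    def add_appliance(self, name: str) -> None:
        self.appliances.append(name)

class House:
    def __init__(self, kitchen: Kitchen):
        self.kitchen = kitchen   # injected dependency, easy to swap in tests

def test_appliance_can_be_added():
    kitchen = Kitchen()          # no House needed to exercise the Kitchen
    kitchen.add_appliance("oven")
    assert "oven" in kitchen.appliances
```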

Refactoring Safety Net

Any project, no matter what you do, usually ends up at a juncture where the requirements change on you. And you are left with the option of refactoring your codebase to add or change functionality, or rewriting from scratch. One, never rewrite from scratch; always refactor. It's always faster when you refactor, no matter what you may think. Two, what do you do when you have to refactor and you don't have unit tests? How do you know you haven't horribly broken something in that refactor? Granted, IDEs such as Eclipse and IntelliJ have made refactoring much more convenient, but adding new functionality or editing existing features is never simple.

More often than not, we end up changing some undocumented way the existing code behaved, and blow up 10 different things (it takes skill to blow up more; believe me, I have tried). And it's often something as simple as changing the way a variable is set or unset. In those cases, having unit tests (remember those things you were supposed to have written?) to confirm that your refactoring broke nothing is a godsend. I can't tell you the number of times I have had to refactor a legacy code base without this safety net. The only way to ensure I did it correctly was to write large integration tests (because, again, having no unit tests usually tends to increase coupling and reduce modularity, even in the most well designed code bases) which verified things at a higher level, and pray fervently that I broke nothing. Then I would spend a few minutes bringing up the app every time, and clicking on random things to make sure nothing blew up. A complete waste of my time when I could have known the same thing by just running my unit tests.

Documentation

Finally, one of my favorite advantages of doing TDD or writing unit tests as I code. I have a short memory for code I have written. I could look back at the code I wrote two days ago and have no clue what I was thinking. In those cases, all I have to do is go look at the test for a particular method, and that almost always tells me what that method takes in as parameters and what it should be doing. A well-constructed set of tests tells you about valid and invalid inputs, state that the method should modify, and output that it may return.
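For example, a couple of tests with descriptive names can document a method's valid and invalid inputs; the parse_age helper below is hypothetical.

```python
# Tests whose names and assertions document a (hypothetical) parse_age helper:
# what it accepts, what it returns, and what it rejects.
import pytest

def parse_age(text: str) -> int:
    value = int(text)
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

def test_parse_age_accepts_digit_strings_and_returns_an_int():
    assert parse_age("42") == 42

def test_parse_age_rejects_negative_values():
    with pytest.raises(ValueError):
        parse_age("-1")
```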

Now this is useful for people like me with short memory spans. But it is also useful, say, when you have a new person joining the team. We had this cushion the last time someone joined our team for a short period: when we asked him to add a particular check to a method, we just pointed him to the tests for that method, which basically told him what the method does. He was able to understand the requirements and go ahead and add the check with minimal hand-holding. And the tests gave him a safety net so he wouldn't break anything else while he was at it.

Also useful is the fact that later, when someone comes marching through your door demanding you fix this bug, you can always check whether it was a bug (in which case you are obviously missing a test case) or a feature whose requirements they have now changed (in which case you already have a test which proves it was your intent to do it, and thus not a bug).

Source: Google Testing

Sep 4, 2013

Team Building in Software Testing – How to Build and Grow QA Team



 
Like any other part of the software development life cycle, testing requires certain factors to be developed and maintained for continuous process improvement. One such factor is team building. While building the right team, the focus should be on the following key elements:

Roles and Responsibilities
It is very important for the team members to understand what they are supposed to do. Quite often this is not communicated or discussed with the team. Before the start of a project, the typical tasks that team members will perform on a daily basis for their respective roles must be explained to them. Whether it is a tester or a test lead, setting expectations and explaining what is expected of them will produce correct results without unnecessary delays or errors.

The following points need to be clarified to the team:
  • Scope of the project
  • Roles and responsibilities expected of everyone
  • Key points to focus on, such as deliverables, timelines, etc.
  • The strategy and plan
Above all, team members should keep in mind their own career aspirations, growth, and learning, which will be the key motivators to excel in their current roles.

Knowledge Transfer
It is vital for testers to understand the domain as well as the functions of the application in order to thoroughly test the application under test. KT sessions are essential for making them understand the core functions and logic, which they will apply during testing. Brainstorming sessions are also vital for sharing a common understanding of the application and domain.
Testers should be involved right from the initial project discussions, which typically include business people, architects, developers, database experts, etc. Involving testers during these early stages of software development will give them good knowledge and understanding of the application that is going to be developed and tested.

Domain Knowledge
Understanding the application's domain (e.g. healthcare, insurance, etc.) is very important and helps testers verify the functionality from a different perspective, wearing the hat of the end customer as well as that of an SME. It takes time, and only after a period of working in a particular domain will a tester become familiar with it. Sometimes a tester will get a chance to test different applications belonging to the same domain, so testing becomes easier and more meaningful if he has knowledge of the overall domain.

Technical and Domain Certifications
Having a talented pool of testers is definitely a big asset for the project. The focus should be to train the team and get them certified in the areas in which they work by nominating them for internal certifications. There is also a host of external certifications that can be selected for training the team.
Certifications will definitely give the team moral support and the maturity to perform testing with confidence. Domain-certified resources also bring intellectual knowledge that can be showcased to prospective clients for new business opportunities.


Career Ladder
It's not enough to create a team of testers with all the skill sets; providing opportunities for them to move up the career ladder is also significantly important. Creating programs, or nominating them to programs, that make them eligible for the next level of their role will also make it easier to identify resources when they are needed. Team meetings can be effectively utilized to emphasize the roles and responsibilities at their next level. Educating them on the various skills required to perform in their next roles is a real advantage and also a continuous process improvement. Every manager has the responsibility to explain the duties expected of resources who are being promoted. This ensures that those promoted are not just a set of resources, but ready-to-work, responsible, and skilled individuals.

Team Dynamics and Group Outing
It's important to ensure that a level of team dynamics is established and followed by the team for effective group work: meeting common goals, finishing planned targets, and delivering on time. Make the team understand that the “Project” is the common objective for everyone and that completing what the customer wants is the “Priority”. To accomplish this, everyone should work together as a “Team”, leaving all differences behind, and completing the planned tasks is the only “Target”. During weekly team meetings, the team members should receive the information on tasks and priorities for the next period and have a common understanding of the work to be performed, loud and clear.

Team building exercises and outings are really necessary to burn off stress and recharge. They also help build better understanding outside of project work and in a different environment altogether. Small tokens of appreciation can be announced during team meetings to recognize talent and to encourage and motivate others to perform.

Source: STH

Aug 27, 2013

The Bipolar Life of a Software Tester




Oh!  I’ve been testing all day and I haven't found a single problem. 

No, wait…

This is good, right? Clean software is the goal.  Alright, cool, we rock!  Looks like we’re deploying to prod tomorrow morning…just one more test…Dammit! I just found a problem!  I hate finding problems at the final hour. Oh chee!!

No, wait…

This is good, right?  Better to have caught it in QA today than in prod tomorrow.  That’s what they pay me for.  Hey, here’s another bug!  And another!  I rock.  I just found the mother lode of bugs.  This is awesome!!!

No, wait…

This is bad, right?  We’re either going to have to work late or delay tomorrow’s prod release.  I totally should have caught these problems earlier; it would have been so much cheaper. Oh God.
What’s that?  The product owners are rejecting my bugs? Really?  How humiliating.  I hate when my bugs get rejected!

No, wait…

This is good, right? It’s great that my bugs got rejected. Now I don’t have to retest everything.
No, wait…I want to retest everything.

No, wait…maybe I don’t.

Ahhhhhhh!

Aug 6, 2013

Writing a Good Business Case-The First Step towards Test Automation


When initiating an automated software-testing program in an organization, why do intelligent software testing managers begin their effort by writing a business case?

The reason is that top management cannot decide to make long-term investments in test automation solely on the basis of a rosy picture of a few of its benefits.
Depending upon the nature of the project, test automation will certainly call for a high initial investment, whereas the returns may not be visible for 3-4 years to come. Hence, to visualize the ROI (Return on Investment) from such an investment, any management would first like to see and evaluate a properly projected business case.
The management and other stakeholders of the company are always interested in knowing the justification for initiating the automation drive in terms of three major deciding factors:

1) ROI (Return on Investment)
2) Potential benefits
3) Expected risks
ROI consideration in a Business Case: In a business case we compare the cost of the test automation solution with the benefits the solution is going to bring us.
The costs associated with selecting, implementing, and maintaining a software testing tool are quite significant. They generally include expenses incurred on:
1) Selection of tool
2) Procurement of tool (use open source, buy or develop internally)
3) Licenses
4) Customization
5) Implementation
6) Training of personnel
7) Tool usage
8) Maintenance of automated testware
9) Tool maintenance
Some of these expenses are measured directly in money, while others come from the time spent by team members; both must be considered in the calculation.

On the other side of the business case equation, we have the benefits. Benefits from test automation are rarely measurable in terms of actual money. They come from savings we obtain because the tool helps us perform the tasks faster and with fewer mistakes.

For test execution tools the cost/benefit depends largely on how often the automated tests will be executed, as shown in the following graph.



Tests which are executed only a few times during the entire lifetime of the product are usually not worth spending automation resources on. On the other hand, it may be well worth automating tests that are executed many times, for example, tests used for extensive regression testing of high-risk areas. Of course, it is most practical to have a mix of manual and automated tests.
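The cost/benefit relationship this refers to can be sketched as a simple break-even calculation; all of the hour figures below are placeholders, not measurements.

```python
# Break-even sketch: automation pays off once the cumulative manual effort saved
# exceeds the cost to build plus the cost to maintain. All figures are placeholders.
build_cost_hours = 40.0          # one-off cost to automate the test set
maintenance_per_run_hours = 0.5  # analysing results, fixing brittle scripts
manual_run_hours = 3.0           # running the same set manually, per cycle
automated_run_hours = 0.2        # attended time per automated run

def cumulative_cost(runs: int, automated: bool) -> float:
    if automated:
        return build_cost_hours + runs * (automated_run_hours + maintenance_per_run_hours)
    return runs * manual_run_hours

break_even = next(
    runs for runs in range(1, 1_000)
    if cumulative_cost(runs, automated=True) < cumulative_cost(runs, automated=False)
)
print(f"Automation starts paying back after {break_even} executions")
```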




Ref.: Factors for ROI calculation, from the Automated Testing Institute
Major Factors for Inclusion in Business Case Template:

1) Outline of a Development Roadmap: This could possibly cover the following aspects
# What could be the life of the product?
# How frequent could the changes be?
# How many releases could it have?
# What could be the possible release schedule?
# What could be the scope for the regression testing?
# What could be the criticality of its maintenance?
2) Requirements to be tested: An outline of the broad requirements and the time plan for testing will help the software testing engineer understand the extent of test coverage that could be planned for automation.
3) Projection of the Extent of Manual Testing: Once test automation is introduced, it is obvious that the manual testing effort will come down drastically. It is important to know the extent of manual testing that will still remain alongside automation.
4) Outline of Technology Roadmap: The technology roadmap can cover the deployment of open source tools, if any, and the different scripting languages, browsers, UI objects, and configurations to be covered by the software testing effort.
5) Training of Personnel: A description of the skills of the software testing personnel available to date, and the training support that will come from the automation tool supplier, is one of the key elements of the business case.
Business Case Conclusions:
1) For large projects, a well-planned schedule for test automation across the organization can deliver tangible benefits in terms of reduced testing cycle time, quicker time-to-market, and increased test coverage. This aspect must be adequately projected in quantified terms in the business case.
2) Automation is not worth the time, money, and effort put into it on a small project. However, ROI is the best metric to judge its potential in real terms and must be an essential part of the business case.

Resource: STG

Jul 12, 2013

Does Automating Your Manual Tests Give You Good Automated Tests?

Question


I notice that the tag wiki for the “automated testing” tag contains the following sentence: “Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.”
This might be common, but is it good?
What pitfalls are there with directly automating manual test scripts, especially where those writing the automation have no knowledge of the system or domain? Is a test originally designed to be run manually going to be a good design for automation?

Answers

Answer by expert tester


My considered answer to this is a firm ‘No’, and I'd go further: I am of the opinion that blindly automating manual tests may be an automated-testing anti-pattern.
Manual test scripts are executed by testers who are able to make decisions, judgements and evaluations. As a result, manual tests are often able to cover large areas of the system under test, hitting a large number of test conditions. A manual test can easily become a large, sprawling description covering many areas of the application, and this can be a very useful manual test. However, it would not be an advisable design for an automated test.
Automated tests that attempt to cover every point that a manual test covers tend to be brittle. They have a tendency to break more often, and, annoyingly, an automated test will often stop completely when it hits a failure or an error, meaning that later steps do not get run. This can mean that, in order to complete a testing run, some minor problem with a larger script needs to be resolved. In my experience, it's far easier to have these assertions in separate tests, so that other tests can be run independently. Over time I have found that large automated tests which attempt to literally replicate manual test scripts become a considerable maintenance headache, particularly as they tend to fall over just when you really want to run them. The frustration can lead to major disillusionment with the automated test effort itself.
Blindly attempting to automate entire manual tests could also block gaining the maximum benefit from the test automation effort. A manual test in itself may not be easily automatable – but individual parts of it may be, and by automating those, the manual test script could be reduced in size.
Manual test scripts tend to be most efficient when hitting as many areas of the application in the shortest amount of time possible. In contrast, automated test scripts tend to be more efficient when hitting as few areas of the application as needed.
It could be argued that one of the reasons why automation often ‘involves automating a manual process already in place that uses a formalized testing process’ is that automation is often introduced for the first time onto existing projects that already have manual tests; this inevitably leads to the temptation to simply automate the existing manual scripts. I would ask myself, though: if the test had been automated from the outset, would I have designed it the same way as I would a manual test? I feel that my honest answer is ‘No’.
So although existing manual tests can clearly be useful documentation for automated testing, in that they show what is currently being tested and how, they should not dictate the design of the automated tests themselves. In my opinion, good manual test designs explain to a human how to test the features of the application most efficiently; good automated test designs make it easy for machine logic to exercise important features quickly, and these are very different engineering problems.


Not necessarily. Here are some of the reasons why automating manual tests might not be advisable:
  • Certain tests may be easier to execute manually because they are very quick to execute manually but would take a disproportionate amount of time to code as automated tests
  • Certain tests may be easier to execute manually because they are in a part of the system that is still changing pretty rapidly, so the cost of maintaining the automated tests would outweigh the ability to execute them quickly
  • Many tests may be more worthwhile to execute manually because the human brain can (and should!) be asking many kinds of questions that would not be captured in an automated test (e.g., Is this action taking longer than it should? Why does half the screen get inexplicably covered up with a big black square at this point? Hmmm, THAT seemed unusual; I’m going to check it out once I finish executing this test, etc.)
  • Saying the point above another way, humans can (and should) use “rich oracles” when they are testing (above and beyond the written “Expected Results”) more easily than coded logic within automated tests can
  • Certain tests (many, in fact) are, when considered together with the other tests already existing in a test set, extremely inefficient (e.g., because they (a) are highly repetitive of combinations that have already been tested and (b) add little extra coverage); such ineffective tests should generally neither be executed manually nor turned into automated tests
  • Testers may lack the skill or time to automate the manual tests well. In this case, it may be better to continue using manual scripts than invite the maintenance burden of poorly-written automation.
  • Automated manual tests are vulnerable to slight shifts in the product that aren’t actual bugs. They can become a maintainability problem, and when they fail they aren’t very diagnostic, whereas tests that are more “white box” can be very diagnostic.
  • Tests that are easy for humans might be next to impossible to automate. “Is the sound in this video clear?” is a test even a fairly drunk human can do well, but a computer program to do it is nearly science-fiction level programming.
  • Tests that require hard copy or specific hardware interaction might not be good candidates for automation. Both can be simulated using other software pieces but then you have that other software that needs to be validated to make sure it is properly simulating the hardware.
  • If you need to do a highly complex set of configuration steps in order to automate a single test case, the manual test may be easiest. It also is probably an indicator of a rather rare test case which might not be worth putting into an automation suite.
  • Just as with doing development work, any artifacts created during the testing process must be evaluated for their worth as to whether or not they are reusable. If an automated test is not reusable beyond the first run, you’ve essentially spent resource to create something that is “thrown away” after use. A manual test can probably be executed once with a quick set of instructions in a document far more efficiently than spending the resource to automate it.
I hope this answer helps; I invite others to improve this incomplete list.
Conversely, signs that a manual test might be good to automate directly would be:
  • If the test has a very detailed script, including precise checks
  • If interesting bugs are rarely or never found outside of those specific checks when the manual scripts are run by skilled testers, OR if interesting bugs would be generally found much faster through ad-hoc testing without scripts
  • If the feature doesn’t change frequently in ways that would disrupt automation
  • If executing the manual scripts takes many hours of tester time
  • If automating those scripts won’t take longer than the amount of time expected to be spent running them manually, and there are currently no higher priority tasks
  • If executing the manual scripts is described as BORING by the testers running them, but not running them is not an option, even with ad-hoc testing for that feature. This is a strong sign that SOME sort of automation should be considered, possibly supplemented with ad-hoc testing, since bored testers are often bad testers. However, look to the other points to determine if it should be a direct port of the manual cases or a new test suite designed with automation in mind.
  • If the manual test is a mission/product critical item that must be regressed beyond the initial release date. Note that not all automation is regression automation but regression tests are one place where automation gets a great boost in ROI.
  • Along with the above point, even if the test is not a regression item, if the creation of an automated version of the manual test will add value down the line for other projects and/or processes, it’s worth creating the automation. Any artifacts created by the testing process that are re-usable more than once are not wasted effort.
  • If the manual tests are a list of smoke tests to execute with every build
Please improve this incomplete list also!

Source: Testing Excellence