Jul 12, 2013

Does Automating Your Manual Tests Give You Good Automated Tests?

Question


I notice that the tag wiki for the “automated testing” tag contains the following sentence: “Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.”
This might be common, but is it good?
What pitfalls are there with directly automating manual test scripts, especially where those writing the automation have no knowledge of the system or domain? Is a test originally designed to be run manually going to be a good design for automation?

Answers

Answer by expert tester


My considered answer to this is a firm ‘No’, and I’d go further: in my opinion, blindly automating manual tests may be an automated testing anti-pattern.
Manual test scripts are executed by testers who are able to make decisions, judgements and evaluations. As a result, manual tests are often able to cover large areas of the system under test, hitting a large number of test conditions. A manual test can easily grow into a large, sprawling description covering many areas of the application, and that can still be a very useful manual test. However, it would not be an advisable design for an automated test.
Automated tests that attempt to cover every point that a manual test covers tend to be brittle. They break more often and, annoyingly, an automated test will often stop completely when it hits a failure or an error, meaning that later steps never run. As a result, some minor problem in a large script may need to be resolved just to complete a testing run. In my experience, it is far easier to place these assertions in separate tests, so that the other tests can be run independently. Over time I have found that large automated tests that attempt to literally replicate manual test scripts become a considerable maintenance headache, particularly as they tend to fall over precisely when you most want to run them. The frustration can lead to major disillusionment with the automated test effort itself.
Blindly attempting to automate entire manual tests can also prevent you from gaining the maximum benefit from the test automation effort. A manual test may not be easily automatable in its entirety, but individual parts of it may be, and by automating those the manual test script can be reduced in size.
Manual test scripts tend to be most efficient when hitting as many areas of the application in the shortest amount of time possible. In contrast, automated test scripts tend to be more efficient when hitting as few areas of the application as needed.
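To make that contrast concrete, here is a minimal sketch in Python with pytest, using an entirely hypothetical FakeShopApp driver (none of these names come from the question). The first test mirrors a sprawling manual script end to end, so an early failure hides everything after it; the smaller tests below it check the same conditions independently and touch as little of the application as they need.

import pytest


class FakeShopApp:
    """Stand-in for a real application driver; purely illustrative."""

    def __init__(self):
        self.user = None
        self.cart = []

    def login(self, user, password):
        self.user = user

    def search(self, term):
        return [term + "-basic", term + "-deluxe"]

    def add_to_cart(self, item):
        self.cart.append(item)

    def checkout(self):
        return {"user": self.user, "items": list(self.cart)}


@pytest.fixture
def app():
    # In a real suite this would launch or reset the system under test.
    return FakeShopApp()


# Anti-pattern: one long test mirroring a manual script end to end.
# If the search step fails, the cart and checkout steps are never exercised.
def test_entire_purchase_script(app):
    app.login("alice", "secret")
    assert app.search("widget")
    app.add_to_cart("widget-basic")
    assert app.checkout()["items"] == ["widget-basic"]


# Preferred: small, independent tests; a failure in one does not hide the others.
def test_search_returns_results(app):
    app.login("alice", "secret")
    assert app.search("widget")


def test_checkout_includes_added_items(app):
    app.login("alice", "secret")
    app.add_to_cart("widget-basic")
    assert app.checkout()["items"] == ["widget-basic"]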
It could be argued that one of the reasons why automation often ‘involves automating a manual process already in place that uses a formalized testing process’ is that automation is often introduced for the first time on existing projects that already have manual tests, which inevitably leads to the temptation to simply automate the existing manual scripts. I would ask myself, though: if the test had been automated from the outset, would I have designed it the same way as I would a manual test? My honest answer is ‘No’.
So although existing manual tests can clearly be useful documentation for automated testing, in that they show what is currently being tested and how, they should not dictate the design of the automated tests themselves. In my opinion, good manual test designs explain to a human how to test the features of the application most efficiently; good automated test designs make it easy for machine logic to exercise important features quickly. These are very different engineering problems.


Answer by another expert tester

Not necessarily. Here are some of the reasons why automating manual tests might not be advisable:
  • Certain tests may be easier to execute manually because they are very quick to execute manually but would take a disproportionate amount of time to code as automated tests
  • Certain tests may be easier to execute manually because they are in a part of the system that is still changing pretty rapidly, so the cost of maintaining the automated tests would outweigh the ability to execute them quickly
  • Many tests may be more worthwhile to execute manually because the human brain can (and should!) be asking many kinds of questions that would not be captured in an automated test (e.g., Is this action taking longer than it should? Why does half the screen get inexplicably covered up with a big black square at this point? Hmmm, THAT seemed unusual; I’m going to check it out once I finish executing this test, etc.)
  • To put the point above another way: humans can (and should) use “rich oracles” when they are testing, above and beyond the written “Expected Results”, far more easily than the coded logic within automated tests can
  • Certain tests (many, in fact) are, when considered together with the other tests already in a test set, extremely inefficient (e.g., because they (a) largely repeat combinations that have already been tested and (b) add little extra coverage); such ineffective tests should generally neither be executed manually nor turned into automated tests
  • Testers may lack the skill or time to automate the manual tests well. In this case, it may be better to continue using manual scripts than invite the maintenance burden of poorly-written automation.
  • Automated versions of manual tests are vulnerable to slight shifts in the product that aren’t actual bugs, so they can become a maintainability problem. When they fail, they aren’t very diagnostic, whereas tests that are more “white box” can be very diagnostic.
  • Tests that are easy for humans might be next to impossible to automate. “Is the sound in this video clear?” is a test even a fairly drunk human can do well, but writing a computer program to do it is nearly science-fiction-level programming.
  • Tests that require hard copy or specific hardware interaction might not be good candidates for automation. Both can be simulated with other pieces of software, but then that simulation software itself needs to be validated to make sure it properly reproduces the hardware’s behaviour.
  • If you need to perform a highly complex set of configuration steps in order to automate a single test case, the manual test may be easiest. It is also probably an indicator of a rather rare test case that might not be worth putting into an automation suite.
  • Just as with development work, any artifacts created during the testing process must be evaluated for whether or not they are reusable. If an automated test is not reusable beyond the first run, you’ve essentially spent resources to create something that is “thrown away” after use. A manual test can probably be executed once, from a quick set of instructions in a document, far more efficiently than it could be automated.
I hope this answer helps; I invite others to improve this incomplete list.
Conversely, signs that a manual test might be good to automate directly would be:
  • If the test has a very detailed script, including precise checks
  • If interesting bugs are rarely or never found outside of those specific checks when the manual scripts are run by skilled testers, OR if interesting bugs would generally be found much faster through ad-hoc testing without scripts
  • If the feature doesn’t change frequently in ways that would disrupt automation
  • If executing the manual scripts takes many hours of tester time
  • If automating those scripts won’t take longer than the amount of time expected to be spent running them manually, and there are currently no higher priority tasks
  • If executing the manual scripts is described as BORING by the testers running them, but not running them is not an option, even with ad-hoc testing for that feature. This is a strong sign that SOME sort of automation should be considered, possibly supplemented with ad-hoc testing, since bored testers are often bad testers. However, look to the other points to determine if it should be a direct port of the manual cases or a new test suite designed with automation in mind.
  • If the manual test is a mission/product-critical item that must be regression-tested beyond the initial release date. Note that not all automation is regression automation, but regression tests are one place where automation gets a great boost in ROI.
  • Along with the above point, even if the test is not a regression item, if the creation of an automated version of the manual test will add value down the line for other projects and/or processes, it’s worth creating the automation. Any artifacts created by the testing process that are re-usable more than once are not wasted effort.
  • If the manual tests are a list of smoke tests to execute with every build
Please improve this incomplete list also!

Source: Testing Excellence

Why Automated Tests Didn’t Find That Bug?

It is widely accepted that the purpose of automated tests is not to find new defects but rather to catch regression bugs as new features are developed. Even so, there are many occasions where regression bugs that really should have been caught by the automated regression tests slip through and end up in production.

Let’s examine the reasons why automated tests failed to find the regression bugs:
1 – The Scenario Was Not Thought Of
Automated tests are only as good as the person who designed them. Automated tests are a set of instructions that are executed against the target application. First, test analysts design test cases and come up with a list of scenarios; the scenarios are then scripted in a programming language so they can be executed automatically. Therefore, if a particular scenario was not thought of, it will not have been scripted to run as an automated test.

2 – The Scenario Was Thought Of But Was Not Scripted
It takes time to automate tests. Depending on the complexity of the tests, the test engineers’ coding skills, and the flexibility of the test automation tools and frameworks, some tests take a long time to automate and hence miss the chance to catch regression bugs as new features are developed.

3 – There is a Bug in the Test Code Itself
There are situations where the automated tests do not really exercise what the tester intended or assumed. In other words, the automation engineers made a mistake when coding the tests, or did not insert verification points in the correct places.
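As a purely illustrative sketch (pytest style, with hypothetical names not taken from the article), consider a test whose verification point is in the wrong place, so it passes regardless of what the code under test does:

def apply_discount(price, percent):
    # Imagine this is the production code under test.
    return round(price * (1 - percent / 100), 2)


# Buggy test: a result is computed but never checked; the assertion looks at
# the input rather than the output, so a regression in apply_discount would
# never make this test fail.
def test_discount_buggy():
    price = 100.0
    discounted = apply_discount(price, 10)
    assert price == 100.0  # verification point in the wrong place


# Corrected test: the verification point targets the behaviour that matters.
def test_discount_fixed():
    assert apply_discount(100.0, 10) == 90.0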

4 – The Automated Tests Couldn’t Execute due to Environment Issues
This is particularly true when running System Tests via the application UI, e.g. launching the application in a browser. In such cases there are many dependencies on other applications and on third-party or downstream systems, and if any of these are down, not responding, or responding intermittently, the automated tests cannot execute successfully and hence cannot verify the behaviour a particular test was meant to check.
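One common mitigation, sketched below with pytest and the third-party requests library against a hypothetical health endpoint, is to check environment dependencies up front and report a clearly labelled skip rather than a failure that looks like a product regression:

import pytest
import requests  # third-party HTTP library, used here only for a health check

# Hypothetical health endpoint of a downstream dependency.
DOWNSTREAM_HEALTH_URL = "https://downstream.example.com/health"


def downstream_available():
    try:
        return requests.get(DOWNSTREAM_HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False


# If the environment is broken, skip with an explicit reason instead of
# reporting a misleading product failure.
@pytest.mark.skipif(
    not downstream_available(),
    reason="downstream dependency unavailable: environment issue, not a product bug",
)
def test_order_submission_via_ui():
    ...  # the real browser-driven System Test steps would go here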

5 – Poor Analysis of Test Reports
After the automated tests have executed, any failed tests need to be analyzed to determine the reason for the failure. Analyzing all the failed cases can take quite some time (many of them may be false positives). The analysis is normally done manually, and if it is not done correctly, genuine failures can be overlooked or masked by other issues.
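A small sketch in plain Python (the failure messages and signature strings below are invented for illustration) shows one way to make that analysis less error-prone: bucket failures by symptom so that environment noise does not mask genuine product failures.

from collections import defaultdict

# Hypothetical failure messages pulled from a test report.
failures = {
    "test_checkout_total": "AssertionError: expected total 90.0, got 100.0",
    "test_search_results": "TimeoutError: downstream service did not respond",
    "test_login_page": "ConnectionRefusedError: auth stub not running",
}

# Signatures that usually indicate an environment problem rather than a product bug.
ENVIRONMENT_SIGNATURES = ("TimeoutError", "ConnectionRefusedError")

buckets = defaultdict(list)
for test, message in failures.items():
    if any(sig in message for sig in ENVIRONMENT_SIGNATURES):
        buckets["environment noise"].append(test)
    else:
        buckets["needs manual review"].append(test)

for kind, tests in sorted(buckets.items()):
    print(f"{kind}: {', '.join(tests)}")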
Can you think of other issues why automated tests miss defects?

Source: Testing Excellence