Sep 25, 2012

What are the key attributes of a defect in software testing?

There are two key attributes of a defect in software testing:
1) Severity
2) Priority



1) Severity:
Severity is the extent to which the defect affects the software; in other words, it defines the impact a given defect has on the system. For example, if an application or web page crashes when a remote link is clicked, clicking that remote link is rare for a user, but the impact of the crash is severe. So the severity is high but the priority is low.
Severity can be of following types:
  • Critical: The defect results in the termination of the complete system or of one or more of its components and causes extensive corruption of data. The failed function is unusable and there is no acceptable alternative method to achieve the required results.
  • Major: The defect results in the termination of the complete system or of one or more of its components and causes extensive corruption of data. The failed function is unusable, but there is an acceptable alternative method to achieve the required results.
  • Moderate: The defect does not result in termination, but causes the system to produce incorrect, incomplete or inconsistent results.
  • Minor: The defect does not result in termination and does not damage the usability of the system; the desired results can easily be obtained by working around the defect.
  • Cosmetic: The defect relates to an enhancement of the system, where the changes concern the look and feel of the application.
2) Priority:
Priority defines the order in which defects should be resolved: should we fix it now, or can it wait? The priority is set by the tester for the developer, indicating the time frame within which to fix the defect. If high priority is set, the developer has to fix it at the earliest opportunity. Priority is set based on customer requirements. For example, if the company name is misspelled on the home page of the website, the priority to fix it is high but the severity is low.
Priority can be of following types:
  • Low: The defect is an irritant that should be repaired, but the repair can be deferred until more serious defects have been fixed.
  • Medium: The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.
  • High: The defect must be resolved as soon as possible because it affects the application or product severely. The system cannot be used until the repair has been done.
A few important severity/priority scenarios that are often asked about in interviews:
High Priority & High Severity: An error in the basic functionality of the application that does not allow the user to use the system. (E.g., on a site maintaining student details, if saving a record fails, this is a high priority and high severity bug.)
High Priority & Low Severity: Spelling mistakes on the cover page, heading or title of an application.
High Severity & Low Priority: An error in the functionality of the application (for which there is no workaround) that does not allow the user to use the system, but which occurs only on a link that is rarely clicked by the end user.
Low Priority & Low Severity: Any cosmetic or spelling issue within a paragraph or report (not on the cover page, heading or title).
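A minimal Python sketch of this triage model, with the four interview scenarios above encoded as data (the class and field names are illustrative, not taken from any particular defect tracker):

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = 4
    MAJOR = 3
    MODERATE = 2
    MINOR = 1
    COSMETIC = 0

class Priority(Enum):
    HIGH = 2
    MEDIUM = 1
    LOW = 0

@dataclass
class Defect:
    summary: str
    severity: Severity   # impact on the system (technical view)
    priority: Priority   # fix order (business view)

# The four interview scenarios from above:
defects = [
    Defect("Saving a student record fails", Severity.CRITICAL, Priority.HIGH),
    Defect("Company name misspelled on home page", Severity.MINOR, Priority.HIGH),
    Defect("Crash on a rarely used remote link", Severity.CRITICAL, Priority.LOW),
    Defect("Typo inside a report paragraph", Severity.COSMETIC, Priority.LOW),
]

# Developers usually pick up work ordered by priority first, then severity.
for d in sorted(defects, key=lambda d: (d.priority.value, d.severity.value), reverse=True):
    print(f"{d.priority.name:6} / {d.severity.name:8} - {d.summary}")
```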

Source: QAWIKI

Sep 24, 2012

What are the advantages or benefits of using testing tools?

There are many benefits that can be gained by using tools to support testing. They are:
  • Reduction of repetitive work: Repetitive work is very boring if it is done manually. People tend to make mistakes when doing the same task over and over. Examples of this type of repetitive work include running regression tests, entering the same test data again and again (which can be done by a test execution tool), checking against coding standards (which can be done by a static analysis tool) or creating a specific test database (which can be done by a test data preparation tool). A small sketch after this list illustrates the idea.
  • Greater consistency and repeatability: People have a tendency to do the same task in slightly different ways, even when they think they are repeating something exactly. A tool will exactly reproduce what it did before, so each time it is run the result is consistent.
  • Objective assessment: If a person calculates a value from the software or from incident reports, they may omit something by mistake, or their own preconceived judgments may lead them to interpret the data incorrectly. Using a tool removes that subjective bias, so the assessment is repeatable and consistently calculated. Examples include assessing the cyclomatic complexity or nesting levels of a component (which can be done by a static analysis tool), coverage (coverage measurement tool), system behavior (monitoring tools) and incident statistics (test management tool).
  • Ease of access to information about tests or testing: Information presented visually is much easier for the human mind to understand and interpret. For example, a chart or graph is a better way to show information than a long list of numbers – this is why charts and graphs in spreadsheets are so useful. Special purpose tools give these features directly for the information they process. Examples include statistics and graphs about test progress (test execution or test management tool), incident rates (incident management or test management tool) and performance (performance testing tool).
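As a small illustration of the first two benefits, here is a hedged pytest sketch: the same test data is replayed identically on every run instead of being re-entered by hand. The discount function is an invented example standing in for real code under test.

```python
# test_discount.py -- run with: pytest test_discount.py
import pytest

def discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

# Entering the same test data "again and again" becomes a table that
# the tool replays identically and consistently on every run.
@pytest.mark.parametrize("price,percent,expected", [
    (100.0, 0, 100.0),
    (100.0, 10, 90.0),
    (59.99, 25, 44.99),
    (100.0, 100, 0.0),
])
def test_discount(price, percent, expected):
    assert discount(price, percent) == expected
```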
Source: Wiki Testing

Which Life Cycle Is Best for Your Project?

Which life cycle will work best for your project? This is an important strategic question, because making the wrong choice can lead to disastrous results: delayed deliveries, unhappy clients, project overruns, and cancelled projects.
During the 80s and early 90s, the waterfall model was the de facto standard in project delivery. With the rapid pace of software development and the popular use of the Internet, many companies started shifting to more flexible life cycles such as iterative, incremental, spiral, and agile. These newer life cycles provide more flexibility and support fast-paced development, giving companies an edge in delivering "the first" in the industry. Today there are dozens of life cycle methods to choose from, each with its own advantages and disadvantages.
Here are some of the more popular life cycles:

Waterfall

This traditional life cycle method has been around for decades and has proven its ability to deliver. In fact, the US Department of Defense actively promoted the use of this method in its projects when it published Standard DOD-STD-2167A in 1988.
Waterfall is defined as a sequential development model with clearly defined deliverables for every phase. Many industry practitioners perform strict audit reviews to ensure that the project has satisfied each phase's criteria before continuing to the next phase.
The standard phases of waterfall are shown in the diagram below:
[Diagram: Waterfall Development Method]

Iterative, Incremental

The main objective of iterative development is to build the system incrementally, starting from basic partial system features and gradually adding more until the entire system is complete. Compared to waterfall, iterative development allows flexibility in accommodating new requirements or changes to existing ones. It also provides room for improvement in succeeding iterations, based on lessons learned from previous ones.
The diagram below, courtesy of Microsoft's MSF, clearly shows how iterations are scheduled and delivered:
[Diagram: Iterative/Incremental Development Method]

Agile

Agile methodologies arose from the need to develop software applications that could accommodate the fast-paced evolution of the Internet. Agile is, in some way, a variant of iterative life cycle where deliverables are submitted in stages. The main difference is that agile cuts delivery time from months to weeks. Companies practicing agile are delivering software products and enhancements in weeks rather than in months. Moreover, the agile manifesto covered development concepts aside from the delivery life cycle, such as collaboration, documentation, and others.
The diagram from Microsoft MSF shows the various components of an agile life cycle:
[Diagram: Agile Development Method]

Other Variants

There are more life cycle methods and methodologies in practice, including Test Driven Development, RUP, Cleanroom, and others. In general, though, all of them can be classified as either waterfall (sequential, with clear and strict cut-offs between phases) or iterative/agile (repetitive, with flexible cut-off rules).
Here are some questions you need to get answers to before deciding on which life cycle method to use:

How stable are the requirements?

One of the biggest factors dictating your choice of life cycle is the clarity and stability of the project requirements. Frequent changes in requirements after the project has started can derail your progress against the plan. In such cases, choose an agile or iterative approach, because each provides an opportunity to accommodate new requirements even after the project has started. On the other hand, if you are engaged in more traditional project development, where there is a strict rule requiring a complete set of requirements before moving to the next phase, waterfall would be your choice. However, such traditional projects are becoming less common as companies realise the benefits of more agile project management.

Who are the end-users of the system?

Spend some time getting to know the users and stakeholders. Who are they? Are they a dispersed or a controlled group? How can they influence the project? A controlled group of end-users who strongly influence the project can help you define requirements and manage changes. This means you can achieve stable project requirements, which allows you to use the waterfall approach.
On the other hand, if the end-users are dispersed, you are likely to have a wide range of requirements, which you can't pin down until the end-users have used the system and started requesting new features. This situation is typical of product development. For example, Google launched Gmail and other products such as Google Docs and Calendar as BETA because it wanted to see end-users' reactions and improve the features based on their feedback. Microsoft, the developer of the world's most popular software, Windows and Office, also applies agile in its development methodologies. Recently, the Microsoft Solutions Framework (MSF) adopted the agile approach. According to MSF for Agile Software Development, "small iterations allow you to reduce the margin of error in your estimates and provide fast feedback about the accuracy of your project plans. Each iteration should result in a stable portion of the overall system." Microsoft and Google choose to be more agile because they have very dispersed groups of end-users.

Is the time line aggressive or conservative?

Experienced managers solve aggressive time lines by negotiating and cutting down project deliverables. An iterative approach helps achieve this by providing opportunities to deliver partial functionality early. This gives the impression that the project is delivering despite an aggressive time line, generally referred to as "quick wins." While the overall project delivery is not shortened, you have the opportunity to satisfy stakeholders by delivering the key features they need first. If your project is not time sensitive and end-users can wait for the release of the system, waterfall is a workable approach.

What is the size of the project?

Large enterprise projects generally require a large number of project teams working on clearly defined deliverables. The scale of a deliverable is proportional to the size of the project team assigned to it: larger project teams are assigned larger sets of deliverables, which need to be clearly defined. In this kind of scenario, long iterations or waterfall are more suitable.

Where are the project teams located?

If you have several project teams in different geographic locations, coordination of work needs to be more detailed and stringent. Work assignments need to be well defined to avoid confusion and redundant work. In such cases, waterfall is likely more beneficial, as it provides clear-cut deliverables and milestones. Applying the agile approach to geographically separated teams may introduce new challenges. As noted by Martin Fowler, a well-known agile evangelist, "Because agile development works best with close communication and an open culture, agilists working offshore feel the pain much more than those using plan-driven approaches."

What are the critical resources?

Some projects require the involvement of unique, highly skilled resources or integration with highly specialised equipment. Where such resources are not immediately available and require planning, the project team must ensure that the resource is fully utilised during its scheduled use, and tests must be performed on all relevant scenarios during the resource's available time. Otherwise, requesting another slot for the resource may cause project delays. In such cases, waterfall may be the better approach: each milestone must be completed before proceeding from one stage to the next, so you are assured that the critical resource is well utilised.

Source: Wiki

Sep 17, 2012

Importance of Software Testing

On the Internet, you can find many articles listing the losses caused by poor, low-quality software products.

How would you feel if a bug in banking software showed your bank balance as 0 instead of some thousands?
And if you are a student, what would your state of mind be if your mark-sheet showed your score as 0 instead of a good score?

In both cases, we would feel better seeing a notification or message instead of wrong data.

For example, a message such as “Unable to show your balance due to an unexpected error” does far more good than showing the balance as 0 in our first example.

Similarly, a message such as “Couldn't print your mark-sheet because of an unexpected issue” is more useful than showing the score as 0 in our second example.

Testing plays an important role in avoiding these situations.

So we can say that testing is necessary and important, even though it cannot guarantee 100% error-free software.

I.e., testing may not fix the issues, but it definitely helps provide improved user-friendliness.

Also,

- The cost of fixing a bug is higher when it is found at a later stage than when it is found early.

- Quality can only be demonstrated by testing. In a competitive market, only a quality product can survive for long.

Testing remains necessary even though 100% testing of an application is not possible.

One more important reason for testing is that the user/production environment can be completely different from the development environment.

For example, a web developer may use Firefox as the browser while developing a web page, but users may use different browsers such as Internet Explorer, Safari, Chrome or Opera.

A web page that looks good in Firefox may not look good in other browsers (particularly IE). Ultimately the user will not be happy, no matter how much effort the developer put into the page. Since user satisfaction is vital for the growth of any business, testing becomes all the more important.
So we can think of testers as the representatives of the users.
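As a rough sketch of cross-browser checking, the snippet below runs the same assertion in Firefox and Chrome using Selenium; it assumes the selenium package and the corresponding browser drivers are installed, and uses a placeholder URL:

```python
# A minimal cross-browser smoke check using Selenium (assumes the
# selenium package and the Firefox/Chrome drivers are installed).
from selenium import webdriver

URL = "https://example.com/"   # placeholder page under test

for make_driver in (webdriver.Firefox, webdriver.Chrome):
    driver = make_driver()
    try:
        driver.get(URL)
        # The same assertion runs in every browser, catching pages that
        # look good in one browser but break in another.
        assert "Example" in driver.title, f"unexpected title in {driver.name}"
        print(f"{driver.name}: OK - {driver.title}")
    finally:
        driver.quit()
```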

Writing Good Test Cases and Finding Bugs effectively

To develop a bug-free software application, writing good test cases is essential.

Here, we will see how to write good test cases.

Before that, we should understand what a good test case is.

There is no solid definition of a "good test case".

I consider a test case "good" only when a tester is happy to follow the steps of a test case written by another tester.

Test cases are useful only when people actually use them.
If a test case is poorly written, with excessive unwanted steps, most testers won't read it fully; they will just read a few lines and execute it based on their own understanding, which will often be wrong.

On the other hand, if it has too few details, it is difficult to execute.

As of now, I keep the following things in mind when writing effective test cases.
  • Before you start writing test cases, become familiar with the Application Under Test (AUT). You can do this by performing some ad-hoc/exploratory testing.

  • Read the requirements clearly and completely. Any questions about the requirements should be clarified with the appropriate person (e.g. the customer or the business team). It is also good practice to gather some basic domain knowledge before reading the requirements and writing test cases, and to have discussions or meetings with the developers and the business team.

  • Very importantly, use simple language and style when writing test cases, so that anyone can easily understand them without ambiguity.

  • Give meaningful and easily understandable Test Case IDs/numbers.
    For example, if you are writing test cases for the Login module, you can assign Test Case IDs as below (see the pytest sketch after this list).

    1a - positive scenario, such as entering a valid username and a valid password.
    1b - negative scenario, such as entering an invalid username and an invalid password.

    By numbering test cases this way instead of sequentially, we can easily add a new case, such as the one below, without needing to renumber any subsequent test cases.

    1c - negative scenario, such as entering a valid username and an invalid password.

  • Also, if we have similar modules, we can give each module its own sequence number.

    For example, assume we have separate login modules for User and Admin, with small differences.
    In this case we can number the cases as below:
    1.1 - first case in the User module.
    1.2 - second case in the User module.
    2.1 - first case in the Admin module.
    2.2 - second case in the Admin module.

    If the test description/steps/expected results of 2.1 are mostly the same as for 1.1, we should refer to 1.1 from 2.1 instead of writing the same details again.

    By doing this, we avoid redundant details and keep the test case document clear.

  • The test description should be short and should uniquely represent the test scenario without ambiguity.

  • Never use an "if" condition in the test steps. Always address only one scenario per test case; this keeps the expected result unambiguous.

  • Give some sample test data that will be useful for executing the test cases.

  • If a test case requires any preconditions/prerequisites, don't forget to mention them.
    Better still, order the test cases so that the need for stating preconditions is minimal.

    For example, suppose we need to write test cases for user creation, user modification and user deletion.

    User modification and user deletion require an already created user as a precondition.

    If we arrange the test cases in the order below, we avoid the need to state any preconditions/prerequisites:
    1 - Test case for creating a user.
    2 - Test case for verifying that a duplicate/existing user is rejected when adding another user with the same username.
    3 - Test case for modifying a user.
    4 - Test case for deleting a user.

  • Keep a traceability matrix to make sure that test cases have been written to cover all requirements.

  • After completing all positive scenarios, think through all possible negative scenarios, so that the test cases will effectively find most of the bugs.

    To do this, refer to the alternate-flow section of the use case document, and think about different data, boundary conditions, different navigation paths and multi-user environments.

  • In the test case document, we can link to screenshots that explain the steps and/or expected results with pictures. However, it is not good practice to embed screenshots in the test case document itself unless it is truly essential.

  • Many tools are available to capture screenshots and user actions as video. We can use them to record a video that explains the steps and expected results clearly when a test case involves complex steps, and link to that video from the test case document.
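Below is a minimal pytest sketch of how several of these guidelines look in practice: meaningful test case IDs in the 1a/1b style, one scenario per test with no "if" branches, and an ordering (create first, then duplicate check) that removes the need for separately stated preconditions. The FakeUserApp class and its methods are purely illustrative stand-ins for a real application under test.

```python
# test_users.py -- run with: pytest test_users.py
import pytest

class FakeUserApp:
    """Hypothetical stand-in for the application under test."""
    def __init__(self):
        self.users = {"alice": "s3cret"}
    def login(self, user, password):
        return self.users.get(user) == password
    def create(self, user):
        if user in self.users:
            return False
        self.users[user] = ""
        return True

@pytest.fixture(scope="module")
def app():
    # Precondition handled once, not restated in every test.
    return FakeUserApp()

# IDs mirror the 1a/1b scheme above: new cases slot in without renumbering.
def test_1a_login_valid_username_valid_password(app):
    assert app.login("alice", "s3cret") is True

def test_1b_login_invalid_username_invalid_password(app):
    assert app.login("nobody", "wrong") is False

# Ordered as recommended: create first, then the duplicate check, so the
# duplicate test needs no separately stated precondition.
def test_2_create_user(app):
    assert app.create("bob") is True

def test_3_duplicate_user_rejected(app):
    assert app.create("bob") is False
```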

FINDING A BUG IN A WEB PAGE
The first step in creating a testcase is finding a bug in the first place. There are four ways of doing this:
1. Letting someone else do it for you: Most of the time, the testcases you write will be for bugs that other people have filed. In those cases, you will typically have a Web page that renders incorrectly, either a demo page or an actual Web site. However, it is also possible that the bug report has no problem page listed, just a problem description.

2. Alternatively, you can find a bug yourself while browsing the Web. In such cases, you will have a Web site that renders incorrectly.

3. You could also find the bug because one of the existing testcases fails. In this case, you have a Web page that renders incorrectly.

4. Finally, the bug may be hypothetical: you might be writing a test suite for a feature without knowing if the feature is broken or not, with the intention of finding bugs in the implementation of that feature. In this case you do not have a Web page, just an idea of what a problem could be.

Source: Blogger Testing

Sep 14, 2012

Test Process

To get the most out of testing activities, a defined process must be followed. But before any testing activity begins, much effort should be spent on producing a good test plan. A good test plan goes a long way towards ensuring that the testing activities stay true to what the testing is trying to achieve.
This test process is documented in the standard BS7925-2, Software Component Testing. It therefore relates most closely to component testing, but is considered general enough to apply to all levels of testing (i.e. component, integration in the small, system, integration in the large, and acceptance testing).
It is perhaps most applicable to a fairly formal testing environment (such as mission critical). Most commercial organisations have less rigorous testing processes. However, any testing effort can use these steps in some form.
The Fundamental Test Process comprises five activities: Planning, Specification, Execution, Recording, and Checking for Test Completion. The test process always begins with Test Planning and ends with Checking for Test Completion. Any and all of the activities may be repeated (or at least revisited), since a number of iterations may be required before the completion criteria defined during the Test Planning activity are met. One activity does not have to be finished before another is started; later activities for one test case may occur before earlier activities for another. Throughout this cycle, the progress of the activities needs to be monitored and controlled so that the testing stays in line with the test plan.
The five activities are described in more detail below.
Planning
The basic philosophy is to plan well. All good testing is based upon good test planning. There should already be an overall test strategy and possibly a project test plan in place. This Test Planning activity produces a test plan specific to a level of testing (e.g. system testing). These test level specific test plans should state how the test strategy and project test plan apply to that level of testing and state any exceptions to them. When producing a test plan, clearly define the scope of the testing and state all the assumptions being made. Identify any other software required before testing can commence (e.g. stubs & drivers, word processor, spreadsheet package or other 3rd party software) and state the completion criteria to be used to determine when this level of testing is complete.
Example completion criteria are listed below (some are better than others, and using a combination of criteria is usually better than using just one; a sketch of how such criteria might be checked automatically follows the list):
  • 100% statement coverage;
  • 100% requirement coverage;
  • all screens / dialogue boxes / error messages seen;
  • 100% of test cases have been run;
  • 100% of high severity faults fixed;
  • 80% of low & medium severity faults fixed;
  • maximum of 50 known faults remain;
  • maximum of 10 high severity faults predicted;
  • time has run out;
  • testing budget is used up.
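As a hedged illustration of how such criteria might be checked automatically at the end of a test cycle, here is a small Python sketch; the metric names and the particular thresholds are assumptions chosen to mirror the examples above, not part of any standard.

```python
# Illustrative completion-criteria check; metric names and thresholds
# are assumptions, not taken from BS7925-2.
def completion_met(metrics: dict) -> bool:
    criteria = [
        metrics["statement_coverage"] >= 1.00,     # 100% statement coverage
        metrics["tests_run_ratio"] >= 1.00,        # 100% of test cases run
        metrics["high_sev_fixed_ratio"] >= 1.00,   # 100% high severity fixed
        metrics["low_med_fixed_ratio"] >= 0.80,    # 80% low/medium fixed
        metrics["known_faults_remaining"] <= 50,   # max 50 known faults
    ]
    return all(criteria)

metrics = {
    "statement_coverage": 0.97,
    "tests_run_ratio": 1.00,
    "high_sev_fixed_ratio": 1.00,
    "low_med_fixed_ratio": 0.85,
    "known_faults_remaining": 42,
}
print(completion_met(metrics))   # False: statement coverage is below 100%
```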
Specification
The fundamental test process describes this activity as designing the test cases using the techniques selected during planning. For each test case, specify its objective, the initial state of the software, the input sequence and the expected outcome. Since this is a little vague we have broken down the Test Specification activity into three distinct tasks to provide a more helpful explanation. (Note that this more detailed explanation of the Test Specification is not a part of the Foundation syllabus.)
Specification can be considered as three separate tasks:
  • Identify test conditions – determine 'what' is to be tested;
  • Design test cases – determine ‘how’ the identified test conditions are going to be exercised;
  • Build test cases – implementation of the test cases (scripts, data, etc.).
Execution
The purpose of this activity is to execute all of the test cases (though not necessarily all in one go). This can be done either manually or with the use of a test execution automation tool (providing the test cases have been designed and built as automated test cases in the previous stage).
The order in which the test cases are executed is significant. The most important test cases should be executed first. In general, the most important test cases are the ones that are most likely to find the most serious faults but may also be those that concentrate on the most important parts of the system.
There are a few situations in which we may not wish to execute all of the test cases. When testing just fault fixes we may select a subset of test cases that focus on the fix and any likely impacted areas (most likely all the test cases will have been run in a previous test effort). If too many faults are found by the first few tests we may decide that it is not worth executing the rest of them (at least until the faults found so far have been fixed). In practice time pressures may mean that there is time to execute only a subset of the specified test cases. In this case it is particularly important to have prioritised the test cases to ensure that at least the most important ones are executed.
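A minimal sketch of prioritised execution, assuming each test case carries an importance score (the cases and scores below are invented for illustration):

```python
# Prioritised execution order: most important test cases first, so a
# test run that is cut short still covers what matters most.
test_cases = [
    ("TC-07 cosmetic label check", 1),
    ("TC-01 save customer record", 9),
    ("TC-04 login", 8),
    ("TC-12 rarely used export", 3),
]

# Sort descending by importance before executing; under time pressure,
# slicing this sorted list still runs the most important cases.
for name, importance in sorted(test_cases, key=lambda tc: tc[1], reverse=True):
    print(f"executing {name} (importance {importance})")
```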
If any other ideas for test conditions or test cases occur they should be documented where they can be considered for inclusion.
Recording
In practice the Test Recording activity is done in parallel with Test Execution. To start with we need to record the versions of the software under test and the test specification being used. Then for each test case we should record the actual outcome and the test coverage levels achieved for those measures specified as test completion criteria in the test plan. In this way we will be marking off our progress. The test record is also referred to as the “test log”, but “test record” is the terminology in the syllabus. Note that this has nothing to do with the recording or capturing of test inputs that some test tools perform!
The actual outcome should be compared against the expected outcome and any discrepancy found logged and analysed in order to establish where the fault lies. It may be that the test case was not executed correctly in which case it should be repeated. The fault may lie in the environment set-up or be the result of using the wrong version of software under test. The fault may also lie in the specification of the test case: for example, the expected outcome could be wrong. Of course the fault may also be in the software under test! In these cases the fault should be fixed and the test case executed again.
The records made should be detailed enough to provide an unambiguous account of the testing carried out. They may be used to establish that the testing was carried out according to the plan.
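As a rough sketch of what a test record might capture, here is a small Python data class; the field names are illustrative assumptions, not prescribed by the syllabus or BS7925-2.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TestRecord:
    """One test-log entry (field names are illustrative)."""
    test_case_id: str
    software_version: str     # record the version under test...
    spec_version: str         # ...and the test specification version
    expected: str
    actual: str
    run_at: datetime = field(default_factory=datetime.now)

    @property
    def passed(self) -> bool:
        # Any discrepancy is logged and then analysed: the fault may be
        # in the software, the environment, or the test case itself.
        return self.actual == self.expected

rec = TestRecord("TC-04", "app 2.1.3", "spec 1.2",
                 expected="balance=100", actual="balance=0")
print(rec.passed)   # False -> log the discrepancy and analyse it
```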
Checking for Completion
This activity has the purpose of checking the records against the completion criteria specified in the test plan. If these criteria are not met, it will be necessary to go back to the specification stage to specify more test cases to meet the completion criteria. There are many different types of coverage measure and different coverage measures apply to different levels of testing.
Comparison of the five activities
Comparing these five activities of the Fundamental Test Process it is easy to see that the first two activities (Test Planning and Test Specification) are intellectually challenging. Planning how much testing to do, determining appropriate completion criteria, etc. requires careful analysis and thought. Similarly, specifying test cases (identifying the most important test conditions and designing good test cases) requires a good understanding of all the issues involved and skill in balancing them. It is these intellectual tasks that govern the quality of test cases.
The next two activities (Test Execution and Test Recording) involve predominantly clerical tasks. Furthermore, executing and recording are activities that are repeated many times whereas the first two activities, Test Planning and Test Specification are performed only once (they may be revisited if the completion criteria are not met the first time around but they are not repeated from scratch). The Test Execution and Test Recording activities can be largely automated and there are significant benefits in doing so.
Source: Excellence Testing

Test Strategy VS Test Plan

Test Strategy
A Test Strategy document is a high-level document, normally developed by the project manager. It defines the "testing approach" used to achieve the testing objectives, and is normally derived from the Business Requirement Specification document.
The Test Strategy document is a static document, meaning it is not updated often. It sets the standards for testing processes and activities, and other documents, such as the Test Plan, draw their contents from the standards set in the Test Strategy document.
Some companies include the "test approach" or "strategy" inside the Test Plan, which is fine and is usually the case for small projects. For larger projects, however, there is one Test Strategy document and a number of Test Plans, one for each phase or level of testing.
Components of the Test Strategy document
  • Scope and Objectives
  • Business issues
  • Roles and responsibilities
  • Communication and status reporting
  • Test deliverables
  • Industry standards to follow
  • Test automation and tools
  • Testing measurements and metrics
  • Risks and mitigation
  • Defect reporting and tracking
  • Change and configuration management
  • Training plan
Test Plan
The Test Plan document, on the other hand, is derived from the Product Description, the Software Requirement Specification (SRS), or Use Case documents.
The Test Plan document is usually prepared by the Test Lead or Test Manager and the focus of the document is to describe what to test, how to test, when to test and who will do what test.
It is not uncommon to have one Master Test Plan as a common document for all the test phases, with each test phase having its own Test Plan document.
There is much debate as to whether the Test Plan document should also be static, like the Test Strategy document mentioned above, or whether it should be updated frequently to reflect changes in the direction of the project and its activities.
My own personal view is that when a testing phase starts and the Test Manager is "controlling" the activities, the test plan should be updated to reflect any deviation from the original plan. After all, planning and control are continuous activities in the formal test process.
Components of the Test Plan document
  • Test Plan ID
  • Introduction
  • Test items
  • Features to be tested
  • Features not to be tested
  • Test techniques
  • Testing tasks
  • Suspension criteria
  • Feature pass/fail criteria
  • Test environment (Entry criteria, Exit criteria)
  • Test deliverables
  • Staff and training needs
  • Responsibilities
  • Schedule
This is a standard approach to preparing test plan and test strategy documents, but things can vary from company to company.

7 Deadly Sins of Automated Software Testing

1. Gluttony

Over-indulging in proprietary testing tools
Many commercial testing tools provide simple features for automating the capture and replay of manual test cases. While this approach seems sound, it encourages testing through the user interface and results in inherently brittle tests that are difficult to maintain. Additionally, the cost and the restrictions that licensed tools place on who can access the test cases are an overhead that tends to prevent collaboration and team work. Furthermore, storing test cases outside the version control system creates unnecessary complexity. As an alternative, open source test tools can usually solve most automated testing problems, and the test cases can easily be included in the version control system.

2. Sloth

Too lazy to set up a CI server to execute tests
The cost and rapid-feedback benefits of automated tests are best realised when the tests are executed regularly. If your automated tests are initiated manually rather than through the continuous integration (CI) system, there is a significant risk that they are not being run regularly and may in fact be failing. Make the effort to ensure automated tests are executed through the CI system.

3. Lust

Loving the UI so much that all tests are executed through the UI
Although automated UI tests provide a high level of confidence, they are expensive to build, slow to execute and fragile to maintain. Testing at the lowest possible level is a practice that encourages collaboration between developers and testers, increases test execution speed and reduces test implementation costs. Automated unit tests should carry the majority of the test effort, followed by integration, functional, system and acceptance tests. UI-based tests should be used only when the UI itself is being tested or there is no practical alternative.
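As a hedged illustration of "testing at the lowest possible level", the sketch below checks a business rule directly as a unit test instead of driving it through a checkout screen; the free_shipping function is an invented example.

```python
# Testing at the lowest practical level: the rule checked as a fast
# unit test rather than through the UI.
def free_shipping(total: float) -> bool:
    """Hypothetical business rule buried behind a checkout screen."""
    return total >= 50.0

def test_free_shipping_boundary():
    # Milliseconds to run, no brittle UI locators; a UI test for the
    # same rule would need a browser, a rendered page, and test data.
    assert free_shipping(50.0) is True
    assert free_shipping(49.99) is False
```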

4. Envy

Jealously creating tests and not collaborating
Test-driven development is an approach to development that is as much a design activity as a testing practice. The process of defining test cases (or executable specifications) is an excellent way of ensuring a shared understanding, among everyone involved, of the actual requirement being developed and tested. The practice is often associated with unit testing but can equally be applied to other test types, including acceptance testing.

5. Rage

Frustration with fragile tests that break intermittently
Unreliable tests are a major reason why teams ignore or lose confidence in automated tests. Once confidence is lost, the value initially invested in automated tests is dramatically reduced. Fixing failing tests and resolving the issues associated with brittle tests should be a priority, to eliminate false positives.

6. Pride

Thinking automated tests will replace all manual testing
Automated tests are not a replacement for manual exploratory testing. A mixture of testing types and levels is needed to achieve the desired quality and mitigate the risk associated with defects. The automated testing triangle, originally described by Mike Cohn, explains that the investment in tests should focus at the unit level and then decrease up through the application layers.

7. Avarice (Greed)

Too much automated testing, not matched to the defined system quality
Testing effort needs to match the desired system quality; otherwise there is a risk that too much, too little or the wrong things will be tested. It is important to define the required quality and then match the testing effort to suit. This can be done in collaboration with business and technical stakeholders to ensure a shared understanding of the risks and potential technical-debt implications.

Summary

Automated testing is not without pitfalls and some of these are identified here. There are of course many other anti-patterns for automated testing that justify further discussion.

Source: Testing Excellence