Dec 15, 2011

Does Quality Assurance (QA) Remove the Need for Quality Control (QC)?


"If QA (Quality Assurance) is done, then why do we need to perform QC (Quality Control)?" This thought may cross our minds at times, and it seems like a valid point too. In other words, if we have followed all the pre-defined processes, policies and standards correctly and completely, why do we still need a round of QC?


In my opinion, QC is still required after QA is done.
In QA we define the processes, policies and strategies, establish standards, develop checklists, etc., to be used and followed throughout the life cycle of a project. In QC we follow all those defined processes, standards and policies to make sure that the project has been developed with high quality and at least meets the customer's expectations.

QA does not assure quality by itself; rather, it creates the processes and ensures they are being followed in order to assure quality. QC does not control quality; rather, it measures quality.

QC measurement results can be used to correct or modify the QA processes, and these improved processes can then be applied successfully to new projects as well.
Quality control activities are focused on the deliverable itself; quality assurance activities are focused on the processes used to create the deliverable.

QA and QC are both powerful techniques that can be used to ensure that deliverables meet the customers' high quality expectations.

For example, suppose we use an issue tracking system to log bugs while testing a web application. QA would include defining the standard for adding a bug and which details a bug report must contain, such as a summary of the issue, where it was observed, steps to reproduce it, screenshots, etc. This is the process for creating the deliverable 'bug report'. When a bug is actually added to the issue tracking system according to these standards, that bug report is our deliverable.
Now, suppose that at a later stage of the project we realize that adding a 'probable root cause' to each bug, based on the tester's analysis, would give the development team more insight. We would then update our pre-defined process, and the change would eventually be reflected in our bug reports as well. This is how QC gives input back to QA to improve it further; a minimal sketch of such a field checklist appears below.
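Purely as an illustration (the field names below are hypothetical, not taken from any particular issue tracker), the QA-defined bug report standard can be thought of as a checklist that QC then applies to every actual report:

# A minimal sketch of a QA-defined bug report standard (all field names are
# hypothetical). QA defines REQUIRED_FIELDS; QC checks each actual report.
REQUIRED_FIELDS = {"summary", "observed_in", "steps_to_reproduce", "screenshots"}

def missing_fields(bug_report: dict) -> set:
    """Return the required fields that are absent or empty in a bug report."""
    return {f for f in REQUIRED_FIELDS if not bug_report.get(f)}

bug = {
    "summary": "Login button unresponsive on checkout page",
    "observed_in": "build 1.4.2, staging environment",
    "steps_to_reproduce": "1. Add item to cart 2. Go to checkout 3. Click Login",
    "screenshots": ["login_error.png"],
    # adding 'probable_root_cause' later simply means extending REQUIRED_FIELDS
}
assert not missing_fields(bug), f"Incomplete bug report: {missing_fields(bug)}"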

Following is an example of a real life scenario for QA / QC:
QA Example:

Suppose our team has to work with a completely new technology for an upcoming project, and the team members are new to it. We therefore need to create a plan for training the team in the new technology. Based on our knowledge, we collect prerequisites such as overview documents and the product design along with its documents, and share them with the team; these will be helpful while working with the new technology and will also be useful for any newcomer to the team. This is QA.

QC Example:

Once the training is done, how can we make sure that it was successful for all the team members? For this purpose we have to collect statistics, e.g. the marks each trainee scored in each subject and the minimum marks expected after completing the training. We can also confirm that everybody attended the training in full by verifying the candidates' attendance records. If the candidates' marks meet the trainer's/evaluators' expectations, we can say the training was successful; otherwise we have to improve our process in order to deliver higher-quality training. A small sketch of this kind of check appears below.
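As a rough sketch (the names, marks and the minimum of 60 are made up for illustration), the QC check on the training results could be as simple as:

# Hypothetical training results: marks per trainee and attendance, with an
# assumed minimum of 60 marks expected by the trainer/evaluators.
MIN_MARKS = 60

results = {
    "trainee_a": {"marks": 78, "attended_all_sessions": True},
    "trainee_b": {"marks": 55, "attended_all_sessions": True},
    "trainee_c": {"marks": 82, "attended_all_sessions": False},
}

def training_successful(results, min_marks=MIN_MARKS):
    """Training counts as successful only if every trainee attended fully
    and scored at least the expected minimum marks."""
    return all(r["marks"] >= min_marks and r["attended_all_sessions"]
               for r in results.values())

if not training_successful(results):
    print("Training process needs improvement before the next batch.")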

Hope this explains the difference between QA and QC.
 

SDLC

SOFTWARE DEVELOPMENT LIFE CYCLE [SDLC]
Information:
Software Development Life Cycle, or Software Development Process, defines the steps/stages/phases in the building of software.

There are various kinds of software development models like:

Waterfall model
Spiral model
Iterative and incremental development (like ‘Unified Process’ and ‘Rational Unified Process’)
Agile development (like ‘Extreme Programming’ and ‘Scrum’)

Models evolve over time, and the development life cycle can vary significantly from one model to another. It is beyond the scope of this article to discuss each model. However, each model comprises all or some of the following phases/activities/tasks.


SDLC IN SUMMARY

Project Planning
Requirements Development
Estimation
Scheduling
Design
Coding
Test Build/Deployment
Unit Testing
Integration Testing
User Documentation
System Testing
Acceptance Testing
Production Build/Deployment
Release
Maintenance


SDLC IN DETAIL

Project Planning

Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline

Requirements Development [Business Requirements and Software/Product Requirements]

Develop
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline

Estimation [Size / Effort / Cost]

Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline


Scheduling

Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Design [High Level Design and Detailed Design]

Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline


Coding

Code
Review
Rework
Commit
Recode [if necessary] >> Review >> Rework >> Commit


Test Builds Preparation/Deployment

Build/Deployment Plan
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Build/Deploy


Unit Testing

Test Plan

Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline

Test Cases/Scripts

Prepare
Review
Rework
Baseline
Execute
Revise [if necessary] >> Review >> Rework >> Baseline >> Execute


Integration Testing

Test Plan

Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline

Test Cases/Scripts

Prepare
Review
Rework
Baseline
Execute
Revise [if necessary] >> Review >> Rework >> Baseline >> Execute


User Documentation

Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline


System Testing

Test Plan

Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline

Test Cases/Scripts

Prepare
Review
Rework
Baseline
Execute
Revise [if necessary] >> Review >> Rework >> Baseline >> Execute

Acceptance Testing [ Internal Acceptance Test and External Acceptance Test]

Test Plan

Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline

Test Cases/Scripts

Prepare
Review
Rework
Baseline
Execute
Revise [if necessary] >> Review >> Rework >> Baseline >> Execute

Production Build/Deployment
Build/Deployment Plan
Prepare
Review
Rework
Baseline
Revise [if necessary] >> Review >> Rework >> Baseline
Build/Deploy

Release
Prepare
Review
Rework
Release


Maintenance
Recode [Enhance software / Fix bugs]
Retest
Redeploy
Rerelease

Notes:
The life cycle mentioned here is NOT set in stone and each phase does not necessarily have to be implemented in the order mentioned.
Though SDLC uses the term ‘Development’, it does not focus just on the coding tasks done by developers but incorporates the tasks of all stakeholders, including testers.
There may still be many other activities/tasks not specifically mentioned above, such as Configuration Management. No matter what, it is essential that you clearly understand the software development life cycle your project is following. One issue that is widespread in many projects is that software testers are involved much later in the life cycle, due to which they lack visibility and authority (which ultimately compromises software quality).

Dec 14, 2011

Application Testing


Application Testing – Into the Basics of Software Testing!
Topics we will cover in this article:
- Application Testing
- Categories of Applications
- Application Testing Methodologies
- Application Testing Tools
- Software Test Plan
- Application Testing Cycles
- Application Testing – Best Practices
Application testing is an activity that every software tester performs daily in his or her career. These two words cover an extremely broad area in practice; however, only the core and most important areas will be discussed here. The purpose of this article is to touch on all the primary areas so that readers get a basic briefing in a single place.
Categories of Applications
Whether it is a small calculator with only basic arithmetic operations or an online enterprise solution, applications fall into two categories:
a. Desktop
b. Web
For desktop applications, testing should take into account the UI, business logic, database, reports, roles and rights, integrity, usability and data flow. For web applications, along with all these major areas, testers should give sufficient importance to the performance, load and security of the application. So the AUT (Application Under Test) is either desktop software or a website.

Application Testing Methodologies
This is a well-known and well-discussed aspect; there are three universally accepted methodologies:
a. Black Box: In black-box testing, the AUT is validated against its requirements by considering the inputs and expected outputs, regardless of how the inputs are transformed into outputs. Testers are not concerned with the internal structure or code that implements the business logic of the application. There are four primary techniques for designing black box test cases (a short sketch of the first two follows this list):
i. BVA (Boundary Value Analysis)
ii. EP (Equivalence Partitioning)
iii. Decision Tables
iv. State Transition Tables (and diagrams)
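To make the first two techniques concrete, here is a minimal sketch assuming a hypothetical validate_age function whose specification accepts ages from 18 to 60 inclusive; BVA picks values at and around the boundaries, while EP picks one representative per partition:

# Hypothetical function under test: the spec says ages 18-60 are valid.
def validate_age(age: int) -> bool:
    return 18 <= age <= 60

# Boundary Value Analysis: values at and just around each boundary.
bva_cases = [(17, False), (18, True), (19, True), (59, True), (60, True), (61, False)]

# Equivalence Partitioning: one representative from each partition
# (below range, inside range, above range).
ep_cases = [(5, False), (35, True), (90, False)]

for age, expected in bva_cases + ep_cases:
    assert validate_age(age) == expected, f"unexpected result for age={age}"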
b. White Box: The primary focus of this methodology is to validate how the business logic of the application is implemented in code. The internal structure of the application is tested, and the techniques available to do so are:
i. Code Coverage
ii. Path Coverage
Both of the techniques listed above contain several other strategies that may be discussed in another article; some are covered under the 'Test Case Design Techniques' topic. A tiny branch-coverage example follows below.
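As a small illustration of code coverage (branch coverage in particular), assume a hypothetical discount function with two branches; white box testing asks whether the tests execute both of them. A coverage tool can report this automatically, but the idea is simply:

# Hypothetical function with two branches; white box tests aim to cover both.
def discount(order_total: float) -> float:
    if order_total >= 100:          # branch 1: large orders get 10 off
        return order_total - 10
    return order_total              # branch 2: no discount

# Two tests, one per branch, give 100% branch coverage of this function.
assert discount(150.0) == 140.0   # exercises branch 1
assert discount(40.0) == 40.0     # exercises branch 2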
c. Grey Box: Practically speaking, this is a mixture of black box and white box. In this methodology the tester mostly tests the application as a black box, but for some business-critical or vulnerable modules of the application, testing is done as white box.
Application Testing Tools
To the best of my knowledge, there are at least 50 testing tools available on the market today, including both paid and open source tools. Moreover, some tools are purpose-specific, e.g. UI testing, functional testing, DB testing, load testing, performance testing, security testing, link validation, etc. However, some tools are strong enough to cover several major aspects of an application. The general sense of 'application testing' is functional testing, so our focus will be on functional testing tools.
Here is the list of some most important and fundamental features that are provided by almost all of the ‘Functional Testing’ tools.
a. Record and Play
b. Parametrize the Values
c. Script Editor
d. Run (the test or script, with debug and update modes)
e. Report of the run session
Different vendors provide specific features that make their products stand out from competitor products, but the five features listed above are the most common and can be found in almost all functional testing tools. A small parameterized (data-driven) test sketch follows below.
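The "parameterize the values" feature is easiest to see in code. As a sketch (using pytest's parametrize rather than any particular commercial tool, and a hypothetical login_allowed function standing in for the application under test), the same test script runs once per data row:

# A data-driven sketch using pytest; login_allowed is a hypothetical stand-in
# for the application under test.
import pytest

def login_allowed(username: str, password: str) -> bool:
    return username == "admin" and password == "s3cret"

@pytest.mark.parametrize("username, password, expected", [
    ("admin", "s3cret", True),    # valid credentials
    ("admin", "wrong", False),    # wrong password
    ("guest", "s3cret", False),   # unknown user
])
def test_login(username, password, expected):
    assert login_allowed(username, password) == expected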
Following is the list of few widely used Functional Testing tools.
1) HP QTP (Quick Test Professional)
2) Selenium
3) IBM Rational Robot
4) Test Complete
5) Push to Test
6) Telerik
Software Test Plan (STP)
For any activity some planning is always required, and the same is true for software testing. Without a proper plan there is a high risk of getting distracted during testing, and if that risk materializes, the results can be dire.
Following are the 5 main parts of a good Test Plan:
a. Scope
i. Overview of AUT
ii. Features (or areas) to be tested
iii. Exclusions (features or areas not to be tested) with reason
iv. Dependencies (of testing activities on each other, if any)
b. Objectives: This section describes the goals of the testing activity, e.g. validation of bug fixes, newly added features, or a revamp of the AUT.
c. Focus: This section describes which aspects of the application will be included in the testing, e.g. security, functionality, usability, reliability, performance or efficiency.
d. Approach: This section describes which testing methodology will be adopted for which areas of the AUT. For example, in the STP of an ERP application, the approach section may state that black box testing will be the approach for payroll, while the approach for reports will be grey box testing.
e. Schedule: This section describes who will be doing what on the AUT, where, when and how; the schedule section is, in fact, the '4 Ws and H' of the STP. Normally it is a simple table, but every organization may have its own customized format according to its needs. Once the test plan is ready and the application is under development, testers design and document the test cases. The test case design techniques were listed in the 'Application Testing Methodologies' section above.
Application Testing Cycles
Once the AUT is ready for testing, the practical phase of the testing cycle starts, in which testers actually execute the test cases against the AUT. Keep in mind that the testing cycle is discussed here regardless of testing levels (Unit, Module, Integration, System and User Acceptance) and testing environments (Dev, QA, Client's Replica, Live).
a. Smoke Testing: The very first testing cycle, wide and shallow in approach. The purpose of smoke testing is to verify that there are no crashes in the application and that it is suitable for further testing.
b. Sanity Testing: The second testing cycle, narrow and deep in its approach. Its purpose is to verify that a specific module is working properly and is suitable for complete testing.
Tip: Usually there is not enough time to run these two cycles separately, so in practice a mixture of the two is adopted.
c. Functional Testing: The proper, full-fledged testing of the application is performed in this cycle. The primary focus of this activity is to verify the business logic of the application.
d. Regression Testing: This is the final cycle of testing, in which bug fixes and/or updates are verified. Regression testing also ensures that the fixes and changes have not caused malfunctions in other areas of the AUT.
Bugs are logged in every testing cycle, and there is no distinct borderline between the cycles. For example, regression testing also verifies functionality, and it may require a smoke or sanity pass (or a merger of the two) first. A small sketch showing how such cycles can be tagged and selected in an automated suite follows below.
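One lightweight way to organize these cycles in an automated suite is to tag tests by cycle and select them at run time. The sketch below uses pytest markers; the marker names smoke and regression are my own choice, not a standard:

# Sketch: tagging tests by cycle with pytest markers. Register the markers in
# pytest.ini ([pytest] section, "markers =" entries) to avoid warnings.
import pytest

@pytest.mark.smoke
def test_application_starts():
    assert True  # stands in for "no crash on launch"

@pytest.mark.regression
def test_fixed_bug_stays_fixed():
    assert True  # stands in for re-verifying an earlier fix

# Run only the smoke cycle:      pytest -m smoke
# Run only the regression cycle: pytest -m regression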
Application Testing – Best Practices
I think hundreds of articles about this are available on the internet. Every article suggests a different number of best practices, ranging from 7 to 30 (as far as I have seen). However, I have just five tips for readers:
Plan Properly
Test Keenly
Log the bugs Clearly
Do Regression Test Efficiently
Improve above four skills Continuously
Conclusion: Application testing is a vast subject and the primary activity of any software tester. In this article I have provided an overview of the most fundamental and necessary areas that fall under this topic. Application testing involves strategies, approaches, tools, technologies and guidelines; here I have addressed the conceptual and practical side of its salient concerns.

Dec 13, 2011

Game Testing!!!! Stepping stone into the world market...!!

Want to Get Paid to Play Games? Become a Game Tester!

Game Testing Industry Introduction:
The video game testing industry is growing rapidly. In spite of the recession there was no dearth in sales of game titles, although game console sales were hit and game testing companies had to revamp their strategies.
Gaming has had its ups and downs over the years, but it continues to grow by leaps and bounds. Facebook application games are genuinely path-breaking, with budding developers experimenting with their knowledge. Episodic games are the new thing, and games for the iPhone are the new frontier.
So no one in the game industry knows where games will be even two or three years from now. The only thing they know is that everything is changing, and that the games released in a few years will be different from what we have now.
The gaming industry makes the following kinds of jobs available:
  • Video game programming Jobs (designing video games)
  • Video game testing jobs
Designing video games requires skilled and experienced game designers. Testing video games is equally challenging, as a game tester needs solid writing skills, very good communication skills, and a habit of paying close attention to detail.
Video game testers play a critical role in the game development industry: programmers spend years designing a video game, and the tester has to make sure it is ready for release in a very short time span.

What is a typical Game Testing Process?

Computer games take from one to three years to develop (depending on scale). Testing begins late in the development process, sometimes from halfway to 75% into development (it starts so late because, until then, there is little to play or test).
Once the testers get a version, they begin playing the game. Testers must carefully note any errors they uncover; these may range from bugs to art issues to logic errors. Some bugs are easy to document, but many are hard to describe and may take several steps to write up so that a developer can replicate or find the bug. On a large-scale game with numerous testers, a tester must first determine whether the bug has already been reported before logging it themselves (a minimal duplicate-check sketch follows below). Once a bug has been reported as fixed, the tester has to go back and verify that the fix works, and occasionally return later to verify that it has not reappeared.
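The "check whether it has already been reported" step can be approximated very simply. As a sketch (the reports list and the normalization rule are invented for illustration; real trackers provide search for this):

# Sketch of a duplicate check before logging a new game bug. Matching on a
# normalized summary is deliberately naive.
existing_reports = [
    "Camera clips through wall in level 3",
    "Save file corrupted after alt-tab",
]

def is_duplicate(summary: str, reports) -> bool:
    normalize = lambda s: " ".join(s.lower().split())
    return normalize(summary) in {normalize(r) for r in reports}

new_bug = "camera clips through wall in Level 3"
if not is_duplicate(new_bug, existing_reports):
    existing_reports.append(new_bug)   # log it
else:
    print("Already reported - add a note to the existing bug instead.")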

Game Testing Strategy:

Evaluation of game rules:
Verify that the game rules adequately explain the operation of all components of the game, including features, free games, etc., and that the game functions as defined by those rules.
UI, Functional, Performance and Compatibility test:
Verify that game outcomes and data are shown correctly when games are played. Verify game functionality such as game progress, game outcomes, handling of incomplete and restarted games, and multi-player games.
Verifying the integration points:
Check whether game win determination aligns with the game rules.
Reviewing gaming procedures:
Review the procedures for system management, player account management, tournaments and promotions.
Infrastructure and security review:
Verify all equipment and the network implementation, and check for secure and reliable operation, for example time synchronization, OS reliability and security.

How to Test Games?

This process is quite similar to product or web application testing. Here is the typical game testing process:
Identification: First analyze and identify the game rules and behavior.
Functional Testing: Ensure the game works as intended. This also includes integration testing with any third-party tools used.
OS and Browser Compatibility: A critical part of game testing is ensuring the game works on the required operating systems. For online games, check functionality on all intended browsers.
Performance Testing: This becomes critical for online games, especially if the gaming site handles betting. Game testers must verify that the site handles the customer load smoothly (see the load sketch after this list).
Multi-player Testing: For multi-player games, verify that the game handles all players correctly, with fair distribution of game resources to all players.
Reporting: Report bugs to developers. Bug evidence needs to be produced and submitted through the bug reporting system.
Analysis: Developers are responsible for fixing the bugs.

Verification: After the fix, the bug needs to be verified by the testers to confirm that it does not reappear.
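A very rough load sketch, not a substitute for a proper load testing tool: it fires a batch of concurrent requests at a placeholder URL (http://localhost:8080/play is an assumption, not a real endpoint) and reports how many succeeded and how long the slowest one took.

# Rough concurrency sketch using only the standard library; the URL is a
# placeholder and the numbers are far below what a real load test would use.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

GAME_URL = "http://localhost:8080/play"   # hypothetical endpoint
CONCURRENT_PLAYERS = 20

def hit_endpoint(_):
    start = time.time()
    try:
        with urlopen(GAME_URL, timeout=5) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_PLAYERS) as pool:
    results = list(pool.map(hit_endpoint, range(CONCURRENT_PLAYERS)))

successes = sum(ok for ok, _ in results)
print(f"{successes}/{CONCURRENT_PLAYERS} requests succeeded, "
      f"slowest took {max(t for _, t in results):.2f}s")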

Game Testing Tips:

1) Understand Random Number Generator (RNG) evaluation: the RNG is what adds unpredictability to a game, and in most games it is used to determine game outcomes (a simple frequency check is sketched after this list).
2) First identify the "game algorithm" from the source code in order to spot issues in the game application.
3) Verify the source code for appropriate use of random numbers and error handling (only if you have access to the source code).
4) Validate and evaluate the game's predefined rules.
5) Verify the consistency of the game rules.
6) Make sure offensive content or material is never displayed.
7) Regularly check game history and system event logs.
8) Make sure game outcomes are displayed for a reasonable time.
9) For both single-player and multi-player games, validate bandwidth requirements and the client software.
10) Verify minimum/maximum limits for bets, deposits and other critical game values.
11) Verify correct game and system operation after game failover and recovery.
12) Always verify all reports for data accuracy: check dates, times, number of wins, money, etc.
13) Test system requirements. This is very important in game testing: verify all infrastructure and security requirements, game equipment, the network, and game synchronization with the OS.
14) Make sure sufficient information is always available to users in order to protect game players.
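As a toy illustration of RNG evaluation (a real evaluation uses proper statistical tests; this only flags gross frequency skew), assume a hypothetical spin() function that should return each of six symbols with roughly equal probability:

# Toy RNG frequency check; spin() is a stand-in for the game's outcome mapper.
import random
from collections import Counter

SYMBOLS = ["cherry", "bell", "bar", "seven", "lemon", "star"]

def spin() -> str:
    return random.choice(SYMBOLS)   # hypothetical game outcome

TRIALS = 60_000
counts = Counter(spin() for _ in range(TRIALS))
expected = TRIALS / len(SYMBOLS)

# Flag any symbol whose observed frequency deviates more than 5% from expected.
for symbol in SYMBOLS:
    deviation = abs(counts[symbol] - expected) / expected
    assert deviation < 0.05, f"{symbol} looks skewed: {counts[symbol]} of {TRIALS}"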

Game Testing Jobs:

The gaming field is improving day by day, and a game career as a designer or tester looks very bright. Many game testing professionals make a decent amount of money as video game testers working from home. The present internet generation is bringing massive innovation and scope for growth, and both IT and non-IT people are willing to spend their free time playing online and video games. "Game testing from home" is now a new way to earn money, and we can clearly see gaming becoming part of our daily activities.
If you are trying hard to get into the gaming industry, you need the interest and passion that drive you to success. With the addition of vast and complex new games, game QA is no longer less technical than general software QA. Game testing was widely considered a "stepping stone" position, but it is now becoming a full-time career opportunity for experienced testers.
If you have a passion for games and a good understanding of testing methodologies, becoming a successful game tester will not be difficult for you!
 
If you are in the game testing industry, your valuable input will help our readers learn more about game testing, so please share your thoughts and tips on game testing in the comments below.

Top Ten Tips for Bug Tracking!!!

1. Remember that the only person who can close a bug is the person who found it in the first place. Anyone can resolve it, but only the person who saw the bug can really be sure that what they saw is fixed.

2. A good tester will always try to reduce the reproduction steps to the minimum needed to reproduce the bug; this is extremely helpful for the programmer who has to find it.

3. There are many ways to resolve a bug. A developer can resolve a bug as fixed, won't fix, postponed, not reproducible, duplicate, or by design.

4. You will want to keep careful track of versions. Every build of the software that you give to testers should have a build ID number so that the poor tester does not have to retest the bug on a version of the software where it was not even supposed to be fixed.

5. "Not reproducible" means that nobody could ever reproduce the bug. Programmers often use this when the bug report is missing the reproduction steps.

6. If you are a programmer and you are having trouble getting testers to use the bug database, just don't accept bug reports by any other method. If your testers are used to sending you bug reports by email, bounce the emails back to them with a brief message: "Please put this in the bug database; I can't keep track of emails."

7. If you are a tester and you are having trouble getting programmers to use the bug database, just don't tell them about bugs: put them in the database and let the database email them.

8. If you are a programmer and only some of your colleagues use the bug database, just start assigning them bugs in the database. Eventually they will get the hint.

9. If you are a manager and nobody seems to be using the bug database that you installed at great expense, start assigning new features to people through it. A bug database is also a great "unimplemented feature" database.

10. Avoid the temptation to add new fields to the bug database. Every month or so, somebody will come up with a great idea for a new field to put in the database. You get all kinds of clever ideas, for example keeping track of the file where the bug was found, what percentage of the time the bug is reproducible, how many times the bug occurred, or which exact versions of which DLLs were installed on the machine where the bug happened. It is very important not to give in to these ideas. If you do, your new bug entry screen will end up with a thousand fields that you need to supply, and nobody will want to enter bug reports any more. For the bug database to work, everybody needs to use it, and if entering bugs "formally" is too much work, people will go around the bug database. A minimal sketch of a lean bug record and its life cycle follows below.
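To tie several of these tips together (few fields per tip 10, explicit resolutions per tip 3, a build ID per tip 4, and the close rule from tip 1), here is a minimal sketch of a lean bug record; the field and status names are my own, not those of any particular tracker:

# Minimal bug record sketch: few fields (tip 10), explicit resolutions (tip 3),
# a build ID (tip 4), and only the reporter may close the bug (tip 1).
from dataclasses import dataclass

RESOLUTIONS = {"fixed", "wont_fix", "postponed", "not_reproducible",
               "duplicate", "by_design"}

@dataclass
class Bug:
    summary: str
    steps_to_reproduce: str
    reported_by: str
    found_in_build: str
    assigned_to: str = ""
    resolution: str = ""          # empty until a developer resolves it
    status: str = "open"          # open -> resolved -> closed

    def resolve(self, resolution: str, fixed_in_build: str = ""):
        assert resolution in RESOLUTIONS, f"unknown resolution: {resolution}"
        self.resolution, self.status = resolution, "resolved"
        self.fixed_in_build = fixed_in_build

    def close(self, closed_by: str):
        # Tip 1: only the person who found the bug can close it.
        assert closed_by == self.reported_by, "only the reporter may close"
        self.status = "closed"

bug = Bug("Crash on save", "1. Open file 2. Press Save",
          reported_by="asha", found_in_build="build-1.4.2", assigned_to="dev1")
bug.resolve("fixed", fixed_in_build="build-1.4.3")
bug.close(closed_by="asha")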

Definitions

Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria.
Accessibility Testing: Verifying that a product is accessible to people with disabilities (e.g. users who are deaf, blind, or have cognitive impairments).
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.
Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments.
Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality.
Automated Testing:
  • Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing.
  • The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
B
Backus-Naur Form: A metalanguage used to formally describe the syntax of a language.
Basic Block: A sequence of one or more consecutive, executable statements containing no branches.
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests.
Basis Set: The set of tests derived using basis path testing.
Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control.
Benchmark Testing: Tests that use representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration.
Beta Testing: Testing of a pre-release version of a software product, conducted by customers.
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)
Boundary Value Analysis: In boundary value analysis, test cases are generated using the extremes of the input domain, e.g. maximum, minimum, just inside/outside boundaries, typical values, and error values. BVA is similar to Equivalence Partitioning but focuses on "corner cases".
Branch Testing: Testing in which all branches in the program source code are tested at least once.
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.
C
CAST: Computer Aided Software Testing.
Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.
Cause Effect Graph: A graphical representation of inputs and the associated outputs effects which can be used to design test cases.
Code Complete: Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.
Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
Coding: The generation of source code.
Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
Component: A minimal software item for which a separate specification is available.
Component Testing: See Unit Testing.
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing.
D
Data Dictionary: A database that contains definitions of all data items defined during analysis.
Data Flow Diagram: A modeling notation that represents a functional decomposition of a system.
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
Debugging: The process of finding and removing the causes of software failures.
Defect: Nonconformance to requirements or to the functional/program specification.
Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
Depth Testing: A test that exercises a feature of a product in full detail.
Dynamic Testing: Testing software through executing it. See also Static Testing.
E
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution.
End-to-End testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
Error: A mistake in the system under test; usually but not always a coding mistake on the part of the developer.
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.
  • Testing the features and operational behavior of a product to ensure they correspond to its specifications.
  • Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.
G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particular module, functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
H
High Order Tests: Black-box tests conducted once the software has been integrated.
I
Independent Test Group (ITG): A group of people whose primary responsibility is software testing.
Inspection: A group review quality improvement process for written material. It consists of two aspects; product (document itself) improvement and process improvement (of both document production and inspection).
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
Installation Testing: Confirms that the application under test installs, upgrades, and uninstalls correctly on the supported configurations, including under adverse conditions such as insufficient disk space or an interrupted installation.
J
K
L
Load Testing: See Performance Testing.
Localization Testing: Testing that verifies software has been correctly adapted (localized) for a specific locale, including its language, formats, and conventions.
Loop Testing: A white box testing technique that exercises program loops.
M
Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
Monkey Testing: Testing a system or application on the fly, i.e. running just a few random tests here and there, to ensure the system or application does not crash.
Mutation Testing: Testing in which bugs are purposely introduced into the application in order to check whether the existing tests detect them.
N
Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.
N+1 Testing: A variation of Regression Testing. Testing conducted with multiple cycles in which errors found in test cycle N are resolved and the solution is retested in test cycle N+1. The cycles are typically repeated until the solution reaches a steady state and there are no errors. See also Regression Testing.
O
P
Path Testing: Testing in which all paths in the program source code are tested at least once.
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
Q
Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality.
Quality Management: That aspect of the overall management function that determines and implements the quality policy.
Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management.
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
R
Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
Ramp Testing: Continuously raising an input signal until the system breaks down.
Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
S
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load.
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for the software.
Software Testing: A set of activities conducted with the intent of finding errors in software.
Static Analysis: Analysis of a program carried out without executing the program.
Static Analyzer: A tool that carries out static analysis.
Static Testing: Analysis of a program carried out without executing the program.
Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.
Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.
T
Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.
Testing:
  • The process of exercising software to verify that it satisfies specified requirements and to detect errors.
  • The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).
  • The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.
Test Automation: See Automated Testing.
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.
Test Case:
  • Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
  • A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. roughly as many lines of test code as production code.
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness.
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver.
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829.
Test Procedure: A document providing detailed instructions for the execution of one or more test cases.
Test Scenario: Definition of a set of test cases or test scripts and the sequence in which they are to be executed.
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation.
Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction.
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases.
U
Usability Testing: Testing the ease with which users can learn and use a product.
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.
User Acceptance Testing: A formal product evaluation performed by a customer as a condition of purchase.
Unit Testing: Testing of individual software components.
V
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing.
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing.
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files), can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner.
W
Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review.
White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.