Aug 20, 2012
Task-Based Software Testing
At a software testing conference held in India, the topic of discussion was "How do we test software's impact on a system's mission effectiveness?"
Most customers want systems that are:
- On-time
- Within budget
- Satisfying user requirements
- Reliable
The latter two concerns (of the four above) can be refined into two broad objectives for operational testing:
1. Verify that a system’s performance satisfies its requirements as specified in the Operational Requirements Document and related documents.
2. Identify any serious deficiencies in the system design that need correction before full-rate production.
Following the path from the system level down to the software, two generic reasons for testing software emerge:
- Test for defects so they can be fixed: debug testing
- Test for confidence in the software: operational testing
Debug testing is usually conducted using a combination of functional and structural test techniques. The goal is to locate defects in the most cost-effective manner and correct them, ensuring the performance satisfies the user requirements.
Operational testing is based on the expected usage profile for a system. The goal is to build confidence in the system by demonstrating that it is reliable for its intended use.
Task-based testing is a variation on operational testing. The particular techniques are not new; rather, it leverages commonly accepted techniques by placing them within the context of current operational and acquisition strategies.
Task-based testing, as the name implies, uses task analysis. This begins with a comprehensive framework for all of the tasks that the system will perform. Through a series of hierarchical task analyses, each unit within the service creates a Mission Essential Task List (METL).
These lists describe only "what" needs to be done, not "how" or "who." Further task decomposition identifies the systems and people required to carry out a mission essential task. Another level of decomposition yields the system tasks (i.e., functions) a system must provide. This is, naturally, the level of most interest to developers and testers. From a tester’s perspective, this framework identifies the most important functions to test by correlating functions against the mission essential tasks a system is designed to support.
This is distinctly different from the typical functional testing or "test-to-spec" approach, where each function or specification carries equal importance. Ideally, there should be no function or specification that does not contribute to a task, but in reality there are often requirements, specifications, and capabilities that do not support a mission essential task, or support one only minimally. Using task analysis, one identifies the functions that impact the successful completion of mission essential tasks and highlights them for testing.
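To make this concrete, here is a minimal sketch of how a tester might rank system functions by the mission essential tasks they support. All task and function names, and the simple count-based scoring rule, are invented for illustration; they are not from any real task analysis.

```python
# Illustrative sketch: ranking system functions by how many
# mission essential tasks (METs) they support. Names and the
# "count of METs" scoring rule are assumptions for this example.

# Each mission essential task maps to the system functions that support it.
met_to_functions = {
    "Plan mission route": ["route_calculation", "map_display"],
    "Communicate with units": ["message_send", "message_receive", "encryption"],
    "Report status": ["message_send", "status_aggregation"],
}

# Invert the mapping: for each function, which METs does it support?
function_to_mets = {}
for met, functions in met_to_functions.items():
    for fn in functions:
        function_to_mets.setdefault(fn, set()).add(met)

# Rank functions by the number of METs they support; ties broken by name.
ranking = sorted(function_to_mets.items(),
                 key=lambda item: (-len(item[1]), item[0]))

for fn, mets in ranking:
    print(f"{fn}: supports {len(mets)} mission essential task(s)")
```

Here message_send ranks highest because two separate mission essential tasks depend on it, which is exactly the kind of function this framework tells the tester to exercise first.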
Operational Profiles: The process of task analysis has great benefit in identifying which functions are the most important to test. However, the task analysis identifies only the mission essential tasks and functions, not their frequency of use. Greater utility can be gained by combining the mission essential tasks with an operational profile: an estimate of the relative frequency of inputs that represents field use. This has several benefits:
1. Offers a basis for reliability assessment, so that the developer not only has the assurance of having tried to improve the software, but also an estimate of the reliability actually achieved.
2. Provides a common base for communicating with the developers about the intended use of the system and how it will be evaluated.
3. When software testing schedules and budgets are tightly constrained, this design yields the highest practical reliability, because any failures observed will tend to be the high-frequency failures.
The first benefit has the advantage of applying statistical techniques in:
- The design of tests
- The analysis of resulting data
Software reliability estimation methods (reliability growth models, for example) are available to estimate both the expected field reliability and the rate of growth in reliability. This directly supports an answer to the question about software’s impact on a system’s mission effectiveness.
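As a rough illustration of how an operational profile can drive test selection, consider the sketch below. The functions and their relative frequencies are invented assumptions, not data from any real system.

```python
# Illustrative sketch only: selecting test targets in proportion to an
# operational profile. The functions and their relative frequencies of
# field use are invented assumptions, not data from a real system.
import random

operational_profile = {
    "message_send": 0.40,
    "message_receive": 0.35,
    "route_calculation": 0.15,
    "status_aggregation": 0.10,
}

def next_test_target(profile):
    """Pick the next function to exercise, weighted by expected field use."""
    functions = list(profile)
    weights = [profile[name] for name in functions]
    return random.choices(functions, weights=weights, k=1)[0]

# Profile-driven selection exercises high-frequency functions most often,
# so the failures users would hit most often are the ones found first.
random.seed(42)
counts = {}
for _ in range(1000):
    target = next_test_target(operational_profile)
    counts[target] = counts.get(target, 0) + 1
print(counts)  # roughly proportional to the profile weights
```

Because test effort lands where field use lands, the pass rate observed during such testing doubles as a crude estimate of the reliability the user will actually experience.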
Operational profiles are criticized as being difficult to develop. However, as part of its current operations and acquisition strategy, an organization often develops an operational profile inherently. At higher levels, this is reflected in the following documents:
- Analysis of Alternatives
- Operational Requirements Document (ORD)
- Operations Plans
- Concept of Operations (CONOPS)
Closer to the tester’s realm is the interaction between the user and the developer, which the current acquisition strategy encourages. The tester can act as a facilitator, helping the user refine his or her needs while providing the developer insight into expected use. This highlights the second benefit above: communication between the user, developer, and tester.
Despite years of improvement in the software development process, one still sees systems that have gone through intensive debug testing (statement coverage, branch coverage, etc.) and "test-to-spec" testing, yet still fail to satisfy the customer’s concerns mentioned above. By involving the customer early in the process to develop an operational profile, the functions most needed to support a task will be developed and tested first, increasing the likelihood of satisfying the customer’s four concerns. This third benefit is certainly of interest in today’s environment of shrinking budgets and manpower, shorter schedules (spiral acquisition), and greater demands on a system.
Task-Based Software Testing
Thus, task-based software testing is the combination of a task analysis and an operational profile. The task analysis helps partition the input domain into mission essential tasks and the system functions that support them. Operational profiles, based on these tasks, are developed to further focus the testing effort.
Debug Testing
Debug testing is directed at finding as many bugs as possible, either by sampling all situations likely to produce failures (using methods such as code coverage and specification-based criteria) or by concentrating on those situations considered most likely to produce failures (using methods such as stress testing and boundary testing).
Unit testing methods are typical examples of debug testing methods. These include techniques such as statement testing, branch testing, and basis path testing. Typically associated with these methods are criteria based on coverage; thus, they are sometimes referred to as coverage methods. Debug testing is based on a tester’s hypothesis of the likely types and locations of bugs.
Consequently, the effectiveness of this method depends heavily on whether the tester’s assumptions are correct.
If a developer and/or tester has a process in place to correctly identify the potential types and locations of bugs, then debug testing may be very effective at finding them. If a "standard" or "blind" approach is used, such as statement testing for its own sake, the testing effort may be ineffectual and wasted. A subtle hazard of debug testing is that it may uncover many failures yet waste test and repair effort without notably improving the software, because the failures it finds occur at a negligible rate during field use.
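As one concrete example of a debug testing method, the sketch below applies boundary testing to a small validation function. The function under test and its valid range are invented for illustration.

```python
# Illustrative boundary-value testing sketch. The function under test
# (a percentage validator) is invented for this example.

def is_valid_percentage(value):
    """Accept integers in the inclusive range 0..100."""
    return isinstance(value, int) and 0 <= value <= 100

# Boundary testing picks values at and just beyond each boundary,
# where off-by-one defects typically hide.
boundary_cases = [
    (-1, False),   # just below the lower boundary
    (0, True),     # on the lower boundary
    (1, True),     # just above the lower boundary
    (99, True),    # just below the upper boundary
    (100, True),   # on the upper boundary
    (101, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    actual = is_valid_percentage(value)
    assert actual == expected, f"{value}: expected {expected}, got {actual}"
print("All boundary cases passed.")
```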
Integration of Test Methods
Historically, a system’s developer relied on debug testing (which includes functional or "test-to-spec" testing). Testing from the perspective of how the system would be employed was not seen until an operational test agency (OTA) became involved. Even on the occasions when developmental test took on an operational flavor, this was viewed as too late in the process. This historical approach to testing amplifies the weaknesses of both operational and debug testing. I propose that task-based software testing be moved to a much earlier point in the acquisition process. This has the potential of countering each method’s weaknesses with the other’s strengths.
Conclusion: Task-based software testing is a combination of demonstrated, existing methods (task analysis and operational testing). Its strength lies in matching well with the current operational strategy of mission essential tasks and the acquisition community’s goal to deliver operational capability quickly. By integrating task-based software testing with existing debug testing, the risk of failing to meet the customer’s four concerns (on-time, within budget, satisfies requirements, and is reliable) can be reduced.
Aug 16, 2012
Basics of Quality Assurance (QA)
Quality Assurance is the most important factor in any business or industry.
The same is applicable to software development.
Spending some additional money to get a high-quality product will ultimately yield more profit.
That said, it is not true that expensive products are necessarily high-quality products. Even an inexpensive product can be high quality if it meets the customer’s needs and expectations.
The quality assurance cycle consists of four steps: Plan, Do, Check, and Act. These steps are commonly abbreviated as PDCA.
The four quality assurance steps within the PDCA model are:
- Plan: Establish objectives and processes required to deliver the desired results.
- Do: Implement the process developed.
- Check: Monitor and evaluate the implemented process by testing the results against the predetermined objectives.
- Act: Apply actions necessary for improvement if the results require changes.
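For readers who think in code, the PDCA cycle can be pictured as a simple improvement loop. The sketch below is purely illustrative; the objective, process, and measured result are invented placeholders.

```python
# Illustrative sketch only: the PDCA cycle as a simple improvement loop.
# The objective, process, and measured result are invented placeholders.

def plan():
    """Plan: establish the objective and the process meant to reach it."""
    return {"objective_pass_rate": 0.95, "process": "run nightly regression suite"}

def do(current_plan):
    """Do: carry out the planned process and collect a result (stubbed here)."""
    return 0.91  # e.g., the pass rate measured from the nightly run

def check(current_plan, result):
    """Check: compare the measured result against the planned objective."""
    return result >= current_plan["objective_pass_rate"]

def act(current_plan):
    """Act: adjust the process when the objective was missed."""
    current_plan["process"] += " + triage new failures each morning"
    return current_plan

current_plan = plan()
for cycle in range(3):                  # a few turns of the PDCA wheel
    result = do(current_plan)
    if check(current_plan, result):
        break                           # objective met; hold the gains
    current_plan = act(current_plan)    # improve, then go around again
```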
To get appropriately high-quality output in software development, we need to follow an SQA (Software Quality Assurance) process in each phase of the software development lifecycle (Planning, Requirements Analysis, Design, Development, Integration & Test, Implementation, and Maintenance).
We should follow the solutions below to avoid many common software development problems.
- Solid requirements: clear, complete, attainable, detailed, and testable requirements that are agreed upon by all players (customer, developers, and testers).
- Realistic schedules: allocate enough time for planning, design, testing, bug fixing, re-testing, and documentation.
- Adequate testing: start testing early, and re-test after fixes or changes.
- Avoid unnecessary changes to the initial requirements once coding has started.
- Require walk-throughs and inspections.
Aug 8, 2012
How to Build Quality Applications
Testing is an excellent means to build confidence in the quality of software before it’s deployed in a data center or released to customers. It’s good to have confidence before you turn an application loose on the users, but why wait until the end of the project? The most efficient form of quality assurance is building software the right way, right from the start. What can software testing, software quality, and software engineering professionals do, starting with the first day of the project, to deliver quality applications?
The first step in building a quality application is to know what you need to build. An amazingly large number of projects get underway without clarity amongst the project stakeholders about what the requirements are. According to Capers Jones’ studies, as many as 45% of defects are introduced in specifications. One working definition for quality is “fitness for use.” If we’re unclear on the intended uses, how can we build something that is fit for use? Not only do we need some specification of the requirements—whether formal or informal—but we should also conduct a thorough project stakeholder review of this specification to look for defects and to build consensus and understanding.
Another important early step is properly organizing the project. The overall approach to application development is set by the software development lifecycle (SDLC) model. There are four main varieties of SDLC model in common use today:
1. Sequential (also called waterfall or V-model): In this approach, the team proceeds through a sequence of phases, starting with requirements, then design, then implementation, and then multiple levels of testing. This model works best when you can specify requirements that will change very little if at all over the course of the project. It also works best when you can plan the project with great accuracy, which typically means it’s similar to a project the team’s done before.
2. Iterative (also called incremental or evolutionary): In this approach, the high-level requirements are grouped together into iterations (or increments), often based on technical risk, business importance, or both. The system is then designed, built, and tested group-by-group. This model works well if you need to deliver the most important features by a rigid deadline but can accept some features arriving later. This model can tolerate some change in the plan (often due to uncertainty or change in requirements) and still deliver the key features on time, which is not true of the sequential models.
3. Agile (such as Scrum and XP): In Agile approaches, each iteration is compressed to as short as two weeks. Documentation is minimized and change is expected from one iteration to the next, and within each iteration. Various rules help prevent devolution into churn and chaos. This model works when applied with discipline, and its emphasis on accommodating change allows it to produce results even in rapidly-evolving situations.
4. Code-and-fix: This approach is actually the absence of an approach. It involves starting the development of the application without any requirements, without a clear plan, without anything but a deadline, in many cases. This model can only work for the simplest, shortest, and least-risky of development projects.
Now, the first three of these models exhibit significant variation in practice. You should feel free to intelligently tailor the model to your specific needs, but beware of violating certain aspects of the model that enable other features of the model.
With the project properly organized and the requirements clearly understood (whether for the whole project or only for this iteration), design and coding can start. Of course, coding presents not only the opportunity to create great new features, but also the risk that the programmer will create great big bugs. To mitigate this risk, there are three things every programmer should do with every piece of code she writes:
1. Unit testing: The programmer should test every line of code, every branch, every condition, and every loop (a minimal sketch follows this list). Higher levels of testing such as system test often touch half (or less) of the code, and any untested code is a potential hiding place for bugs. New tools, both commercial and freeware, make the job of unit testing much easier than it was in the past.
2. Static analysis: Even code that passes unit tests can still contain latent defects, maintainability problems, and security vulnerabilities. Static analysis can cheaply and quickly find bugs that would take hours to find and remove during higher levels of testing. The programmer now has a wide variety of tools available to help with this task, too.
3. Code review: Once a given unit of code is written, tested, and analyzed, having a walkthrough or technical review of the code among the programming team is a great way to catch most of the remaining bugs and to ensure good understanding of how the program works across the entire team. Studies at Motorola show that as few as three experienced programmers (including the author), following a rigorous inspection process, can find as many as 90% of remaining bugs.
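To illustrate the unit testing step above, here is a minimal sketch in the pytest style. The function under test and its tests are invented for this example; the point is that every branch, including the error path, gets exercised.

```python
# Illustrative unit test sketch (pytest style; run with: pytest test_grades.py).
# The function under test and the test values are invented for this example.
import pytest

def letter_grade(score):
    """Map a numeric score in 0..100 to a letter grade."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range: %s" % score)
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

def test_each_branch():
    # One value per branch, so every condition in letter_grade is exercised.
    assert letter_grade(95) == "A"
    assert letter_grade(85) == "B"
    assert letter_grade(75) == "C"
    assert letter_grade(60) == "F"

def test_out_of_range_raises():
    # The error path is a branch too, and deserves its own test.
    with pytest.raises(ValueError):
        letter_grade(101)
```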
We can be very confident indeed in each unit of code if programmers go through these three steps prior to checking their code into the source code repository.
Even with high quality units, there remains the risk of integration bugs. Integration bugs occur when two or more interoperating units don’t communicate, share data, or transfer control properly. To help mitigate integration risk, the project team can use continuous integration. This involves checking in code as it’s finished, compiling and building that code together, and running automated tests against the code to check for integration bugs. As with unit testing and static analysis, a variety of tools exist to help with this process now.
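As a minimal sketch of the continuous integration idea, the script below runs each build and test step in order and fails the pipeline on the first error. The specific commands and directory names are placeholders; real CI servers such as Jenkins express the same loop in their own configuration.

```python
# Illustrative sketch only: a minimal continuous-integration check that
# builds the code and runs the automated test suite, failing the pipeline
# on the first error. Commands and directory names are placeholders.
import subprocess
import sys

STEPS = [
    ["python", "-m", "compileall", "src"],  # "build": byte-compile the sources
    ["python", "-m", "pytest", "tests"],    # run the automated test suite
]

def run_ci():
    for command in STEPS:
        print("Running:", " ".join(command))
        result = subprocess.run(command)
        if result.returncode != 0:
            print("CI check failed at:", " ".join(command))
            sys.exit(result.returncode)
    print("All CI checks passed.")

if __name__ == "__main__":
    run_ci()
```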
When we deliver quality applications—applications that are fit for use—we get to enjoy positive outcomes such as satisfied users and customers, improved reputation, more revenue or resources, and greater job satisfaction. In this article, we’ve seen that the pathway to delivering quality and enjoying those outcomes starts on the first day of the project and continues to the very end. Good requirements. Proper organization. Quality-focused programming. Continuous integration. And, once the application is ready, we can go through formal system, system integration, and user acceptance testing. If you’ve followed the steps outlined in this article, you’ll be amazed at how smoothly those tests go, and how quickly and confidently you can put a quality application into your data center.
Source: RBCS