Is it possible to test an application without a test environment? To ‘test’ as early as possible in the application lifecycle, before dynamic testing can even start? That’s something I’ve been looking into over the last few weeks: collecting information about doing as little testing as possible while still getting the result needed, and without taking on more or higher risks. This post is part of my research.
At this client the test team was asked to execute the testing activity without a testing environment. They were creating a new application for the client, but the client didn’t want to invest in a new testing environment. The only environment available was production. Yet the application was high profile and needed to be tested thoroughly. One could argue for testing on the production environment, as it was a new application, but that was not possible: it couldn’t be left ‘open’ for a long period, and the interfaces were live as well, which could have become tricky. So they decided on another approach.
Physical availability of the business users
The team decided to pursue a fully risk-based approach: they were going to conduct very thorough and extensive risk assessments to determine how to check the artifacts for defects.
So they went ahead and held risk analysis sessions with the client. That meant business users needed to be physically available throughout the risk assessments, something that is very, very difficult to organize for most test projects. But the client accepted, because the test manager could show that if these people were not available for the sessions, the project (and product) risks would skyrocket as a result of the missing test environment. All business users were available, as were people from across the delivery process, such as hardware and security specialists; everyone involved in the project. Some 50-60 people attended the risk assessments, including the Client Sponsors, Client User team, Project Delivery teams, and Live Support staff.
Risk Analysis Workshop
The actual setup consisted of workshops to determine the risks and requirements. These workshops were split up. First, a workshop was held to determine the client’s needs: the requirements. All people involved decided together on what the software needed to do and how it should work, both functional and non-functional.
Next came various breakout sessions. In small groups, everyone involved tried to formulate very detailed risks. The key question of the sessions was ‘what would hurt you the most?’; in other words, what pain would you feel if the application failed, what would the impact be? In those sessions, specific areas of expertise were distributed across the groups. As a result the risks were of very high quality and very detailed; initially 110 risks were drawn up, and later interviews added another 21. At the end, everyone had to prioritize the risks by impact.
After these breakout sessions, a rotation was done so that another group could look at each group’s risks and add their ideas, using different colors to show the variations. Any discrepancies were discussed at the end. The complete risk assessment considered factors such as the number of affected users, system recoverability, external visibility of the problem, etc. All stakeholders left the workshops with the same view of the risk distribution.
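The post names the impact factors but not how they were combined. As a purely illustrative sketch (the 1-5 scale and the max() aggregation are my assumptions, not the team’s actual scheme), one could score each factor and let the worst one dominate:

```python
# Purely illustrative impact scoring over the factors named above;
# the 1-5 scale and the max() aggregation are assumptions.
FACTORS = ("affected_users", "recoverability", "external_visibility")

def impact_score(scores: dict) -> int:
    """Score each factor 1 (negligible) to 5 (severe); the worst
    factor dominates, so aggregate with max()."""
    return max(scores[f] for f in FACTORS)

risk = {"affected_users": 4, "recoverability": 2, "external_visibility": 5}
print(impact_score(risk))  # -> 5: a highly visible failure hurts most
```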
Likelihood determined separately
As said, these sessions only determined the impact of the risks; likelihood wasn’t determined there, as that was the job of the IT and delivery teams. About two weeks after the first risk analysis, the technical people decided, in the same type of sessions, what the likelihood of these risks would be. These people were asked because assessing likelihood is a matter of technical complexity, familiarity with the technology, and development staff capability. So business users were not involved, as they didn’t have the needed view.
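The post doesn’t give the scoring model, but the classic way to combine the two assessments is exposure = impact × likelihood. A minimal sketch, assuming 1-5 scales for both and made-up example risks:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    impact: int      # set by the business users (1-5 assumed)
    likelihood: int  # set by the IT/delivery teams (1-5 assumed)

    @property
    def exposure(self) -> int:
        # Classic risk exposure: impact times likelihood.
        return self.impact * self.likelihood

# Hypothetical example risks, for illustration only.
risks = [
    Risk("payment interface rejects valid orders", impact=5, likelihood=3),
    Risk("report layout slightly off", impact=1, likelihood=4),
]

# Highest exposure first; this ordering later drives review depth,
# design effort, coding order and test priority.
for r in sorted(risks, key=lambda r: r.exposure, reverse=True):
    print(r.exposure, r.description)
```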
Where more detail was needed, interviews provided the missing information. Business users were interviewed later in the design process to get a lower level of granularity on the high exposure risks. This added another 21 risks to the playing field, prompting the team to go through their risk analysis again.
![Risks1](https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_uemN5XwxckD_ZhCrR0nAfOGxoYBGvo5dJPnqYXh8uewgn0xN4Amppba5_U-_TtxD0NkWf0P8TcOqv4YYy7atLNmOQF0GjTlRd5kSrUW1N95_KokQn8egQFd9stlMhnntpqBF9g6G4tpw=s0-d)
Evaluations and quality gates
The technical team then decided how they were going to test the artifacts connected with these risks. By artifacts they meant the documentation or prototypes of the application. Because the whole technical team decided on the measures to be taken, they were bought in: it wasn’t the quality team alone that decided how to test, but the whole team. They had to look at each risk, the related artifact, and how it could be checked. As a result, documentation was subjected to evaluations (reviews and inspections) and certain quality gates were set.
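The post doesn’t detail the gates themselves. One hedged way to picture the idea (the tiers, names and thresholds are all assumptions) is to tie the required depth of evaluation to the exposure of the risks attached to an artifact, and block the artifact until that evaluation has passed:

```python
# A hedged sketch of tiered evaluations and a quality gate; the tiers
# and thresholds are assumptions, not the team's actual gates.
def required_evaluation(exposure: int) -> str:
    if exposure >= 15:
        return "formal inspection"   # most rigorous
    if exposure >= 6:
        return "technical review"
    return "peer walkthrough"        # lightest touch

def gate_passed(artifact: dict) -> bool:
    """The gate blocks an artifact until it has had the evaluation
    its highest-exposure risk demands."""
    needed = required_evaluation(max(artifact["risk_exposures"]))
    return needed in artifact["evaluations_done"]

design_doc = {"risk_exposures": [20, 8], "evaluations_done": ["technical review"]}
print(gate_passed(design_doc))  # False: exposure 20 demands a formal inspection
```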
Evaluations focused on documentation quality, and risk exposure steered effort across the whole lifecycle:
- Documents describing high risk exposure functionality or system components received more detailed reviews.
- In some instances particular sections of a document received extra attention due to the risk profile.
- Important defects were resolved in Requirements and Design documents, so they never got into the code (no faults forward).
- More effort was put into designing the components that had the highest risk exposure.
- Coding of high risk exposure technical components was completed first.
- Code reviews were more detailed for high risk exposure components.
- Each test team (including developers) distributed their effort by direct reference to the risks.
- Tests were designed based on the risk exposure. Functionality where no risks were raised was included as low risk.
- Selection of test techniques was based on risk as well as test stage, e.g. equivalence partitioning or random testing for low risk and boundary value analysis for high risk exposure (see the sketch after this list).
- More time was spent writing test cases for high risk exposure areas and much less effort spent documenting test cases for low risk.
- Tests for high risk exposure areas were executed first; medium risk tests were left to the second iteration, and low risk tests were run last.
- Regular test reporting statistics provided detail but also grouped together High, Medium and Low risk tests within a test phase.
- Project management could see a summary of progress and pass rates by risk exposure.
- Using the test design, business users could work out which specific risks had been mitigated and which hadn’t, but didn’t have to if they didn’t want to.
- Business users could easily understand what the test team had covered and therefore focus their business testing efforts without duplicating what was already done.
- A high level of trust was formed between the delivery team, test team and the business users.
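To make the technique selection and execution ordering concrete, here is a minimal sketch. Only the pairing of techniques with risk levels comes from the list above; the numeric threshold and the example areas are assumptions:

```python
# Sketch of technique selection and execution ordering by risk exposure.
# The pairing (boundary value analysis for high risk, equivalence
# partitioning or random testing for low risk) comes from the approach
# above; the threshold of 12 is an assumption for illustration.
def technique_for(exposure: int) -> str:
    if exposure >= 12:
        return "boundary value analysis"
    return "equivalence partitioning / random testing"

# Hypothetical test areas with their risk exposures.
areas = [
    {"name": "submit order", "exposure": 20},
    {"name": "monthly report", "exposure": 4},
    {"name": "edit user profile", "exposure": 9},
]

# High exposure first, low exposure last, mirroring the iterations above.
for a in sorted(areas, key=lambda a: a["exposure"], reverse=True):
    print(f'{a["name"]}: {technique_for(a["exposure"])}')
```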
The result was a shift in quality effort from most effort at the end (Testing and Acceptance) to one where quality measures started early in the application lifecycle.
![Risks2](https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_vij_-rYz3gzzQJyN_EdOUqzZNeEW0FraIW2fVhXpaMPVysXjOdxWP3suimO4BjVl0AMulDWYqnI-DbnA7w6SbYxsT6yNqNCYGc2hd1MKOY52D0bRR-iOC7Ki3HJWeAo4jhPi6Dvm5XPQ=s0-d)
Blue is the normal, traditional situation, with most effort at the end; green is the new situation, where most effort is early in the lifecycle.
Note: why is Acceptance still bigger than Testing? Acceptance is an activity that must be done; it’s legally required, and therefore the effort is bigger compared to testing.
End of project
In the end the whole team tested the end-to-end chain of the application, the real business users were involved, and the delivery team found 90% of the defects before any dynamic testing was done! Over 1500 defects were found during evaluations and code reviews. The delivery was successful: go-live was on time, and the business users agreed that sufficient testing had been done. No production incidents were encountered during the first 30 days after go-live.
What about the costs? Project financials were better than estimated; the work was delivered within the agreed budget.
So not only was the quality high (enough): costs were down and time to market was as planned. And the risks were down! Just by doing an extensive risk analysis.