Jul 31, 2012

Risk Analysis

In this tutorial you will learn about risk analysis, risk assessment, and business impact analysis, along with the major categories of software risk: product size, business impact, customer-related, process, technical, technology, development environment, and staff size and experience risks.
Risk analysis is one of the important concepts in the software product/project life cycle. It is broadly defined to include risk assessment, risk characterization, risk communication, risk management, and policy relating to risk. Risk assessment is also called security risk analysis.

Technical Definitions:

Risk Analysis: A risk analysis involves identifying the most probable threats to an organization and analyzing the related vulnerabilities of the organization to these threats.
Risk Assessment: A risk assessment involves evaluating existing physical and environmental security and controls, and assessing their adequacy relative to the potential threats to the organization.
Business Impact Analysis: A business impact analysis involves identifying the critical business functions within the organization and determining the impact of not performing the business function beyond the maximum acceptable outage. Types of criteria that can be used to evaluate the impact include: customer service, internal operations, legal/statutory and financial.
Risks for a software product can be categorized into various types. Some of them are:

Product Size Risks:

The following risk item issues identify some generic risks associated with product size:
  • Estimated size of the product, and confidence in that estimate? 
  • Size of database created or used by the product? 
  • Number of users of the product? 
  • Number of projected changes to the requirements for the product?
Risk is high when there is a large deviation between these estimates and previous experience. Each estimate should be compared against the organization's past projects when analyzing risk.

Business Impact Risks:

The following risk item issues identify some generic risks associated with business impact:
  • Effect of this product on company revenue? 
  • Reasonableness of delivery deadline? 
  • Number of customers who will use this product and the consistency of their needs relative to the product? 
  • Number of other products/systems with which this product must be interoperable? 
  • Amount and quality of product documentation that must be produced and delivered to the customer? 
  • Costs associated with late delivery or a defective product?

Customer-Related Risks:

Different customers have different needs and different personalities. Some customers accept whatever is delivered, while others complain about the quality of the product. Some customers have a very good working relationship with the product and its producer; others may be unfamiliar with both. A bad customer represents a significant threat to the project plan and a substantial risk for the project manager.
The following risk item checklist identifies generic risks associated with different customers:
  • Have you worked with the customer in the past? 
  • Does the customer have a solid idea of what is required? 
  • Will the customer agree to spend time in formal requirements gathering meetings to identify project scope? 
  • Is the customer willing to participate in reviews? 
  • Is the customer technically sophisticated in the product area? 
  • Does the customer understand the software engineering process?

Process Risks:

If the software engineering process is ill-defined or if analysis, design and testing are not conducted in a planned fashion, then risks are high for the product.
  • Has your organization developed a written description of the software process to be used on this project? 
  • Are the team members following the software process as it is documented? 
  • Are the third-party coders following a specific software process, and is there a procedure for tracking their performance? 
  • Are formal technical reviews done regularly by both the development and testing teams? 
  • Are the results of each formal technical review documented, including defects found and resources used? 
  • Is configuration management used to maintain consistency among system/software requirements, design, code, and test cases? 
  • Is a mechanism used for controlling changes to customer requirements that impact the software?

Technical Issues:

  • Are specific methods used for software analysis? 
  • Are specific conventions for code documentation defined and used? 
  • Are any specific methods used for test case design? 
  • Are software tools used to support planning and tracking activities? 
  • Are configuration management software tools used to control and track change activity throughout the software process? 
  • Are tools used to create software prototypes? 
  • Are software tools used to support the testing process? 
  • Are software tools used to support the production and management of documentation? 
  • Are quality metrics collected for all software projects? 
  • Are productivity metrics collected for all software projects?

Technology Risk:

  • Is the technology to be built new to your organization? 
  • Does the software interface with new hardware configurations? 
  • Does the software to be built interface with a database system whose function and performance have not been proven in this application area? 
  • Is a specialized user interface demanded by product requirements? 
  • Do requirements demand the use of new analysis, design or testing methods? 
  • Do requirements put excessive performance constraints on the product?

Development Environment Risks:

  • Is a software project and process management tool available? 
  • Are tools for analysis and design available? 
  • Do analysis and design tools deliver methods that are appropriate for the product to be built? 
  • Are compilers or code generators available and appropriate for the product to be built? 
  • Are testing tools available and appropriate for the product to be built? 
  • Are software configuration management tools available? 
  • Does the environment make use of a database or repository? 
  • Are all software tools integrated with one another? 
  • Have members of the project team received training in each of the tools?

Risks Associated with Staff Size and Experience:

  • Are the best people available, and are there enough of them for the project? 
  • Do the people have the right combination of skills? 
  • Is the staff committed for the entire duration of the project?

Jul 26, 2012

How can World Wide Web sites be tested?

Web sites are essentially client/server applications, with web servers and 'browser' clients. Consideration should be given to the interactions between HTML pages, web services, encrypted communications, Internet connections, firewalls, applications that run in web pages (such as JavaScript, Flash, and other plug-in applications), the wide variety of applications that could run on the server side, etc. Additionally, there are a wide variety of servers and browsers, mobile platforms, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing web sites can become a major ongoing effort. Other considerations might include:

  • What are the expected loads on the server, and what kind of performance is required under such loads (e.g., web server response time, database query response times)? What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, load generation appliances, etc.)?
  • Who is the target audience? What kind and version of browsers will they be using, and how extensive should testing be for these variations? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wider variety of connection speeds and browser types)?
  • What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should Flash, applets, etc. load and run)?
  • Will downtime for server and content maintenance/upgrades be allowed? How much?
  • What kinds of security (firewalls, encryption, passwords, functionality, etc.) will be required and what is it expected to do? How can it be tested?
  • What internationalization/localization/language requirements are there, and how are they to be verified?
  • How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
  • What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
  • Which HTML and related specifications will be adhered to? How strictly? What variations will be allowed for targeted browsers?
  • Will there be any standards or requirements for page appearance and/or graphics, 508 compliance, etc. throughout a site or parts of a site?
  • Will any development practices/standards be utilized for web page components and identifiers? These can significantly impact test automation.
  • How will internal and external links be validated and updated? How often?
  • Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
  • How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
  • How are Flash, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Some sources of web site security information include the Usenet newsgroup 'comp.security.announce' and links concerning web site security in the 'Other Resources' section.
Hundreds of web site test tools are available and more than 560 of them are listed in the 'Web Test Tools' section.

Jul 17, 2012

SQL for Testers

The demand for "all-round" testers, i.e. testers who can exercise a system's functionality through traditional testing methods while also demonstrating some technical knowledge, is growing.
The basics of database testing include the following:

1. How to connect to the database.
2. The ability to write simple queries to retrieve data and to manipulate it using DML operations (see the sketch after this list).
3. The functional flow should be very well known.
4. Good knowledge of table-level and column-level constraints; the ability to understand and execute complex join queries is an added advantage.
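As a taste of items 2 and 4, here is a minimal sketch; the orders and customers tables and all of their columns are hypothetical, invented purely for illustration:

    -- Retrieve data, joining two tables (items 2 and 4):
    SELECT o.order_id, c.customer_name, o.order_total
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.order_date = '2012-07-17';

    -- Manipulate data with DML, e.g. to set up or clean up test data:
    INSERT INTO orders (order_id, customer_id, order_total, order_date)
    VALUES (1001, 42, 99.50, '2012-07-17');
    UPDATE orders SET order_total = 109.50 WHERE order_id = 1001;
    DELETE FROM orders WHERE order_id = 1001;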
Contents of this tutorial:
1. INTRODUCTION to Database Testing
  • 1.1 Why back end testing is so important
  • 1.2 Characteristics of back end testing
  • 1.3 Back end testing phases
  • 1.4 Back end test methods
2. STRUCTURAL BACK END TESTS
2.1 Database schema tests
  • 2.1.1 Databases and devices
  • 2.1.2 Tables, columns, column types, defaults, and rules
  • 2.1.3 Keys and indexes
2.2 Stored procedure tests
  • 2.2.1 Individual procedure tests
  • 2.2.2 Integration tests of procedures
2.3 Trigger tests
  • 2.3.1 Update triggers
  • 2.3.2 Insert triggers
  • 2.3.3 Delete triggers
2.4 Integration tests of SQL server
2.5 Server setup scripts
2.6 Common bugs
3. FUNCTIONAL BACK END TESTS
  • 3.1 Dividing back end based on functionality
  • 3.2 Checking data integrity and consistency
  • 3.3 Login and user security
  • 3.4 Stress Testing
  • 3.5 Test back end via front end
  • 3.6 Benchmark testing
  • 3.7 Common bugs
4. Testing the Nightly Downloading and Distribution Jobs
  • 4.1 Batch jobs
  • 4.2 Data downloading
  • 4.3 Data conversion
  • 4.4 Data distribution
  • 4.5 Nightly time window
  • 4.6 Common bugs
5. Testing the Interfaces to Transaction APIs
  • 5.1 APIs' queries to back end
  • 5.2 Outputs of back end to APIs
  • 5.3 Common bugs
6. Other Database testing Issues
  • 6.1 Test tips
  • 6.2 Test tools
  • 6.3 Useful queries
Download the “SQL For Testers” tutorial from here

Before going through chapters 2 to 6, one should know the basics of SQL:
1. What is the difference between DDL, DML, and DCL commands?
DDL stands for Data Definition Language; DDL statements define or change database structure. Some examples:
•CREATE - creates objects in the database
•ALTER - alters the structure of the database
•DROP - deletes objects from the database
•TRUNCATE - removes all records from a table, including all space allocated for the records
•COMMENT - adds comments to the data dictionary
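A minimal DDL sketch, using a hypothetical employees table invented for illustration (it is reused in the examples below):

    CREATE TABLE employees (
        emp_id   INT PRIMARY KEY,
        emp_name VARCHAR(50) NOT NULL,
        dept_id  INT,
        salary   DECIMAL(10, 2)
    );
    ALTER TABLE employees ADD hire_date DATE;  -- change the table's structure
    TRUNCATE TABLE employees;                  -- remove all rows and their space
    DROP TABLE employees;                      -- remove the table itself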
DML stands for Data Manipulation Language; DML statements work with the data inside schema objects. Some examples:
•SELECT - retrieves data from a database
•INSERT - inserts data into a table
•UPDATE - updates existing data within a table
•DELETE - deletes records from a table; the space allocated for the records remains
•CALL - calls a PL/SQL or Java subprogram
•EXPLAIN PLAN - explains the access path to data
•LOCK TABLE - controls concurrency
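Continuing with the same hypothetical employees table, a short DML sketch:

    INSERT INTO employees (emp_id, emp_name, dept_id, salary)
    VALUES (1, 'Asha', 10, 45000.00);

    UPDATE employees SET salary = 50000.00 WHERE emp_id = 1;

    SELECT emp_id, emp_name, salary
    FROM employees
    WHERE dept_id = 10;

    DELETE FROM employees WHERE emp_id = 1;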
DCL stands for Data Control Language; DCL statements control access rights to data. Some examples:
•GRANT - gives a user access privileges to the database
•REVOKE - withdraws access privileges given with the GRANT command
Closely related are the transaction control (TCL) statements, which are often grouped with DCL. Some examples:
•COMMIT - saves the work done
•SAVEPOINT - identifies a point in a transaction to which you can later roll back
•ROLLBACK - restores the database to its state as of the last COMMIT
•SET TRANSACTION - changes transaction options, such as which rollback segment to use
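A sketch of both kinds of statement, assuming a hypothetical user test_user and the same employees table:

    GRANT SELECT, INSERT ON employees TO test_user;
    REVOKE INSERT ON employees FROM test_user;

    -- Transaction control:
    INSERT INTO employees (emp_id, emp_name, dept_id, salary)
    VALUES (2, 'Ravi', 10, 40000.00);
    SAVEPOINT after_insert;
    UPDATE employees SET salary = 60000.00 WHERE emp_id = 2;
    ROLLBACK TO after_insert;  -- undoes the UPDATE but keeps the INSERT
    COMMIT;                    -- makes the INSERT permanent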
To recap, the tutorial so far has covered:
  • Why database testing is necessary
  • Differences between backend testing and front end testing
  • Backend testing phases / Database Testing Phases
  • Backend test methodology / Database Testing methodology
  • Basics of SQL
Now let's put more focus on SQL statements:
Section 1
1.1 Basics of the SELECT Statement
1.2 Conditional Selection
1.3 Relational Operators
1.4 Compound Conditions
1.5 IN & BETWEEN
1.6 Using LIKE
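As a hedged preview of Section 1, here is a SELECT combining a compound condition with IN, BETWEEN, and LIKE, again on the hypothetical employees table from above:

    SELECT emp_name, salary
    FROM employees
    WHERE dept_id IN (10, 20)
      AND salary BETWEEN 30000 AND 60000
      AND emp_name LIKE 'A%';  -- names starting with 'A'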
Section 2
2.1 Joins
2.2 Keys
2.3 Performing a Join
2.4 Eliminating Duplicates
2.5 Aliases & In/Sub-queries
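A preview of Section 2, assuming a hypothetical departments table with columns dept_id, dept_name, and location: a join using aliases, DISTINCT to eliminate duplicates, and a sub-query:

    SELECT DISTINCT e.emp_name, d.dept_name
    FROM employees e
    JOIN departments d ON d.dept_id = e.dept_id
    WHERE e.dept_id IN (SELECT dept_id
                        FROM departments
                        WHERE location = 'Hyderabad');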
Section 3
3.1 Aggregate Functions
3.2 Views
3.3 Creating New Tables
3.4 Altering Tables
3.5 Adding Data
3.6 Deleting Data
3.7 Updating Data
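A preview of Section 3: an aggregate query and a view, again over the hypothetical employees table:

    SELECT COUNT(*) AS headcount, AVG(salary) AS avg_salary
    FROM employees;

    CREATE VIEW high_earners AS
        SELECT emp_id, emp_name, salary
        FROM employees
        WHERE salary > 50000;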
Section 4
4.1 Indexes
4.2 GROUP BY & HAVING
4.3 More Sub-queries
4.4 EXISTS & ALL
4.5 UNION & Outer Joins
4.6 Embedded SQL
4.7 Common SQL Questions
4.8 Nonstandard SQL
4.9 Syntax Summary
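And a preview of Section 4: an index, GROUP BY with HAVING, and an EXISTS sub-query, still on the same hypothetical tables:

    CREATE INDEX idx_emp_dept ON employees (dept_id);

    SELECT dept_id, COUNT(*) AS headcount
    FROM employees
    GROUP BY dept_id
    HAVING COUNT(*) > 5;

    SELECT d.dept_name
    FROM departments d
    WHERE EXISTS (SELECT 1
                  FROM employees e
                  WHERE e.dept_id = d.dept_id);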
Download from here – SQL For Testers – Part 2

Agile Testing Methodology

The present trend in software development is a race to achieve targets, quality, and customer satisfaction within a limited time frame, mostly because today's business scenario is different from what it was a few years ago. Most product development companies are now adopting a new-age concept called "Agile Methodology" for the software development life cycle.
Software testing has been a prime focus ever since the IT industry realized the importance of the quality of deliverables, no matter what development methodology is followed. Since software testing is part of the business model, the testing process also needs to change accordingly; the result is so-called "Agile Testing".

Introduction to Agile Testing
While the application under test is still evolving with customer needs, the mindset of the end user, and current market conditions, it is highly impractical to follow the usual standard SDLC models such as Waterfall or the V-Model; those models are best suited to applications that are stable and non-volatile. "Time to market" is the watchword in today's IT business, and it compels software vendors to come up with new strategies to save time and resources, cut costs, and at the same time deliver a reliable product that meets user requirements. In this situation, a reasonably good amount of end-to-end testing is carried out, and the product may be accepted with known issues/defects at the end of an intermediate release, provided those defects do not harm the application's usability.
To adopt such a process in a systematic way, we have a new concept called Agile Methodology. This methodology continuously strives to overcome the issues of dynamically changing requirements while still trying to maintain a well-defined process.

The process is as follows:
1. The Customer prepares the business requirements, and the Business Analyst or the Engineering team reviews them. Ideally, the Quality Assurance/Testing team is also involved in reviewing these requirements in order to plan the later stages accordingly.

2. During the Design and Implementation stages, the Engineering team writes user stories and analyzes issues at various stages. The Customer reviews these on a regular basis and updates the requirement specifications accordingly. The Testing team follows up at every stage until consolidated documentation is prepared. This ensures that the Customer, the Engineering team, and the Testing team are always on the same page, and thus ensures complete test coverage.

3. While the Engineering team starts the implementation, the Testing team starts with test planning, test strategy, and test case preparation. These are properly documented and handed over to the Customer and the Engineering team for review, to ensure complete test coverage and to avoid unnecessary or redundant test cases.

4. As and when the developer implements the code, the Testing team determines whether the application can be built from this code for quick testing. This identifies defects at an early stage so that the developer can fix them on a priority basis in the next round and continue with further development. This iteration continues until the end of code implementation. Once the testing cycle starts, the Testing team can then focus more on major test items such as Integration, Usability, and System Testing.

Scope of testing an application
The Testing team knows the complexity involved, and the Customer accepts that software development and/or enhancement, and hence testing, is a continuous process. Testing the application at a black-box level is sufficient for the Testing team to identify issues and raise defects. The application continues to evolve until it reaches final acceptance, so the scope of testing continues to evolve with the Customer's needs.

Process followed at various stages in the product life cycle
Every intermediate release of the product is divided into two short cycles, usually of about 40 days each. Each cycle is executed in the following stages, with the roles and responsibilities of every individual and team clearly defined for each stage.

- Design Specifications: The Testing team’s efforts would focus on performing any tool or process improvements and reviewing, understanding, and contributing to the nascent specifications.
- Implementation: While the Engineering/Development team is implementing the code, Testers develop a complete Test Plan and Test Sets (sets of test cases) for each of the features included in the cycle. Engineering-level features must also be included; their Test Sets would likely require some collaboration with the feature's developer. All Test Sets should be ready to execute by the end of the implementation period of the respective cycle. After Test Set preparation, estimate and prioritize Test Set execution based on the complexity and expected execution time of each test suite.
- While test execution time is notoriously difficult to estimate, this number provides the Customer with a starting point for benchmarking.
- Testing/QA: Execute the Test Sets, raise defects, and follow up with the Engineering team; validate the defects end to end. Simultaneously focus on improving the quality of the test cases, watching out for and adding new cases as testing proceeds. Test the software end to end to discover regressions and subtle systemic issues, and learn to use the available time to uncover the largest number of, and the most important, bugs. Any deviation from the estimated time should be communicated well in advance, so that the schedule can be reworked depending on the priority of the pending tasks. If certain issues or test cases are blocked by unknown errors, they are deferred until the beginning of the next Testing/QA cycle.
- Before acceptance: Follow up on ad-hoc requests/changes in requirements on a regular basis, besides trying to complete the defined tasks.

Looking at the broad areas of testing for a complex application (the system structure, the depth of functionality implemented, and the level of complexity), complete end-to-end test execution within a limited time frame would be next to impossible with any standard SDLC model available. The product/application involves a good deal of learning about how and when to use it, and requires the end user to know the functionality of the various modules before constructing a User Defined Model (UDM) for business purposes or even for testing.


Reasons for Agile Testing methodology to test an Application

Testing in this case cannot be 100% complete before the finished product reaches the hands of the end user, especially when the target audience and its system structure are unknown. Different users have their own sets of ideas and unique problems, so it is hard to say the software is 100% bug-free when it reaches the customer. Taking into account these constraints of the standard SDLC models, it is worth adopting an approach such as Agile Testing, which is better suited to dynamically changing requirements.

Context Driven Testing and Rapid Testing:
This type of testing is what agile testers usually practice. Context-driven testing is testing in which the test scenarios are not known beforehand; they mostly emerge from the context in which the application is executed. In our case, this means constructing UDMs based on a given use case from the targeted end user and testing them against the scenarios implied by that user's system configuration. It also includes constructing UDMs based on any troubleshooting discussion arising out of defects reported during customer acceptance and testing.

Rapid Testing is a process of defining our own strategies to test the software, without necessarily following any specific process or model. It focuses mainly on identifying defects as quickly as possible rather than on the end user requirements.

A heuristic approach is used in such cases: the tester uses common sense and previous work experience to test the application at various levels in order to figure out where the application stands.

References: http://www.testing.com/agile

Jul 4, 2012

Selenium for Web Apps Testing

Every day there is something new in the software world. As QAs, we need to be on top of our profession; once again, learning is the key to our continued success.
Last week, I had a chance to review (again) some of the most popular automation testing tools in the market today.
Based on my own research and reviews, I am no longer surprised that free open-source tools still lead in popularity over the more expensive proprietary software from big-name vendors.
I picked a couple of tools that were mentioned as favorites in a couple of QA forums, then came up with my own list: JMeter, Selenium, Watir, C#, and Java.
In this post, we will concentrate on one tool called Selenium.
Learning Selenium was a real breeze.
  • First of all, there are so many sites dedicated to teaching the ins, outs, and whys of Selenium. 
  • Second, it is simple to use, with an easy-to-navigate interface that comes as an add-on to the Firefox browser.
Even if you are new to QA, with some technical knowledge you could be creating your first few Selenium test cases within a day.
My verdict: Selenium is really a good tool for testing web applications. It is free, and there is an excellent community of technical and non-technical users who are willing to share their knowledge and expertise.
Instead of making this post too long, I would suggest you get started so you can try it for yourself.
In your browser, just Google the following phrases:
  • First, “Selenium Tutorial”
  • Or “Selenium Installation” to be more specific
  • Then “Selenium Sample Test Cases”

Role of Tester in Agile Testing

The most challenging role to adapt to agile development is that of the tester, as agile development contradicts so many things that testers have been taught as "best practice".
Many testers gained their training or experience with the waterfall model and have spent a lot of time practicing the V-Model, in which system testing correlates directly with the system specification and, in general, commences once software development is complete. In that world the tester's life is straightforward: ensure the product works as per the specifications.
With agile, life becomes a little more complicated: there is no comprehensive documentation (which the traditional models provided), feature details come with less documentation and more verbal collaboration, and testing starts at a very early stage of the software life cycle and continues while the product is still being developed. In other words, the target is always in motion.
This is a real challenge, in my view. Test cases are written up front, before the software is actually developed, so that the acceptance tests form a significant part of requirements analysis; tests are automated at a low (code) level and implemented by developers. Most significant of all is the greater emphasis on automating regression testing.
All of this demands changes in the role of the tester (the agile tester):
  • With test cases written up front, the demarcation between requirements analysis and test analysis diminishes. 
  • User stories dilute the difference between test analysis and requirement scenarios. 
  • Automation has a significant role.
  • With unit tests automated, testers need to ensure the completeness of the tests and their appropriateness (that all important scenarios have been covered). 
  • Avoid duplication, as it increases overhead. 
  • A proactive and collaborative approach lets the whole team unanimously say "all is well" at the end.
I would appreciate it if anyone can add more points to this article.