Nov 29, 2012

10 Things to do in Sprint 0 (ZERO)

Quite often, Scrum projects and teams assume they can start developing from day 1 of the project.
This is only partially true and depends on your definition of ‘Start of the project’.

If ‘Start of the project’ for you means “we have our team ready, give me the prioritized backlog, we’ll estimate so we can start developing”, then you can indeed start developing immediately.
However, in most cases you don’t have a backlog or a team ready when the GO for a project is given. In these cases you need a ‘Sprint 0’.

During Sprint 0, you need to do the set-up of your project and prepare the necessary deliverables to get started. Typically you’ll have these stories on your “Sprint 0” sprint backlog:

1. Who is or will be involved in the project?
Identify all stakeholders, project board members and core team members.

2. How will the project be organized?
Define your approach, communication plan, basic set-up …

3. What are the main risks, constraints and assumptions?
Clarify your risks, constraints and main assumptions (in time-boxed sessions).

4. What is the purpose and scope of the project (vision)?
High level vision of expected outcome and implementation.

5. What do I want to develop?
Gather the business requirements and prepare the product backlog.

6. Who do I need to produce the product?
Get your team or teams together.

7. By when do we need to deliver something (first release, full project)?
Develop a high-level schedule as a planned burn-down chart with a target velocity.

8. How are we going to build it?
(if applicable, mostly for larger projects) Document a high level architecture overview with impacted applications and their data-flow.

9. How much money do we need?
Estimate the required budget to get your project delivered.

10. Is the technical set-up ready or available?
Prepare your development, test, deployment, CI frameworks and environments.

 A typical “Definition of Done” for a Sprint 0 could be:
  • Project Charter ready (containing vision, stakeholders, approach, …)
  • Product Backlog ready (prioritized and estimated requirements)
  • Team ready (who is going to work on the project)
  • Architecture Overview done (applications and data/communication flow)
  • High level schedule / burn-down chart ready
  • High level business case ready
These are, in my opinion, the steps required for a controlled start of your project.
Do you have others in mind?
Resource: PM Scrum

Nov 27, 2012

What is the needed quality? Or…?



Is it possible to test an application without a test environment? To ‘test’ as early as possible in the application lifecycle, before dynamic testing can even start? That’s something I’ve been looking into over the last few weeks: collecting information about doing as little testing as possible while still getting the result needed, and without taking on more or higher risks. This post is part of my research.

At this client the test team was asked to carry out the testing without a test environment. They were building a new application for the client, but the client didn’t want to invest in a new test environment; the only environment available was production. The application was high profile, however, and needed to be tested thoroughly. One could argue for testing on production since it was a new application, but that was not possible: it couldn’t be left ‘open’ for a long period, and the interfaces were live, which could have become tricky. So they decided on another approach.

Physical availability of the business users
The team decided to pursue a full risk based approach. With a risk based approach they were going to conduct very thorough and extensive risk assessments to determine how to check the artifacts for defects.
So they went on and held risk analysis assessments with the client. That meant they needed business users to be physically available throughout the risk assessments, something that is very, very difficult to organize for most test projects. But the client accepted, because the test manager could show that if these people were not available for the sessions, the project (and product) risks would skyrocket as a result of the missing test environment. All business users were made available, as were people from the delivery process such as hardware and security specialists; everyone involved in the project took part. The risk assessments were attended by 50-60 people, including the client sponsors, the client user team, the project delivery teams and live support staff.

Risk Analysis Workshop
The actual setup consisted of workshops to determine the risks and requirements. These workshops were split up. First a workshop was held to determine the client’s needs: the requirements. Everyone involved decided together what the software needed to do and how it should work, functionally and non-functionally.
Next came various breakout sessions. In small groups, everyone involved tried to formulate very detailed risks. The key question of the sessions was ‘what would hurt you the most’; in other words, what pain would you feel if the application failed, what would the impact be. In those sessions specific areas of expertise were distributed over the groups. As a result the risks were of very high quality and very detailed; initially 110 risks were drawn up, and interviews later added another 21. At the end everyone had to set the priority of the impact.
After these breakout sessions a rotation was done so that another group looked at the risks of one group and added their ideas to them, using different colors to show the variations. Any discrepancies were discussed at the end. The complete risk assessment considered factors such as the number of affected users, system recoverability, external visibility of problems, etc. All stakeholders left the workshops with the same view of the risk distribution.

Likelihood determined separately
As said, these sessions only determined the impact of each risk; likelihood wasn’t determined there, as that was the job of the IT and delivery teams. About two weeks after the first risk analysis, the technical people decided in the same type of sessions what the likelihood of these risks would be. They were asked because assessing likelihood is about technical complexity, familiarity with the technology and the capability of the development staff, so business users were not involved: they didn’t have the needed view.
Where more detail was needed, interviews provided the missing information. Business users were interviewed later in the design process to get a lower level of granularity on the high-exposure risks. This added another 21 risks to the playing field, reminding the team to go through their risk analysis again.
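In practice the two assessments come together as a single risk exposure per risk, which is then used to rank the test effort. Below is a minimal sketch of that idea, assuming a simple 1-3 scale for impact and likelihood and exposure = impact x likelihood; the scale and the High/Medium/Low thresholds are my own illustration, not necessarily what this client used.

' Combine the business impact and the technical likelihood into one exposure score.
' The 1-3 scale and the thresholds are illustrative assumptions.
Function RiskExposure(impact, likelihood)
    RiskExposure = impact * likelihood
End Function

Function ExposureClass(exposure)
    If exposure >= 6 Then
        ExposureClass = "High"
    ElseIf exposure >= 3 Then
        ExposureClass = "Medium"
    Else
        ExposureClass = "Low"
    End If
End Function

' A risk rated impact 3 by the business and likelihood 2 by the IT team:
MsgBox "Exposure: " & RiskExposure(3, 2) & " (" & ExposureClass(RiskExposure(3, 2)) & ")"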

Evaluations and quality gates
The technical team then decided how they were going to test the artifacts connected with these risks. By artifacts they meant the documentation or prototypes of the application. Because the whole technical team decided on the measures to be taken, they were bought in: not just the quality team decided how to test, but the whole team. They needed to look at each risk, the related artifact and how it could be checked. As a result, documentation was subjected to evaluations (reviews and inspections) and certain quality gates were set.
Evaluations focused on the documentation quality:
  • Documents describing high risk exposure functionality or system components received more detailed reviews.
  • In some instances particular sections of a document received extra attention due to the risk profile.
  • Important defects were resolved in Requirements and Design documents, so they never got into the code (no faults forward).
As a result also the delivery team had a shift in focus:
  • More effort was put into designing the components that had the highest risk exposure.
  • Coding of high risk exposure technical components was completed first.
  • Code reviews were more detailed for high risk exposure components.
  • Each test team (including developers) distributed their effort by direct reference to the risks.
  • Tests were designed based on the risk exposure. Functionality where no risks were raised was included as low risk.
  • Selection of test techniques was based on risk as well as test stage e.g. equivalence partitioning or random testing for low risk and boundary value analysis for high risk exposure.
  • More time was spent writing test cases for high risk exposure areas and much less effort spent documenting test cases for low risk.
  • Tests for high risk exposure areas were executed first; medium risk tests were left to the second iteration, and low risk was tested last.
And management and business users also had to focus differently:
  • Regular test reporting statistics provided detail but also grouped together High, Medium and Low risk tests within a test phase.
  • Project management could see a summary of progress and pass rates by risk exposure.
  • Using the test design, business users could work out which specific risks had been mitigated and which hadn’t, but didn’t have to if they didn’t want to.
  • Business users could easily understand what the test team had covered and therefore focus their business testing efforts without duplicating what was already done.
  • A high level of trust was formed between the delivery team, test team and the business users.
Shift in quality effort
The result was a shift in quality effort from most effort at the end (Testing and Acceptance) to one where quality measures started early in the application lifecycle.

Blue was the normal, traditional situation, with most effort at the end. The new situation is green, where most effort is spent early in the lifecycle.
Note: why is Acceptance still bigger than Testing? Acceptance is an activity that must be done; it is legally required, and therefore the effort is bigger compared to testing.


End of project
In the end the whole team tested the end-2-end chain of the application, the real business users were involved and the delivery team found 90% of the defects before any dynamic testing was done! Over 1500 defects were found during evaluations and code reviews. The delivery was successful! Go-live was on time and business users agreed sufficient testing was done. In the end no production incidents were encountered during the first 30 days after go-live.
What about the costs? Project financials were better than estimated; the work was delivered within the agreed budget.
So not only was the quality high (enough): costs were down and time to market was as planned. And the risks were down! Just by doing an extensive risk analysis.

18 ways to test the login function


Multiple ways to test the Login Function

Last week I visited four different customers: a telecom operator, an insurance company, a government department and an IT company. With all of these customers we had a discussion about security testing; at this moment these organizations don’t test application security enough.
In each of these conversations I explained how a functional tester would test the login function of an application and how a security-minded tester would do it. There is a difference between the two: a functional tester will stop after 4 or 5 test cases, while a security tester will easily execute 18 or more test cases, with a totally different point of view.
This post describes a number of these test cases. If you have more of them, please let me know so I can add them to the post. If you are a (functional) tester you can use these examples during your test execution.

1) Correct combination of username and password
The most basic test is a correct combination of username and password; everybody knows the expected result. But once in a while even a good combination doesn’t work, possibly caused by a misconfiguration of the application.

2) Incorrect combination of username and password
For example a correct username and a wrong password, or a password that exists in the database but is linked to another user. If the combination is wrong, most of the time you don’t get access to the application behind the login function/page and you get an error message.

3) Correct username and empty password
Most applications are coded in such a way that you don’t get access with this test case. But in some situations you get “something unexpected went wrong, please check the error log”.

4) Correct password and empty username
In this case you could expect the same result as in the previous test case. And this is true most of the time, and this is where a lot of (functional) testers will stop, because the application fulfills the expectations given by the functional design and/or requirements. But there is much more: there are more risks and more opportunities to enter the application, even without logging in.

5) Check the meaning and information in the error
Information and information leakage is food for hackers. Specific errors and stack traces are full of information that can be misused by a hacker. There are three possible error messages when a login fails:
  • Incorrect combination of username and password
  • Incorrect username
  • Incorrect password
Only the first one is correct. The second and third are useful from a usability perspective but give an attacker too much information, because if you get the message that the password is incorrect, you already know half of the access code.
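If you want to automate this check, a minimal sketch in QTP-style VBScript could look like the following; the object descriptions and the leaked-message texts are assumptions for illustration, not taken from a real application.

' Flag login error messages that reveal which half of the credentials was wrong.
' The Browser/Page/WebElement descriptions and the message texts are illustrative assumptions.
errorText = Browser("title:=Login").Page("title:=Login").WebElement("class:=error").GetROProperty("innertext")

If InStr(1, errorText, "username is incorrect", vbTextCompare) > 0 Or _
   InStr(1, errorText, "password is incorrect", vbTextCompare) > 0 Then
    Reporter.ReportEvent micFail, "Login error message", "Message leaks which part of the credentials is wrong: " & errorText
Else
    Reporter.ReportEvent micPass, "Login error message", "Message is generic: " & errorText
End If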

8 ) Correct login, logout and go back with the browser button
If a session isn’t closed at the server’s end and you go back with the <BACK> browser button, you can sometimes continue with the session created earlier in the process. This means the logout function isn’t working correctly. What can happen?
For example in an internet café: if people start directly after somebody else is done and go back, they can access that person’s personal data. Think about email functions on social network sites. If they change your password into one you don’t know, they can do whatever they want: they can see which registration emails from other companies are in your inbox and request new passwords.
Tip: if you’re using the internet on a public computer, be aware of what you’re doing and always close the browser after surfing.

9) Go directly to a page without use of the login function
Maybe this test case sounds a little strange, but it works surprisingly often. What to do: log in with a correct combination of username and password, go to a specific page behind the login that requires authentication, copy the URL and log out. Now start the test: paste the address into the URL field (without using the login page). If you get to the page directly without logging in, the owner of the application has a big problem, and so do all users with an account: everybody who knows the direct URL can get access.
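In QTP-style VBScript such a check could look roughly like the sketch below; the URL and the object description are made-up assumptions for illustration.

' Try to open a protected page directly, without logging in first.
protectedUrl = "https://www.example.com/account/overview"   ' assumed URL
SystemUtil.Run "iexplore.exe", protectedUrl

If Browser("title:=Login").Exist(10) Then
    Reporter.ReportEvent micPass, "Direct URL access", "The request was redirected to the login page."
Else
    Reporter.ReportEvent micFail, "Direct URL access", "The protected page opened without authentication!"
End If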

10) Check the sustainability of the session
What time is needed to do business in an online shop? 2 hours without any action on a site? 2 days? 2 months? I know some webmail clients where the session stays valid for many years. This is not an issue people can misuse directly, but in combination with other vulnerabilities it becomes easy to misuse. Therefore log in with a valid account, go away, drink a coffee, a beer, and have a sleep. If you come back and the session is still valid (you can use it without a new login), you have an issue!
Tip: In my opinion, more than 20 minutes without any action on a specific site should lead to a new session.



11) Check if the login function is already HTTPS
The ‘S’ in HTTPS stands for Secure: the message is encrypted during transport. If the login page is HTTP and only the conversation after login is HTTPS, the first step is insecure, because the first request to the server is sent in plain text.
The login page (before you fill in the fields with your credentials) must already be HTTPS.

12) Change the ID into an ID of another account
If you’re browsing through an application and you see in a header or in the URL an ID that is directly related to your bank account, personal ID, username or anything else, try changing it into someone else’s. This works surprisingly often. A couple of months ago a colleague of mine tested a banking application where this happened: if he had wanted to, he could have used every account the client had for transferring money and much more. To test this, log in a couple of times and check whether the ID is static (the same each time); if it is, try this test case, and maybe it works fine (which means something is wrong)!

13) Copy the ID and paste it in another browser
If you can continue the session in a new browser, you also have a big problem: people who steal your session can continue with it and do a lot of things you won’t like. By another browser I mean really another browser, not just another tab in your current browser. The best way to test this is to send the ID to another computer and continue the session there (don’t shut down the session in the first browser before the second one is open).

14) Delete the ID and continue browsing
Does the ID in the cookie or URL really have value in the conversation between your client and the server? Delete all IDs step by step and continue browsing; check which ID really has value. Hopefully it is not your account or bank number, because if it is, you can execute another test: changing the number.

15) Check the possibility of SQL-Injection
Sometimes everything works fine and all requirements are in place. But what if SQL injection is possible and you can enter the application using a statement to log in? Try a single quote in the application under test, or 1' or '1'='1. If this is treated as code instead of plain data in the back end and the database executes it, you can sometimes log in. There are many more SQL injection possibilities; search the web for other variations.
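Why does that single quote matter? The sketch below (with a made-up table and made-up input, purely for illustration) shows how naive string concatenation turns the classic ' or '1'='1 payload into a condition that is always true.

' Illustration only: how unsanitised concatenation builds a vulnerable SQL statement.
userName = "admin"
password = "x' or '1'='1"

sql = "SELECT * FROM users WHERE name = '" & userName & "' AND password = '" & password & "'"
MsgBox sql

' Resulting statement:
'   SELECT * FROM users WHERE name = 'admin' AND password = 'x' or '1'='1'
' The trailing or '1'='1' is always true, so the login check can pass without a valid password.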

16) Multiple times incorrect password
Locking accounts after 3 or 10 incorrect attempts isn’t simply right or wrong. What matters more is whether the requirement is described in the functional or technical design, because the choice of locking an account after 3 incorrect attempts isn’t right or wrong in itself; you have to know the consequences.
An application that never locks accounts is open to a brute-force attack: an attack where hackers try to guess your username or password using a tool that systematically tries all options.
When accounts are locked, another problem occurs: lockout. If an account is no longer available after three incorrect logins, you can’t use it anymore until you call the helpdesk or start the procedure to get a new password. With this lockout you can make complete companies unavailable.
Locking accounts or not isn’t right or wrong; make the choice between them based on the risks and the mitigations you can take.

18) Log in with the same account in two different browsers
Have you ever tried this with your application under test? Sometimes the conversation with the first browser is broken, or actions executed in the first are displayed in the second. This usually means you have one (the same) session at the server instead of two, even though you set up two different connections.
Most of the time this means the procedure for setting up a connection doesn’t work correctly.

Most of these eighteen test cases can’t be derived from the documentation or other oracles; knowledge and skills are important to execute them.
Requirements for these situations usually aren’t in the documentation either, so using formal test design techniques isn’t enough: you also need some creativity to explore the application.



If you have more test cases to test the login function let me know.

Resource: Testing the Future

Nov 21, 2012

Story from a company that built "the best software testing tool"


I am sure you have heard stories of UFO and alien sightings from many parts of the world. Ask the sales team of this famous company that built "the best software testing tool" and they would tell you that the alien visits are because of their tool. The aliens, according to the sales team, need a one-stop solution for all their automation testing needs, and that is why they visit us.

If you think I am exaggerating, you are unnecessarily optimistic about your intelligence ;-P. Here is a story that I partly witnessed.

The company that built the best software testing tool, apparently, also built other software products. One of them was not related to testing but was something that a lot of people need to use on their computers. They decided that some checking activity needed to be done; for example, they wanted to check whether certain links load as per the mock. This was a no-brainer of a job, and this company believes in automation. So did they automate those checks? If your guess is yes - congratulations - you are wrong again.

They outsourced the work to some large services company in India and called it "compatibility testing work", which it wasn't. Large Indian services companies only care about three things: how long is the work, how many people can we bill, and how will this help us answer investor questions about revenue?

They care neither about testing nor about the future of the testers they hire. The job of a tester on this project is to "check" whether the page loads, all images load and a bunch of other things load as compared to the mock and design. So testers have hundreds of links to open and must report how many links did not load properly.

What is happening to testers on this project?
  • The testers on this project feel testing is a boring job. They are made to believe that they are doing testing while they are not.
  • The only thing they have been doing over the last couple of years is to wait for links to load and say "Pass" or "Fail". That's pure checking, and not something to be done by humans for those thousands of links.
  • The bigger problems are written below
  • As they try to move out, the world isn't considering them as testers because the only thing they have been doing is to wait for links to load.
  • Of course, those testers can put big brand names in their resume but they realized nobody cares for it anymore.
So, is that the end of the story? Nope! The above is just a premise of the actual story.

Several months later, the same large services company of India got an enquiry from one of their existing customers for testing page loads compared to mocks, but the customer did not need humans to do it; they needed an automation solution instead. Notice how carefully I am not using the phrase "test automation" in the previous sentence.

The services company agreed to do it, and a team of automation testers (by designation) pulled out the best software testing tool and wrote a bunch of tests, at times using record-playback and then tweaking, to get it done. A one-time effort, and as long as the mocks or designs don't change, this solution works. It shipped. Works well on rough seas.

Somebody within the services company noticed these two projects and said what I would have said, "Hey, the best testing tool serves the purpose of what we seem to be doing manually all this while", and then the businessmen inside said, "Sssh! No, it does not. We have a 5-year contract for 20 people and you know what that means to the business?" As far as I can tell, it means million(s) of US dollars.

However, the company that built the best software testing tool is not ignorant of this. They are well aware of their tool's capabilities but seem to fail in putting their own tool to use for another product within their own company. When someone actually told this to the company that built the best testing tool, they were excited. Excited not because they could free some humans from doing what they were never supposed to be doing, but because they could change their marketing communication with sentences like, "Case study of how a large services company used our best testing tool to test millions of links". Interestingly, they show how the large services company saved millions of dollars for their customers, while they themselves were paying millions of US dollars to have manual checking done where they did not want to use their own tool.

Moral(s) of the story :
  1. Humans are made to run tests that humans were never supposed to.
  2. When automation kicks in, comparisons between humans and automation also kick in, indicating how poor their understanding of testing is.
  3. Business decisions can decide how boring testing can become.
  4. Most large services companies remain large at the expense of killing software testing and upcoming testers.
  5. Software testing tools are as useful as spoon and fork - they are needed everyday but shouldn't cost as much as a Ferrari.
  6. The world needs more bold people than just more skilled testers at the moment. So if you are focusing on training testers, don't just teach them testing.
  7. Those who trade their time for money and are designated as testers aren't testers anyway.
  8. Most often, companies that are proud of building the best or popular software testing tool are actually putting the field to shame.
  9. The best software testing tool is always the human brain. It can operate in "non thinking" mode too, and unfortunately that seems to be the popular mode among most testers, since they are paid for it.
Resource: TesterTested

Thoughts on Test Data

A number of RBCS clients find that obtaining good test data poses many
challenges. For any large-scale system, testers usually cannot create sufficient
and sufficiently diverse test data by hand; i.e., one record at a time. While data-generation
tools exist and can create almost unlimited amounts of data, the data
so generated often do not exhibit the same diversity and distribution of values as
production data. For these reasons, many of our clients consider production data
ideal for testing, particularly for systems where large sets of records have
accumulated over years of use with various revisions of the systems currently in
use, and systems previously in use.
However, to use production data, we must preserve privacy. Production data
often contains personal data about individuals which must be handled securely.
However, requiring secure data handling during testing activities imposes
undesirable inefficiencies and constraints. Therefore, many organizations want to
anonymize (scramble) the production data prior to using it for testing.
This anonymization process leads to the next set of challenges, though. The
anonymization process must occur securely, in the sense that it is not reversible
should the data fall into the wrong hands. For example, simply substituting the
next digit or the next letter in sequence would be obvious to anyone—it doesn’t
take long to deduce that “Kpio Cspxo” is actually “John Brown”—which makes
the de-anonymization process trivial.
In addition, Kpio Cspxo and other similar nonsense scrambles make poor test
data, because they are not realistic. The anonymization process must preserve the
usefulness of the data for localization and functional testing, which often
involves preserving its meaning and meaningfulness. For example, if the
anonymization process changes “John Brown” to “Lester Camden,” we still have
a male name, entirely usable for functional testing. If it changes “John Brown” to
“Charlotte Dostoyevsky,” though, it has imposed a gender change on John, and
if his logical record includes a gender field, we have now damaged the data.
Preserving the meaning of the data has another important implication. It must be
possible to construct queries, views, and joins of these anonymized data that
correspond directly to queries, views, and joins of the production data. For
example, if a query for all records with the first name “John” and the last name
“Brown” returned 20 records against production data, a query for all records
with the first name “Lester” and the last name “Camden” must return 20 records
against anonymized data. Failure to honor this corollary of the meaning and
meaningfulness requirement can result in major problems when using the data
for some types of functional tests, as well as any kind of performance, reliability,
or load test.
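One common way to honour this requirement is deterministic pseudonymization: every distinct original value always maps to the same substitute, so record counts for queries and joins are preserved. The following is a minimal sketch of that idea only; the substitute name pool and the mapping scheme are my own assumptions, not an RBCS recommendation, and a real tool would also have to address gender preservation, irreversibility and the other requirements discussed here.

' Deterministic pseudonymization with a lookup table: "John" always becomes the
' same substitute, so a query that returned 20 "John" records against production
' also returns 20 records against the anonymized data.
Option Explicit
Dim mapping, substitutes, nextIndex
Set mapping = CreateObject("Scripting.Dictionary")
substitutes = Array("Lester", "Miguel", "Priya", "Anna")   ' assumed substitute pool
nextIndex = 0

Function Pseudonym(original)
    If Not mapping.Exists(original) Then
        mapping.Add original, substitutes(nextIndex)       ' sketch only: no overflow handling
        nextIndex = nextIndex + 1
    End If
    Pseudonym = mapping(original)
End Function

MsgBox Pseudonym("John") & ", " & Pseudonym("Mary") & ", " & Pseudonym("John")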
Even more challenging is the matter of usefulness of the data for interoperability
testing. Consider three applications, each of which have data gathered over years
and describing the same population. The data reside in three different databases.
The three applications interoperate, sharing data, and data-warehousing and
analytical applications can access the related data across databases. These
applications can create a logical record for a single person through a de facto join
via de facto foreign keys, such as full name, Social Security number, and so forth.
If the anonymization process scrambles the data in such a way that these
integrity constraints break, then the usefulness of the anonymized data for
interoperability testing breaks. Meaningful end-to-end testing of functionality,
performance, throughput, reliability, localization, and security becomes
impossible in this situation. For many of our clients, preserving the usefulness of
the data for interoperability testing poses the hardest challenge.
In addition, the anonymization process must not change the overall data quality
of the scrambled data. This is subtle, because most production data contains a
large number of errors. Some have estimated the error rate as high as one in four
records. So, to preserve the fidelity of the test data with respect to production
data, the same records that have errors must continue to contain errors. These
errors must be similar to the original errors, but must not allow reverse
engineering of the original errors.
A good test-data set has the property of maintainability, and so the anonymized
data must also. Maintainability of test data means the ability to edit, add, and
delete the data. This includes at the level of individual data fields and records,
and, if applicable, across the logical records that might span multiple databases.
To have the property of maintainability, the anonymization of the production
data should not make maintenance of the data impossible, of course, but
furthermore it should not make maintenance of the data any more difficult or
time-consuming than maintenance of the production data.
Two other practical challenges arise with the process of anonymization itself. The
first is the time and effort required to carry out the anonymization process. One
client told an RBCS consultant that they only refreshed their test data from
production every 12 to 18 months, because the test-data refresh process,
including the anonymization, required 4 to 6 person-months of effort and
typically took an entire month to complete. In an organization where staff must
charge the time spent on tasks to a particular project, few project managers felt
compelled to absorb such a cost into their budgets.
The next practical challenge of anonymization relates to the need to operate on
quiescent data. In other words, the data cannot change during extraction of the
to-be-anonymized data. This is nothing more complex than the usual challenge
of backing up databases, but the people involved in producing the anonymized
test data must be aware of it.
Options for production-data anonymization include both commercial and
custom-developed tools. The selection of a data anonymization tool is like the
selection of any other test tool. One must assemble a team, determine the tool
options, identify risks and constraints for the project, evaluate and select the tool,
and then roll out the tool. In this case, these activities would typically happen in
the context of a larger project focused on creating test data entirely or in part
through the anonymization of production data. Our experience with RBCS
clients has shown that such a project requires careful planning, including
identification of all requirements for the anonymized data and the
anonymization process. An organization planning such a project should
anticipate investing a substantial amount of time and perhaps even money
(should commercial tools prove desirable). Trying to do a production-data
anonymization project on the cheap is likely to result in failure to overcome
many of the challenges discussed here. However, with careful planning and
execution, it is possible for an organization to use anonymized production data
for testing purposes.
Resource: SUT

Nov 20, 2012

As a tester learn to learn before you learn to teach.

This blog post is about how you can learn about a product, or rather a project, before you are confident enough to make a fresh mind understand the aspects of what you will be working on or have worked upon.

Once you are on any project, it is usually your project lead or team lead who does the honour of making you understand what the product/project is about. Yes, I understand the frustration some will have gone through while getting trained in classroom-based sessions for days and sometimes months together. It is either a quick run through a slideshow, or the most senior person blabbering only about the achievements they have made on the project, and then putting the onus on the young minds as if a war is about to be fought, with the situation being "Critical, critical, critical..." until the project halts. I myself have undergone such situations in organisations I worked at earlier, where those words ("critical...") were used at least three times a day throughout the year.
  • My approach is a mindmap, built with any mindmap tool.
  • Viewing it can ascertain the level of understanding one has before starting the project.
  • The mindmap covers most of the aspects that one has to learn about the project before stepping into it.
  • It can even be used as training material for any fresh mind coming into the project.
  • It also shows the aspects that were missed, which can then be covered to test the product better.
  • This could well be expanded into more minute details based on contexts.


Source: VividTest

Nov 15, 2012

Descriptive Programming-QTP

Object Repository:

♦  The Object Repository (OR) stores object information in QTP. The Object Repository acts as an interface between the test script
    and the AUT in order to identify the objects during execution.

♦  QuickTest has two types of object repositories for storing object information:

    1. Shared object repository
    2. Local object Repository

Shared Object Repository:

 A Shared Object Repository (SOR) stores object information in a file that can be accessed by multiple tests. The file extension is .tsr. This is the most familiar and efficient way to save objects.

Local Object Repository:

A Local Object Repository stores object information in a file that is associated with one specific action, so that only that action can access the stored objects. The file extension is .mtr.

Example:
'Username="start-testing.blogspot.in"
'Password="qtp" 
 
'***********************************Login to gmail account  using  Object Repository ***********************************
 
'Launch gmail
systemutil.Run "iexplore.exe","http://www.gmail.com"
 
'Wait till browser loads
Browser("Gmail: Email from Google").Page("Gmail: Email from Google").Sync
 
' Enter  Email id in Username Field
Browser("Gmail: Email from Google").Page("Gmail: Email from Google").WebEdit("Email").Set  "qtpworld.com"
 
'Enter password in Password Field
Browser("Gmail: Email from Google").Page("Gmail: Email from Google").WebEdit("Passwd").Set  "qtp"
 
'Click on the Sign In Button
Browser("Gmail: Email from Google").Page("Gmail: Email from Google").WebButton("Sign in").Click

Descriptive Programming(DP):

♦ Entering object information directly into the test script is called Descriptive Programming. In DP, we "manually"
   specify the properties and values by which the relevant object will be identified. This way QTP won’t search for the
   property data in the Object Repository, but will take it from the DP statement.

♦ Object Spy is used to get the properties and their values to uniquely identify the objects in the application. If we know such
   properties and their unique values that can identify the required object without ambiguity, we need not use Object Spy.

 Two ways of descriptive programming:

 1. Static Programming
 2. Dynamic Programming


1.Static Programming: We provide the set of properties and values that describe the object directly in a VBScript statement.

    Example:

'Username="start-testing.blogspot.in"
'Password="qtp" 
 
'*****************************Login to gmail account  using  Static descriptive Programing ****************************** 
 
'Launch gmail
systemutil.Run "iexplore.exe","http://www.gmail.com" 
 
'Assign object property  value to a variable pwd
pwd="Passwd" 
 
'Wait till browser loads
Browser("title:=Gmail: Email from Google").Page("title:=Gmail: Email from Google").Sync 
 
' Enter  Email id in Username Field
Browser("title:=Gmail: Email from Google").Page("title:=Gmail: Email from Google").WebEdit("name:=Email").Set  "qtpworld.com" 
 
'Enter password in Password Field
Browser("title:=Gmail: Email from Google").Page("title:=Gmail: Email from Google").WebEdit("name:=" & pwd).Set  "qtp" 
 
'Click on the Sign In Button
Browser("title:=Gmail: Email from Google").Page("title:=Gmail: Email from Google").WebButton("name:=Sign in").Click

2. Dynamic Programming:
We add a collection of properties and values to a Description object, and then enter the
    Description object name in the statement.
 
     Example:


'Username="start-testing.blogspot.in"
'Password="qtp" 
 
'*****************************Login to gmail account  using  Dynamic descriptive Programing *************************************
 
'Launch gmail
systemutil.Run "iexplore.exe","http://www.gmail.com" 
 
'Descriptive object to identify  Browser  with a particular title
Set  Dbrowser=description.Create
Dbrowser("micclass").value="Browser"
Dbrowser("title").value="Gmail: Email from Google" 
 
'Descriptive object to identify  Web page with a particular title
Set  Dpage=description.Create
Dpage("micclass").value="Page"
Dpage("title").value="Gmail: Email from Google" 
 
'Descriptive object to identify a  particular Web Button
Set  Dbutton=description.Create
Dbutton("micclass").value="WebButton"
Dbutton("name").value="Sign in" 
 
'Descriptive object to identify  Web Text Box
Set Dedit=description.Create
Dedit("micclass").value="WebEdit"
Dedit("name").value="Email" 
 
'wait till browser loads
Browser(Dbrowser).Page(Dpage).Sync 
 
' Enter  Email id in Username Field
Browser(Dbrowser).Page(Dpage).WebEdit(Dedit).Set  "qtpworld.com" 
 
Dedit("name").value="Passwd" 
 
'Enter password in Password Field
Browser(Dbrowser).Page(Dpage).WebEdit(Dedit).Set  "qtp" 
 
'Click on the Sign In Button
Browser(Dbrowser).Page(Dpage).WebButton(Dbutton).Click

Which approach is best in QTP, Object Repository vs. Descriptive Programming  ?


♦ There really is no “best way”.

♦ Use the method that gives your company the best ROI(Return On Investment), whether that be Object Repository (OR),
   Descriptive Programming (DP) or a mixture of both.

When to use Descriptive Programming?

Following are some of the Scenarios where Descriptive programming is used:

Scenario #1:  We need to start automation before the build is released.
                     OR limitation: there is no application yet from which to create an Object Repository.

Scenario #2:  The application under test has dynamic objects.
                     OR limitation: dynamic objects are difficult to handle using an Object Repository.

Scenario #3:  The application under test has objects that are added at run time.
                     OR limitation: we can't add objects to the Object Repository at run time.

Scenario #4:  The application under test has similar types of objects or similarly named objects.
                     OR limitation: the Object Repository will create multiple objects with the same description unnecessarily.

Scenario #5:  The application under test has many objects on which to perform operations.
                     OR limitation: performance decreases if the Object Repository holds a huge number of objects.

Advantages of Descriptive Programming:

1. Version independence: the script can be executed in any version of QTP without changes.

2. Code portability: the code alone is enough to run the script; we can copy and paste it into other scripts for any new requirement.

3. Reusability of properties: we can assign a property string to a global variable and reuse it for objects of the same type.

4. Plug & play: the scripts are always ready to run; there is no need to worry about other settings or files.

5. Maintenance of variables only: store the object properties as variables in a .txt / .vbs file; only that file needs to be maintained (see the sketch below).
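As a small illustration of point 5 above, the property strings can live in one shared .vbs file that every test loads; the file name and values below are assumptions for illustration, reusing the Gmail examples from earlier in this post.

' GmailObjects.vbs - shared file holding the descriptive-programming property strings
Const GMAIL_TITLE = "title:=Gmail: Email from Google"
Const EDIT_EMAIL  = "name:=Email"
Const EDIT_PASSWD = "name:=Passwd"
Const BTN_SIGNIN  = "name:=Sign in"

' In the test, load the file once and reuse the constants in DP statements:
' ExecuteFile "C:\Automation\GmailObjects.vbs"
' Browser(GMAIL_TITLE).Page(GMAIL_TITLE).WebEdit(EDIT_EMAIL).Set "qtpworld.com"
' Browser(GMAIL_TITLE).Page(GMAIL_TITLE).WebEdit(EDIT_PASSWD).Set "qtp"
' Browser(GMAIL_TITLE).Page(GMAIL_TITLE).WebButton(BTN_SIGNIN).Click

If a property value changes, only this one file needs to be updated.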

Child Objects in QTP:

Child Objects in Excel:

The object model used by QTP reflects the way applications are implemented.

Example: the object model hierarchy for MS Excel is Workbook >> Worksheets >> Cells.

Child Objects in Webpage:

GUI controls (objects) are located in container objects, such as a WebEdit in a Web Page, which is in turn contained
within a Browser object. Hence, such a WebEdit is a child object of the Page object, which is in turn a child object of the
Browser object.


Example: Object model hierarchy for Gmail login Web page is Browser>>Page>>Webedit

Child Objects in Window:

Any of the object controls, like a button, checkbox, radio button, combo box, frame etc. of the application, that can be referred to
only from its parent object are called child objects.
Example:

A button may have been placed directly on a window. So, the window will be a parent object and the button will be a
child object. OR
Example:

A group of radiobuttons may have been placed under a frame and that frame is placed on a window. So, the window will
be a parent object of the frame and the frame will be the parent object of the radiobuttons. Radiobuttons will be the child objects.

Example:
Username="start-testing.blogspot.in"
'Password="qtp" 
 
'*****************************Login to gmail account  using  Child Objects *************************************
 
'Launch gmail
systemutil.Run "iexplore.exe","http://www.gmail.com" 
 
'Descriptive object to identify  Browser  with a particular title
Set  Dbrowser=description.Create
Dbrowser("micclass").value="Browser"
Dbrowser("title").value="Gmail: Email from Google" 
 
'Descriptive object to identify  Web page with a particular title
Set  Dpage=description.Create
Dpage("micclass").value="Page"
Dpage("title").value="Gmail: Email from Google" 
 
'Descriptive object to identify a  particular Web Button
Set  Dbutton=description.Create
Dbutton("micclass").value="WebButton"
Dbutton("name").value="Sign in" 
 
'Descriptive object to identify  Web Text Box
Set Dedit=description.Create
Dedit("micclass").value="WebEdit" 
 
'wait till browser loads
 Browser(Dbrowser).Page(Dpage).Sync 
 
'to find number of  webedit in gmail page
Set  Wtextbox = Browser(Dbrowser).Page(Dpage).ChildObjects(Dedit) 
 
NoOfTextbox = Wtextbox.Count
 
  For Counter=0 to NoOfTextbox-1 
 
     If Wtextbox(Counter).getroproperty("name")="Email" then
 
      ' Enter  Email id in Username Field
      Wtextbox(Counter).set   "qtpworld.com" 
 
    Elseif  Wtextbox(Counter).getroproperty("name")="Passwd" then
 
      'Enter password in Password Field
      Wtextbox(Counter).set  "qtp"
 
      End If 
 
  Next 
 
'Click on the Sign In Button 
Browser(Dbrowser).Page(Dpage).WebButton(Dbutton).Click
 
Resource: QTPWORLD

Checkpoints

A checkpoint enables the user to verify whether the AUT is working correctly by comparing the current value of a particular property with the expected value of that property.


Types of Checkpoints in QTP:

1. Standard checkpoint
2. Bitmap checkpoint
3. Text checkpoint
4. Text area checkpoint
5. Database checkpoint
6. XML checkpoint
7. Table checkpoint
8. Image checkpoint
9. Page checkpoint
10. Accessibility checkpoint



1. Standard Checkpoint:

Standard checkpoint enables users to check object property values.

Three ways to insert standard checkpoints:

a. In expert view,
b. In keyword view,
c. In Active screen.

Steps to follow for Inserting standard checkpoint:

QTP should be in recording mode --> Cursor should be placed in the desired location --> Insert menu --> Checkpoint --> Standard checkpoint --> Show the object --> click OK --> select property and enter expected results --> click OK --> Stop recording.

Steps to follow for Editing standard checkpoint:

Identify Checkpoint statement and right click -->Select checkpoint properties option --> Modify the value -->click OK.

Steps to follow for Deleting standard checkpoint:

Identify Checkpoint statements and right click -->choose delete option.

Inserting Standard check points through active screen:

View -->Active Screen -->Cursor should be placed in desired location -->Mouse pointer is placed on active screen--> right click-->choose insert standard checkpoint option -->click OK -->enter expected result -->click OK
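Once inserted, the checkpoint shows up in the Expert View as a Check statement on the test object. A minimal sketch (the object hierarchy and the checkpoint name reuse the Gmail example from this blog and are assumptions for illustration):

' A standard checkpoint as it appears in the Expert View after insertion.
' The expected values themselves are stored with the test, not in this statement.
Browser("Gmail: Email from Google").Page("Gmail: Email from Google").WebEdit("Email").Check CheckPoint("Email")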


2. Bitmap checkpoint:

 Bitmap checkpoint enables user to compare two bitmaps. User can compare complete bitmaps as well as part of the bitmaps.

Steps to follow for Inserting bitmap checkpoint:

QTP should be in Recording mode --> Insert menu --> Checkpoint --> Bitmap checkpoint --> show the Bitmap -->click OK -->select “check only selected area” option if we want to compare part of the bitmap --> click OK -->stop recording.


3. Text Checkpoint:

Text checkpoint enables user to Check object’s text property value in different ways.

Steps to follow for inserting Text checkpoint:

QTP should be in Recording mode -->Insert menu --> checkpoint --> Text checkpoint --> Show the object --> click OK --> Select options --> We can select one or more options --> click OK--> stop Recording.


4. Text Area Checkpoint:

 Text Area checkpoint enables user to check the text area present in the application.

Steps to follow for inserting Text Area Checkpoint:

QTP should be in Recording mode --> Insert menu--> Checkpoint --> Text area checkpoint --> Mark the area of text --> select one or more options --> Click ok --> stop recording.


5. Database checkpoint:

 Database checkpoint enables user to check the Content of the back end Database.

Steps to follow for inserting Database checkpoint:

QTP need not be in Recording mode and we do not need AUT since data is from backend.

Insert --> checkpoint --> Database checkpoint -->choose “specify SQL statement manually” option -->click next --> click create --> select machine data source --> Select DSN (QT_flight32) --> click OK --> enter SQL statement (select * from orders) --> finish --> click OK.


6. XML Check point: 

XML checkpoint enables user to check content of the XML file.

Steps to follow for inserting XML Checkpoint:

QTP should be in Recording mode in a web environment --> Insert menu --> Checkpoint --> XML checkpoint (from application) --> show the XML page --> click OK --> stop recording.


7. Table checkpoint:

 Table checkpoint enables user to check content of the web tables.

Steps to follow for inserting Table checkpoint:

QTP should be in Recording mode under a web environment --> Insert menu --> Checkpoint --> Standard checkpoint --> show the web table --> click OK --> stop recording.


8. Image checkpoint:

Image checkpoint enables user to check the Image property values.

Steps to follow for Inserting Image Checkpoint:

QTP should be in Recording mode with a web environment --> Insert menu --> Checkpoint --> Standard checkpoint --> show the image --> select image --> click OK --> click OK --> stop recording.


9. Page checkpoint:

A Page checkpoint enables the user to check the number of links, the number of images and the loading time of a web page. It is a hidden checkpoint; we can insert it through a standard checkpoint.

Steps to follow for Inserting Page Checkpoint:

QTP should be in Recording mode with a web environment --> Insert menu --> Checkpoint --> Standard checkpoint --> show the web page --> click OK --> click OK --> stop recording.


10. Accessibility checkpoint:

The accessibility checkpoint enables the user to check whether the web pages in our web application are developed according to W3C (World Wide Web Consortium) rules and regulations. It is a configurable checkpoint: we can customize it according to our requirements.

Steps to Configure accessibility checkpoint:

Tools menu-->options-->web -->advanced -->check/uncheck items -->click apply -->click OK

Steps for Inserting Accessibility checkpoint:

Keep tool under recording mode with web environment -->insert-->checkpoint-->accessibility checkpoint-->show the webpage-->click OK-->click OK-->stop recording.


Result Analysis:

a. If an item is available but does not follow the W3C rules, the result is Fail.
b. If an item is available and follows the W3C rules, the result is Pass.
c. If an item is not available, the result is Pass.


Points to remember
A Checkpoint is a confirmation or verification point in which the value of some property which is expected at a particular step is compared with the actual value which is displayed in the application. Based on the expected values Checkpoints are classified as follows

  • Page Checkpoint : A Standard Checkpoint created for a web page can be called a Page Checkpoint.  It is used to check total number of links & images on a web page. Page Checkpoints can be used to check Load Time i.e. time taken to load a web page.
  • Bitmap Checkpoint helps a user in checking the bitmap of an image or a full web page. It does a pixel by pixel comparison between actual and expected images.
  • Image Checkpoint enables you to check properties like the source file location of a web image. Unlike a Bitmap Checkpoint, you cannot check pixels (bitmaps) using an Image Checkpoint.
  • Text Checkpoint is used to check expected text in a web page or application. This text could be from a specific region of the application or a small portion of the displayed text.
  • Accessibility Checkpoints verify compliance with World Wide Web Consortium (W3C) instructions and guidelines for Web-based technology and information systems. These guidelines make it easier for disabled users to access the web.
  • Database Checkpoints create a query at record time, and the database values are stored as expected values. The same query is executed at run time, and the actual and expected values are compared.
  • With a Table Checkpoint, you can dynamically check the contents of the cells of a table (grid) appearing in your environment. You can also check various table properties like row height, cell width and so on. A Table Checkpoint is similar to a Database Checkpoint.
  • Using XML Checkpoints you can verify XML data and XML schema.
Resource: QTP World

Nov 2, 2012

Basics of VBscript

VBScript is the scripting language used in QTP. It was developed by Microsoft. VBScript is a subset of VB (Visual Basic) and VBA (Visual Basic for Applications).
VBScript is also used by other technologies; for example, it is used in ASP (Active Server Pages) for web site development. So we can find plenty of ready-made functions/code written in VBScript on the Internet, which saves QTP script development time.
VBScript accesses the host/system through Microsoft’s Windows Script Host (WSH). We can also use WSH scripts in QTP; they can be used effectively to automate test scenarios such as rebooting the system automatically after certain steps, or locking the system automatically.
QTP recording feature will automatically generate VBscript code while recording the steps.
And, QTP IDE is having ‘Function Generator” for creating the vbscript functions.

VBScript Variables

In VBScript, all variables are of type variant, that can store different types of data.
Rules for VBScript variable names:
  • Must begin with a letter
  • Cannot contain a period (.)
  • Cannot exceed 255 characters
dim will be used for declaring the variable as below.
Dim TestCaseID
The value for this variable can be assigned as below
TestCaseID=”TC1″
Remember to use Option Explicit at the top of your script. Otherwise a new variable will be created automatically if you misspell a variable name when assigning a value to it.
We need to understand scope/lifetime of variable clearly. A variable declared within a function will exist only within that function. That means the variable will be destroyed when exiting the function, and more than one function can have variable with same name. So it is called as Local variable.
So, it is very important to have clear understanding about the scope/lifetime of variable declared/used in Test/Action/function library/datatable/environment.
Array variable can be declared as below.
Dim ArrIDs(10)
The above declaration creates a single-dimension array containing 11 elements, i.e. arrays in VBScript are 0-based.
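A minimal sketch of filling and reading such an array (the values are just examples):

Dim ArrIDs(10)                 ' elements ArrIDs(0) .. ArrIDs(10)
ArrIDs(0) = "TC1"
ArrIDs(1) = "TC2"
ArrIDs(10) = "TC11"

msgbox "Upper bound: " & UBound(ArrIDs)     ' displays 10
msgbox "First element: " & ArrIDs(0)        ' displays TC1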

Operators

Arithmetic operators:
  • Exponentiation: ^
  • Unary negation: -
  • Multiplication: *
  • Division: /
  • Integer division: \
  • Modulus arithmetic: Mod
  • Addition: +
  • Subtraction: -
  • String concatenation: &

Comparison operators:
  • Equality: =
  • Inequality: <>
  • Less than: <
  • Greater than: >
  • Less than or equal to: <=
  • Greater than or equal to: >=
  • Object equivalence: Is

Logical operators:
  • Logical negation: Not
  • Logical conjunction: And
  • Logical disjunction: Or
  • Logical exclusion: Xor
  • Logical equivalence: Eqv
  • Logical implication: Imp

VBScript Procedures


In VBScript, there are two types of procedures:
  • Sub procedure
  • Function procedure
A Sub procedure:
  • is a series of statements, enclosed by the Sub and End Sub statements
  • does not return a value
  • can take arguments
  • without arguments, it must include an empty set of parentheses ()
eg.
Sub displayName()
msgbox("QualityPoint Technologies")
End Sub
or
Sub addvalues(value1,value2)
msgbox(value1+value2)
End Sub
When calling a Sub procedure you can use the Call statement, like this:
Call MyProc(argument)
Or, you can omit the Call statement, like this:
MyProc argument
A Function procedure:
  • is a series of statements, enclosed by the Function and End Function statements
  • can return a value
  • can take arguments
  • without arguments, must include an empty set of parentheses ()
  • returns a value by assigning a value to its name
Find below a Sample function.
Function addvalues(value1,value2)
addvalues=value1+value2
End Function
The above function takes two arguments, adds the two values and then returns the sum. Note that the sum is returned by assigning it to the function name.
The above function can be called as below.
msgbox "Sum value is " & addvalues(1,2)

Conditional Statements


In VBScript we have four conditional statements:
if statement – executes a set of code when a condition is true
(e.g) if i=10 then
msgbox "I am 10"
End if
if…then…else statement – select one of two sets of lines to execute
(e.g.)
if i=10 then
msgbox "I am 10"
else
msgbox "other than 10"
end if
if…then…elseif statement – select one of many sets of lines to execute
(e.g.)
if i=10 then
msgbox "I am 10"
elseif i=11 then
msgbox "I am 11"
else
msgbox "unknown"
end if
select case statement – select one of many sets of lines to execute
select case value
case 1
msgbox "1"
case 2
msgbox "2"
case 3
msgbox "3"
case else
msgbox "other than 1,2 and 3"
end select

Looping Statements

Use the For…Next statement to run a block of code a specified number of times.
e.g
for i = 0 to 5
msgbox("The number is " & i)
next
If you don’t know how many repetitions you want, use a Do…Loop statement.
The Do…Loop statement repeats a block of code while a condition is true, or until a condition becomes true.
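A minimal sketch of both variants (the counter values are just examples):

' Do While: repeats as long as the condition is true
i = 0
Do While i < 5
msgbox "The number is " & i
i = i + 1
Loop

' Do ... Loop Until: repeats until the condition becomes true
i = 0
Do
i = i + 1
Loop Until i = 5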

Built-in Functions

VBScript has many useful built-in functions; refer to the VBScript language reference for the complete list.
InStr, IsNull, LCase, Left, Len, Mid, Now, Replace, Split, UBound, CStr, CreateObject, Date and DatePart are the functions most frequently used in QTP script development.
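A few of them in action (a minimal sketch; the sample string is just an illustration):

s = "QTP,VBScript,Automation"

parts = Split(s, ",")                  ' array of 3 items
msgbox UBound(parts)                   ' 2 (arrays are 0-based)
msgbox InStr(s, "VBScript")            ' 5 - position where the substring starts
msgbox Replace(s, ",", " | ")          ' QTP | VBScript | Automation
msgbox LCase(Left(s, 3))               ' qtp
msgbox "Run started: " & Now           ' current date and time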

Tips to Optimize HP QTP 11.0 Scripts to Yield Better Performance

We use Automated-testing tools to optimize our manual testing processes. But in order to reap full benefits of any automated testing tool, we must know the complete ins and outs of the tool otherwise it would be a huge waste of money spent on automation. We have to learn the automation tool very thoroughly. We also need to learn the language of the automation tool to do coding more effectively and efficiently. I believe that a software testing tool is as good as the person who is actually using it.
I have been getting so many emails from my esteemed readers asking about HP QuickTest Professional tutorials, QTP tips and tricks, etc. Some readers even complain that their QTP scripts are too slow to execute. This time I decided to write a post on how to use QTP more effectively, which means how to make our QTP scripts perform better. In other words, this post will throw light on some points which will optimize your QTP scripts.
Some of the QTP optimization tips can be:
Tip – 1: You should not use a hard-coded wait statement unless absolutely necessary. Instead of the wait statement, you should use either Exist or synchronization (Sync) statements. The wait statement waits for the number of seconds provided; for example, wait(5) will wait for 5 seconds even if the browser reaches a ready state after 2 seconds, which means a waste of 3 seconds. Imagine how much time would be wasted if you have, say, 10 wait statements per script and you are running a batch of 500 scripts. A better alternative is using Sync or Exist statements, for example:
Browser("").Page("").Sync
var=Browser("").Page("").Exist(2)
Never use the Exist statement without a value, as it will then take the default object synchronization timeout value from the QTP settings (you can navigate to these settings from File->Settings and then go to the Run tab). So use Exist(0) instead of Exist(10). Moreover, I suggest setting the global object synchronization timeout to 1 second.
Tip – 2: Use declared variables instead of creating variables on the fly. To enforce this in your scripts, use the Option Explicit statement, which forces variable declaration. Scripts using declared variables also perform a bit faster. Note that if you use Option Explicit, it has to be the very first line of the code, otherwise you will get an error.
Tip – 3: Using QTP for a longer period of time has a direct impact on performance. It has been observed that a lot of Random Access Memory (RAM) gets consumed if QTP runs scripts for a prolonged time. QTP starts eating system memory (a memory leak), and sooner or later it will hang and we will be required to kill the qtpro.exe process and restart QTP all over again. In such a case, my suggestion is to use QTP on computers with a particularly good amount of RAM and an equally good clock speed.
Tip – 4: Do not load all addins while opening QTP. Use only the addins, which are required. This directly impacts QTP performance.
Tip – 5: I have personally experienced that opening QTP through a vbs file is faster than loading QTP through the icon.
Tip – 6: Switch to fast run mode. You can view this option in Tools->Options->Run. In fast mode, QuickTest Professional does not display the execution marker. In case you are running your scripts from Quality Center or QC, by default they will be run through fast mode.
Tip – 7: Disable the Smart identification feature.
Tip – 8: Switch off the video record option unless required as it will require fewer system resources. You can see this Option in QTP by navigating here Tools->Options->Run.
Tip – 9: Use of the Active Screen feature should be avoided to improve QTP performance. To disable the Active Screen in QTP 11, go to Tools->Options->Active Screen and set the capture level to "None".
Tip – 10: Instead of keeping the entire code in the same script, try to increase modularity by creating reusable components (Actions or functions) so that the code size is reduced and the code is easier to maintain.
Tip – 11: Destroy the objects when you no longer need them. As objects take up relatively large amount of system resources, it is better to destroy them when you don’t need them anymore.
For example, refer the following code: Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objRootFolder = objFSO.GetFolder("C:\")
Set objFSO = Nothing
Msgbox "The folder was last modified on :"& objRootFolder.DateLastModified
Set objRootFolder=Nothing
Notice from the above code that we need the objFSO object just to retrieve a handle to the "C:\" folder. As soon as you have the handle, you no longer need objFSO, so instead of destroying this object reference on the last line, you should destroy it as soon as you don't need it anymore. Follow the principle of limiting object lifetime as much as possible.
Tip – 12: Creating an object reference improves performance. For example, refer to the following QTP code:
Set oEdit = Browser("Google").Page("Google").WebEdit("q")
oEdit.Set "Optimize QTP Scripts"
The above code performs better than the single statement Browser("Google").Page("Google").WebEdit("q").Set "Optimize QTP Scripts". You will not see a major performance difference in this two-line snippet; to see a noticeable difference you need hundreds of lines of QTP code. The reason is that setting an object reference reduces the calls to the Object Repository.
Tip – 13: I have also seen that using the With statement in HP QTP increases performance, but only to a very small extent. To have With statements generated from the QTP IDE, navigate to Tools->Options->General and select the option "Automatically Generate "With" statements after recording".
Tip – 14: Having too many objects in the Object repository/shared object repository slows down the QTP scripts. So the best option is to have only the desired objects in the Object Repository.
Tip – 15: Use local variables in functions rather than using global variables. The best practice is to limit the scope of a variable as much as possible.
Tip – 16: Try to make sure that your QTP code does not keep waiting for events that have already occurred. For example, see the QTP code below:
iTimer = Timer
objWindow.Close
  Do
    If Dialog("micclass:=Dialog").Exist(0) Then _
        Dialog("micclass:=Dialog").Type MicEsc
  Loop Until (Not objWindow.Exist) Or (Timer-iTimer>20)
Instead of a hard wait of 20 seconds for the event to occur, you should have a loop similar to the one above. It will continue to loop until either condition is met: the timer has crossed 20 seconds, or the window no longer exists.
I hope the above article has touched on quite a few QTP optimization tips. I don't claim to be an expert on QTP; I have simply tried to share the above tips based on my experience with this great tool. Please feel free to add points in the comment section if I happen to have missed any. I will be glad to update this article, giving appropriate credit.
Resource:ST genius.