Dec 21, 2014

Making a List, Checking It Twice: A Testing Strategy for the Holidays

Along with the festivities of the holiday season comes the joy of shopping. We have so many options today: online, mobile, and in-store. Changing consumer behavior and demographics mandate that retail technology continuously evolve at a very fast rate.
This poses immense challenges for the testers in this industry, who have to ensure that all channels offer a customized experience with high performance and quality in a very short time. Testing the right systems and integrated touch points, optimizing test coverage, using the right tools, benchmarking user performance, and ensuring multichannel scalability and security are core components of a holiday testing strategy.
Below are some key business and technology testing imperatives.
Financial systems testing: Test scenarios include ensuring sufficient cash flows are available for business and identifying the most profitable items. In particular, understanding scenarios relating to real-time sales performance by channel, store, geography, demographics, and inventory is critical. This is the most complex piece for testing because it necessitates significant domain knowledge.
Retail analytics systems testing: This includes functional and real-time performance scenarios of "big data," combining data from web browsing patterns, social media, industry forecasts, existing customer records, etc., to predict trends, pinpoint customers, and optimize pricing and promotions. Creating real-time data with the right set of tools and a team who understands analytics are essential for this type of testing.
Supply chain systems testing: Effective supply chain management (SCM) is the key element for a retailer to maintain profitability. Testing of the SCM IT systems in a real-time, production-like environment is crucial. Any error in order management or merchandise tracking has huge potential to bring the business down in minutes.
Online channels testing: In 2013, consumers spent $1.198 billion in desktop online sales on Black Friday. Testing all online channels for user experience, be it webpages or mobile, for functional and real-time performance is important. This is also the area that offers the most scope for automated testing, including functional, mobile, security, compatibility, and performance testing.
Merchandising software testing: This includes test scenarios for forecasting and auto-replenishment of store inventory.
In-store testing: This primarily involves regression testing of point-of-sale systems and the connectivity to payment systems, inventory, and logistics, particularly for performance and security.
Testing the right elements by understanding the ever-changing consumer and mastering the integrations of all the technology components is the key to building a successful testing strategy for the holiday season.

Source: TechWell

QA and Testing Trends for 2014-2015

The year 2013 has come to an end, and most IT professionals and businesses want to know how the Quality Assurance and Testing domain fared last year. I am sure they are also eager to find out how Quality Assurance will fare in 2014. Let’s begin with some interesting findings from a recent survey conducted by Gartner. Here is my observation and analysis of this very specialized domain.

Cost Optimization

Cost optimization will remain a key focus for the QA industry in 2014. QA functions are adapting to business demands by streamlining and centralizing their structures to achieve higher efficiency and better cost optimization. With this in mind, businesses have been allocating more budget to QA than to traditional IT functions: QA and testing budgets have increased by 4-6%, whereas IT budgets are declining by 2-3% a year.

Process Optimization

The IT industry is witnessing a gradual transition: there is now an increased focus on QA and testing rather than on core product development alone. Going forward, QA and testing will be more process centric, and businesses will be keen to involve themselves in the entire testing process. We can expect a collaborative, hybrid working model with active involvement from business clients.

Technology Evolution

Mobile technology is evolving in quantum leaps and has made its impact felt in many aspects of our lives. Businesses are developing and launching mobile applications at frequent intervals, but they lack enough mobile testing specialists to test those apps, resulting in increased complaints and bug reports from end users. The industry is looking for:
  • Streamlined and stabilized processes where the scope of mobile testing is broader.
  • Test methods that help IT firms offer mobile testing solutions and strengthen the base for testing functions.
  • Testing partners and professionals with experience across multiple platforms and devices, cloud-based design, and network simulation.

Existing Challenges

Some of the other major challenges that were widely reported in the mobile testing domain were:
  • Absence of mature testing tools
  • Test automation is still a big challenge
  • Lack of concrete solutions for security and performance

Emerging Technologies

The next big buzzword of the year is CLOUD. Many business enterprises are still reluctant to adopt cloud computing, but it is going to be the future of computing. Businesses generally prefer the traditional way of testing applications, not realizing that cloud testing will offer them better solutions than conventional testing methodologies.

Cloud Testing

Cloud testing has slowed its pace due to various challenges centered on security and performance, which I am sure will soon be addressed by the industry. Businesses are keen on investing in hardware and software, but this will not help them much, as there is an immense shortage of cloud testing professionals relative to the increased demand. I am confident all of these challenges will be addressed this year.

Agile Environment

As we are all aware, Agile development is now a widely adopted practice, but in the testing domain it remains a challenge. Agile testing, and testing methodologies suited to an Agile environment in particular, is a sphere yet to be fully explored. Most software testing professionals cite various challenges when performing Agile testing. The most critical challenges commonly reported include:
  • Lack of adequate testing approach
  • Difficulties in identifying the focus area for testing

Test Center of Excellence (TCOE)

The Test Center of Excellence is going to play a crucial role in deciding the future of software testing vendors. Statistics show that businesses are very keen on this emerging trend and are looking for IT partners with fully operational TCOEs. It normally takes 3-6 months to build a TCOE and about three years to reap its benefits. If you are planning to create a TCOE, now is the time to initiate the process.

The Future of QA & Testing Domain

A plethora of opportunities is being created for software testing with each passing day, and this will certainly help the Indian IT firms involved in software testing. Verticals such as e-commerce, cloud computing, and mobile application development are going to provide positive impetus to both the development and testing fraternities. It’s time for us to gear up to meet the challenges and reap the rewards of the software testing market in 2014.

Source: Evoke Tech

Sep 2, 2014

What is Agile Software Development and how does it impact testing?

Agile Software Development generally refers to incremental, collaborative software development approaches that provide alternatives to 'heavyweight', documentation-driven, waterfall-style development practices. It grew out of approaches such as Extreme Programming, Scrum, DSDM, Crystal, and other 'lightweight' methodologies. In 2001 a group of software development and test practitioners gathered to discuss lightweight methods and created the 'Agile Manifesto', which describes the values of the Agile approach and lists 12 principles of Agile software development. In reality, many organizations implement these principles to widely varying degrees (and with widely varying degrees of success) and still call their approach 'Agile'.
The impact of Agile approaches on software testing can also vary widely but often includes the following:
  • Requirements and documentation are often minimal and, when present, often take the form of high-level 'user stories' and 'acceptance tests'.
  • Requirements can be added or changed often.
  • Iterative development/test cycles ('sprints') are often in the range of 1-3 weeks. Both new functionality testing and regression testing (preferably automated) may occur within each iterative cycle.
  • Close collaboration between testers, developers, product owners and other team members.
  • Short daily project status 'standup' meetings that include testers.
  • Common testing-related practices in agile projects may include test-driven development, extensive unit testing and unit test automation, API-level test automation, exploratory and session-based testing, and UI test automation.
  • Testers may be heavily involved in fleshing out requirements details, including both functional and non-functional requirements.
For more info on Agile and other 'lightweight' software development approaches (XP, Scrum, Crystal, etc.) see the resource listings in the 'Agile Testing Resources' section of this web site.
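The test-driven development practice mentioned above can be sketched in a few lines of Python: the tests are written first, and the production code is then shaped to make them pass. The `apply_discount` function and its rules are hypothetical, purely for illustration.

```python
# Tests written first (red), then the implementation (green).
def test_discount_applied():
    assert apply_discount(100.0, 15) == 85.0

def test_invalid_percent_rejected():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        return
    raise AssertionError("expected ValueError for an out-of-range percent")

def apply_discount(price, percent):
    """Return the price after a percentage discount, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# In a real project a test runner (e.g. pytest) would collect these.
test_discount_applied()
test_invalid_percent_rejected()
```

Running the tests before the implementation exists (the "red" step) fails; adding the function turns them green, which is the rhythm TDD relies on.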

Jun 28, 2014

The test automation divergence

Over the last few years I have seen most of the organisations I work with choose one of two main emerging paths within the test automation world. The first path is one I like to call “Collaborative Empowerment” and the second I call “Domain Driven Testing”.

Collaborative Empowerment
It can be argued that the first path empowers the tester, encouraging them to be as technical as possible while promoting cross-team collaboration through a universally understood language (within that team or organisation, at least). This school of thought assumes that testers can code, in some cases better than the development team, and can communicate and write specifications equally well. What I have just described is probably the perfect cross-functional resource, able to fill any role in a team and perhaps considered its fulcrum. This path is now known as Behaviour Driven Development (BDD) or Specification by Example (SBE). It has given rise to a raft of tools like JBehave, SpecFlow and Cucumber, and was first adopted by truly agile teams the world over to assist collaboration and produce automation as a by-product.
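To make the BDD idea concrete, here is a minimal, self-contained sketch in Python: a toy step registry standing in for what Cucumber or SpecFlow provide, with plain-language steps driving the automation. The shopping-cart feature and step phrases are invented for illustration.

```python
import re

STEPS = []

def step(pattern):
    """Decorator registering a step implementation for a Gherkin-style phrase."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

def run_scenario(text, ctx):
    """Match each scenario line against the step registry and execute it."""
    for line in filter(None, (l.strip() for l in text.splitlines())):
        phrase = re.sub(r"^(Given|When|Then|And)\s+", "", line)
        for pattern, fn in STEPS:
            m = pattern.fullmatch(phrase)
            if m:
                fn(ctx, *m.groups())
                break
        else:
            raise AssertionError("no step matches: " + phrase)

# Step definitions for a hypothetical shopping-cart feature.
@step(r"an empty cart")
def given_empty_cart(ctx):
    ctx["cart"] = []

@step(r"I add (\d+) items?")
def when_add(ctx, n):
    ctx["cart"].extend(["item"] * int(n))

@step(r"the cart holds (\d+) items?")
def then_holds(ctx, n):
    assert len(ctx["cart"]) == int(n)

scenario = """
    Given an empty cart
    When I add 2 items
    And I add 1 item
    Then the cart holds 3 items
"""

ctx = {}
run_scenario(scenario, ctx)
```

The point of the real tools is exactly this mapping: the business-readable specification is the test, and the automation falls out as a by-product.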

Organisations adopting this test automation path (whether as a goal or a by-product, preferably the latter) are generally more responsive to change and can adapt more quickly. You will find it is commonplace in smaller organisations and tech companies. These companies are able to adapt and deliver to market in less time than larger, more rigid companies for a number of reasons: a lack of regulation in their industry, a smaller gap between the stakeholder/product owner and the software team, or a less rigid process, which means there are fewer restrictions on adopting newer processes and funnelling change through more quickly.

Domain Driven Testing
Domain Driven Testing assumes the tester is a master of the business domain they are testing in, and attempts to remove as much technical know-how as possible from the tester's path to the desired automation. An example of this is keyword-driven test automation, where the tester, in theory, does not have to write reams of code to achieve the end state, but merely uses keywords to generate the code and drive the actions to be performed against the application under test.
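A minimal keyword-driven sketch in Python might look like the following; the keywords ("Open Page", "Type Text", "Verify Field") and their implementations are invented stand-ins for what a commercial tool would supply, and a real framework would drive a browser rather than a dictionary.

```python
# The tester writes rows of (keyword, arguments) and never touches
# the code behind the keywords.

def open_page(state, url):
    state["page"] = url  # a real framework would launch a browser here

def type_text(state, field, value):
    state.setdefault("fields", {})[field] = value

def verify_field(state, field, expected):
    actual = state.get("fields", {}).get(field)
    assert actual == expected, f"{field}: expected {expected!r}, got {actual!r}"

KEYWORDS = {
    "Open Page": open_page,
    "Type Text": type_text,
    "Verify Field": verify_field,
}

def run_table(rows):
    """Execute a keyword table row by row against a shared state."""
    state = {}
    for keyword, *args in rows:
        KEYWORDS[keyword](state, *args)
    return state

test_case = [
    ("Open Page", "https://example.test/login"),
    ("Type Text", "username", "alice"),
    ("Verify Field", "username", "alice"),
]
run_table(test_case)
```

The table is the artefact the domain-expert tester owns; the specialist automation resource maintains the keyword implementations behind it.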

There are a rising number of tools which do this and it is an easy sell, especially for anyone who has tried to hire “that perfect cross functional resource”. Undoubtedly, however, a smaller number of specialist automation resources will still be required to cater for those instances where the application under test annoyingly does not fit within the set of keywords or functions already defined within this tool.

The organisations adopting this trend tend to be more rigid and process driven than their counterparts adopting Collaborative Empowerment. Whether because of regulation, distribution of resources, or a lack of skilled resources, they believe that to achieve their automation goal the tester should focus on the testing and not be distracted by having to write lots of code. Having the tester focus on the domain at hand can have clear benefits, especially if the domain they are working in is particularly complex or specialist. It can be argued that these tools increase productivity, speed up automation implementation times and may improve return on investment more quickly than the alternative.

The jury is out on which path will win the race. At the moment, anecdotal evidence suggests that more organisations are striving to adopt Collaborative Empowerment, mainly due to the benefit of promoting communication and quality in equal parts and the automation falling out as a by-product at the time of development. The trend is also that the tools supporting these methods or processes are largely open source, whilst the tools which generate automation for the tester tend to be expensive.
Domain Driven Testing can even encourage further disassociation between the software development team and testing, with testing happening at the end of the process rather than at the point of development.

Source: TNO

Tricentis integrates Tosca Mobile+ testing

Tricentis, a global leader in enterprise software testing solutions that accelerate business innovation, today announced that it joined the SAP(R) PartnerEdge(R) program for Application Development. Tricentis Tosca Testsuite, Tricentis’ next-generation model-based testing product, is now available on the SAP Store, SAP’s online store with mobile apps, downloads, and demos from SAP and partners, making it easy for SAP customers to purchase Tosca.
During the past several years, Tricentis has helped more than 50 SAP customers globally — including A1 Telekom Austria, Allianz, BMW, Vienna Insurance Group, and OMV Group — to ensure the reliability of their software while still meeting ambitious software release timelines. SAP customers can leverage Tricentis’ next-generation model-based testing product, Tosca Mobile+, to eliminate application failure risk, accelerate their time to market, and decrease the costs required to test mobile applications from SAP.
Tosca Mobile+ enables SAP customers to test mobile apps from SAP with an end-to-end, model-based approach that has helped organizations achieve application risk coverage of more than 90 percent, a dramatic improvement over other methodologies. Tosca Mobile+ builds on Tosca Testsuite, which gives software test teams unprecedented power to measure, manage, and control risk coverage while replacing manual test scenarios with step-by-step state-of-the-art automated regression tests. Tosca Testsuite integrates with Application Lifecycle Management (ALM), and other testing solutions, and interfaces with the full range of applications demanded by today’s enterprise environments.
“As enterprise customers upgrade their SAP(R) systems and install service packs, there is a need for extensive testing of all their custom applications, a large majority of which are often mobile apps, which is time consuming and costly,” said Sandeep Johri, CEO, Tricentis. “Tosca accelerates the testing process and drastically reduces testing costs through model-based test automation of the end-to-end SAP environment.”

Source: MarketWatch

Mar 10, 2014

Experience as a QA in Scrum

Scrum is an Agile approach to software development, which focuses on delivering valuable business features in short development iterations of 2 to 4 weeks. Scrum teams have two defining characteristics – they are cross-functional (i.e. they include every skill set necessary to get the job done) and they are self-organizing (i.e. the team is expected to figure out how best to get the job done). After working for nearly two years as a quality assurance (QA) analyst on a Scrum team, I have learned that the role of QA in Scrum is much more than just writing test cases and reporting bugs to the team.
Contrary to the synchronous activities of a traditional Waterfall project, Scrum expects development activities to be performed in the order they are needed – i.e. asynchronously. This break from tradition raises a common question by many of the clients, developers, and business stakeholders new to Agile – “How can testing professionals engage effectively during a sprint before anything has been built?” This article focuses on explaining how the QA role performs agile testing and the place of importance they hold on a Scrum team. Much of what I have learned about the roles and responsibilities of QA on a Scrum team I will be sharing with you throughout this article.

More Than Just Building Test Cases

On traditional Waterfall projects the QA role is only involved at the very end of the project – once all coding is complete. On these projects, the QA role would typically be given a requirements document and the completed code and would be expected to write and execute test cases which verify that the application does exactly what the requirements document says. However, the QA role in Scrum is not just about executing test cases and reporting bugs.
On a Scrum team the QA Analysts participate and fulfill a variety of responsibilities conjointly with other team members. They are involved right from the very beginning of a project where they work closely with business analysts and developers. In Scrum, the QA role is not a separate team that tests the application being built. Instead the Scrum team is a cross-functional team where developers, business analysts and QAs all work together. Apart from building test cases, QAs can also help the Product Owner (PO) write acceptance test cases by playing the role of proxy product owner. Having QA fill in as a proxy for the Product Owner when they are not available helps keep the team moving forward at all times. They can also interact with the Product Owner by asking questions and challenging assumptions to help clarify the business requirements.

Participate In Estimating Stories

Quality assurance analysts are typically very good at creating test case scenarios based on user requirements. They also excel at identifying and capturing complex and negative test case scenarios. In fact, they are typically much better at it than developers, who tend to focus mostly on the “happy path” of the user story. Including testers during Release and Iteration Planning, when user stories are estimated, can help the team think beyond the happy path. This helps the team produce a more realistic estimate since both “happy” and “unhappy” paths have been considered. Estimation is a tough task and it is good practice to have the whole team participate in coming up with the estimate.

Help Keep Vision And Goals Visible

As the team works through the testing and stabilization activities, QAs should take the lead to plan, organize and involve the whole team in the testing work and to keep people motivated, just as the Scrum Master (SM) does. Since very few developers enjoy testing work, the QAs, along with the Scrum Master, must make the testing vision and goals visible to the whole team and help keep the team's motivation high. Sometimes it helps to be creative when testing scenarios require help from developers or other team members. Try making the testing activity fun by using amusing test scenarios, funny test data, fun competitions, etc. Do what you can to help the team enjoy and contribute to the testing work.

Collaborate With Customers And Developers

One of the primary responsibilities for the QA role is to provide feedback from their testing experience to the Product Owner as well as collecting feedback from them. QAs work closely with the Product Owner to help them develop detailed acceptance criteria for their user stories. Based on what the team learns during each sprint, QAs can also help the Product Owner modify or enhance existing user stories to better reflect the true requirements.
On occasion, the quality assurance analyst may be asked to act as a proxy for the Product Owner. In these situations, QAs and developers will sit and work together as a team to help improve the quality of the project. The QAs can pair up with developers for writing unit test cases and for discussing acceptance criteria. The more these roles work together, the greater the shared clarity will be on requirements. The increased clarity that results from working together will reduce the questions and doubts developers often encounter during coding time, which produces greater efficiency and a big time savings for developers and testers alike.
The whole team should be expected to pitch in and assist with testing whenever required based on the needs and the availability of team members. This practice helps create balance in the team and a shared responsibility for getting the work completed. It also helps produce the required pace to move faster with early testing feedback and increased quality.

Provide Fast Feedback

The build-test-fix cycle that traditional Waterfall teams repeat endlessly produces a lot of extra work for the team and usually ends up wasting a lot of time. This activity is much simpler in Scrum as QAs and developers work together throughout the entire process. Developers can consult the QAs about acceptance criteria or the expected behavior of any functionality from the user's perspective while working on the feature, which results in saved testing and bug fixing cycles.

Automate Regression Testing

It is often said that automation is the tester’s best friend since it provides repeatability, consistency, and better test coverage over the software’s functionality. This is particularly true on a Scrum project with small sprint cycles of 2-4 weeks, since QA generally has even less time to test the application. During each 2-week sprint, QA must perform full functionality testing of the new features being added during this sprint as well as perform full regression testing for all the previously implemented functionality. As would be expected, this responsibility grows significantly with each passing sprint so any automation that can be applied to these tests would greatly reduce the pressure the QAs feel.
Automated tests are particularly helpful in providing rapid feedback when teams implement Continuous Integration (CI). Each time there is a new build, the automated tests can be executed and provide immediate feedback as to whether the new features are working correctly or whether there have been any regressions of previously working functionality. Without automation, QA must perform these tests manually, which becomes a very monotonous and error prone task. Automation can help detect the defects early and give QA more time to explore the edge cases of the new features being developed. Automation can help QA deliver testing work much more efficiently and effectively.
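The CI feedback loop described above can be sketched in a few lines of Python: a regression suite covering functionality shipped in an earlier sprint (the `total_price` function here is hypothetical) is loaded and run programmatically on every build, and its result would gate whether the build passes.

```python
import io
import unittest

def total_price(items):
    """Behavior shipped in an earlier sprint; it must not regress."""
    return sum(qty * price for qty, price in items)

class RegressionSuite(unittest.TestCase):
    def test_known_good_order(self):
        self.assertEqual(total_price([(2, 5.0), (1, 3.5)]), 13.5)

    def test_empty_order(self):
        self.assertEqual(total_price([]), 0)

# On each new build, CI loads and runs the whole regression suite.
# result.wasSuccessful() would decide whether the build is allowed to pass.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
```

Because the suite runs automatically on every commit, a regression surfaces within minutes of the change that caused it, instead of during a manual regression pass at the end of the sprint.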

Participate In Release Readiness/Demos

At the end of each sprint, the team holds a Sprint Review meeting where the team must demonstrate the user stories completed during the sprint to the Product Owner and other interested stakeholders. This Sprint Review meeting provides a healthy dose of accountability to the team and motivates them to complete as many user stories as possible.
With small sprint cycles of 2-4 weeks, everyone on the team must stay focused on his or her respective tasks in order to complete them on time. Developers stay busy developing their assigned user stories and fixing bugs while QA stays busy writing test cases, clarifying questions from Product Owners, and automating previous sprint stories. Having short sprint cycles also means that developers have less time to explore the complete functionality of their user stories on their own. As a result, developers typically consult with QA to better understand the user stories since they are the ones aware of the complete functionality and know each and every requirement and acceptance criteria. As a result, it can be a good practice for QA to perform the demo at the Sprint Review meeting and field functional questions coming from business. That can free up the developers to handle any technical questions that surface.

Analyze User Requirements

QAs often act as proxy product owners on the Scrum team. They are generally good at understanding the business requirements from the user's perspective since they are often asked to use the application as the end users would. QA can provide feedback to the Product Owner based on their testing experiences and can help the Product Owner better understand the application from the end user's point of view.

Enforce the Definition of Done

Having a clear Definition of Done (DoD) is important to a Scrum team. A DoD is nothing more than a simple list of team defined completion criteria - all of which must be completed before the user story can be considered done. This list typically includes things such as writing code, performing functional and regression tests, and gaining the approval of the Product Owner. A very simple DoD might include the following:
  • Code Complete
  • Unit Test complete
  • Functional / UI Test Complete
  • Approved by Product Owner
While it’s not the sole responsibility of QA to define the DoD, it is often QA’s responsibility to monitor the work being performed by the team and to ensure that each completed user story meets the benchmark DoD. An efficient Scrum team will review its DoD before starting each new user story to make sure everyone knows what is expected. A team’s Definition of Done is not static and may evolve over time as the team’s needs evolve. DoDs can be defined for sprints and releases as well.
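One way to picture enforcing a DoD, offered as a rough sketch rather than a prescribed practice, is to treat the checklist as data and count a story as done only when every criterion is ticked; the story and criteria below are invented for illustration.

```python
# The team-defined completion criteria, mirroring the sample DoD above.
DEFINITION_OF_DONE = [
    "code complete",
    "unit tests complete",
    "functional/UI tests complete",
    "approved by product owner",
]

def is_done(story):
    """A story is done only if every DoD criterion is satisfied."""
    return all(story.get("criteria", {}).get(c, False) for c in DEFINITION_OF_DONE)

story = {
    "title": "Checkout supports gift cards",
    "criteria": {
        "code complete": True,
        "unit tests complete": True,
        "functional/UI tests complete": True,
        "approved by product owner": False,  # PO has not signed off yet
    },
}
```

With the Product Owner's approval still missing, `is_done(story)` stays false, which is exactly the check QA applies before a story is accepted at the Sprint Review.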

Always Plan Testing With Testing Strategies

Since there is no test lead or even a dedicated test team in Scrum, building a test plan or following specific test strategies on a Scrum team can be an issue. Scrum believes in preparing only enough documentation to support the immediate needs of the team. As such, QA will prepare just enough high-level documentation for test strategies and plans to guide the team. Since there are no QA leads in Scrum, the QA analyst typically decides the test strategies.

Tester and Analyst Roles Converging

On Scrum teams it is common to see the responsibilities of QA and those of the business analyst begin to converge. The Business Analyst role is typically responsible for creating and maintaining the sprint and product backlogs, analyzing the user stories from the business perspective, and prioritizing the backlogs with input from the Product Owner. The QA role, on the other hand, is typically responsible for defining and refining the acceptance criteria for each user story, testing the completed functionality each sprint from the end user's perspective, and ensuring all previously completed functionality has not regressed. As QA tests each user story and verifies that the defined acceptance criteria have been met from the end user's perspective, they are also analyzing the user stories in terms of the business. So, in many ways the QA role and the business analyst role share many of the same responsibilities, required skills, and overall objectives.
Normally, a Scrum team gets their user stories and the pre-defined project scope from the Product Owner at the beginning of a project. However, in Scrum the team is encouraged to suggest new features or changes to existing features if they will improve the end user's experience. The team is also encouraged to recommend changes to the priority or sequencing of user stories in the backlog when they find technical dependencies that suggest a different implementation order would be more efficient. Whether it's defining requirements, analyzing user stories, defining and clarifying acceptance criteria, building acceptance test cases, or working closely with customers, the roles of tester and analyst are clearly converging. While this offers many advantages, particularly for small teams, it also has its disadvantages. The biggest concern is that neither the tester nor the business analyst role will get as much attention as it would with a team member fully dedicated to that responsibility.

Conclusion

While QAs still write tests and report bugs, they also support many other roles and responsibilities on the team. They are an important part of the team and are involved in the project right from the very beginning.
Working as a QA Analyst on a Scrum team for the past two years has been a great experience and has provided many learning opportunities. I have filled many different roles and responsibilities including QA analyst, proxy product owner, helping developers write unit test cases, acting as the team's quality conscience, and keeping track of problems and software bugs. In short, this experience has added many wonderful skills to my toolbox and has helped me learn how to play many different roles – all at the same time. Most importantly it has taught me to ask questions rather than just follow the documentation and do whatever it takes to help the team succeed.

Source: InfoQ

Feb 28, 2014

Report: The State of Mobile App Development and Testing 2014


Mobile is everywhere. If you want to be successful in your domain, you need to bring your business to the mobile world too, because mobile device usage keeps growing and people want to keep using web applications on their mobile devices while on the move. According to Flurry, the leading domain in this trend is social media, such as Facebook, Twitter, Foursquare, and many others; the other popular domains are, respectively, utilities, entertainment, e-commerce, and games.

SmartBear has published a report on mobile application development and testing. Before analyzing the findings, let's look at what SmartBear does and how it might benefit from this research. SmartBear is a software company focused mainly on software development, software quality, and system management tools, with more than ten well-known products on the market. For software quality and testing, SoapUI, LoadUI, and TestComplete are its best-known tools; based on my own experience, they are very effective and stable. For mobile work, SoapUI is useful for testing APIs and automating regression and functional test cases, and a SoapUI license costs less than $400 per year. They also offer many open-source and free tools, though the free versions typically have limited features; you can find the complete list of tools from this link. In brief, SmartBear sells tools for mobile application testing, so the growth in mobile development and the increasing attention to mobile application quality may also increase SmartBear's revenue. This research may well reflect reality, but be aware that it may also steer us toward seeing what SmartBear wants us to see in the mobile market.

From the report, the key points can be summarized as follows:

    30% of application development firms develop mobile applications.
    More than half of mobile development firms started mobile development within the last two years.
        This indicates a clear trend toward mobile application development.
        84% of firms not yet developing mobile applications plan to enter the mobile market.
    Challenges in mobile development:
        The top challenge is product quality, at 20%.
        Profitability is the fifth-ranked challenge, at 11%.
            Perhaps firms think simply being in the mobile world is the key factor, but then what?
        Competition is cited by only 7%.
            Whoever is in the mobile world wins? There is almost no rival yet, but then what?
    Defects are a real problem for users:
        19% of users delete an application immediately after hitting a defect.
        30% wait, then delete it if the defect is not fixed.
    Testing types:
        Manual vs. automated testing: 28% vs. 18%.
        API testing accounts for 18%.
            If the API doesn't work, what reason is there to test the application at all?
    Who tests the application:
        Developers and testers together: 64%.
        Testers only: 22%.
        Developers only: 8%.
            Testers cannot test everything in a mobile application.
            Developers also have testing responsibilities, such as unit tests.
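Given the report's weight on API testing and the manual-vs-automated split, even a tiny automated API smoke check is worth sketching. The snippet below is an illustrative sketch only; the required fields and the simulated responses are my own placeholders, not from the report, and the check runs against canned data rather than a live server:

```python
import json

# Hypothetical smoke check of the kind the report counts as automated
# API testing: verify status code and required fields in one response.
REQUIRED_FIELDS = {"id", "name", "price"}

def check_product_response(status_code, body):
    """Return a list of problems found in one API response; empty means OK."""
    problems = []
    if status_code != 200:
        problems.append("unexpected status %d" % status_code)
        return problems
    payload = json.loads(body)
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        problems.append("missing fields: %s" % ", ".join(sorted(missing)))
    return problems

# Simulated responses, so the check runs without a live server.
print(check_product_response(200, '{"id": 1, "name": "mug", "price": 4.5}'))  # []
print(check_product_response(200, '{"id": 1}'))
```

A check like this can run after every build, which is exactly where automated API testing earns its 18%.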

You can get the report from:

http://www2.smartbear.com/inbound-testcomplete-state-of-mobile-testing-webinar-website.html?&_ga=1.1394231.742788738.1391667943.

Read more at http://www.testrisk.com/2014/02/report-state-of-mobile-app-development.html#Qcbvi6kdQc1BAfRp.99

Jan 9, 2014

The Bugs That Deceived Me

When I started my software development career, I was introduced to the big QA database. “The bug store” was where the testers stored all the bugs they found as well as those found by customers. I never thought there was another way to work, until I moved to Typemock.
As a startup, we could choose whatever tools we wanted, and in the beginning, we used a wiki. Later on, as the product grew in features, and thankfully in customers, we started looking for other tools. When I became a product manager, I decided the best way to deal with bugs was an Excel file.
As much as I’d like to dismiss the big bad bug database (it’s not an “agile” tool), I can see a lot of resemblance between the two. It’s not about how the tool manages the information, it’s how we perceive it. Every time we look at the data, we perform an analysis that helps us make decisions, hopefully the right ones.
It is possible to make wrong decisions. Along the way, I’ve picked up a few traps where bug data misled me into making bad decisions. These traps are in the data itself, not the tools, and can lead us in the wrong direction.

The Age of Bugs
When we start testing a new project, all bugs are comparable. That means that we can apply our analysis at the same moment in time. We can differentiate between high-priority and low-priority bugs and decide to fix the former, because at the time of our analysis, the former looked like a must-fix.
Of the products I have tested, the worst versions always seem to be the initial versions. I’ve had a few of those projects, and the bug databases quickly filled with high-priority bugs; we didn’t get to fix all of them, either. We “managed scope,” cut some corners, and released a product. We also didn’t have the nerve to remove the high-priority (and sometimes the low-priority) bugs from our database. A year later, we still had high-priority bugs in our system.
Yes, the database was cluttered. The more open bugs we had, the longer that triage and re-analysis (or “grooming” in agile-speak) took. The real problem was the list of high-priority bugs contained both old and new high-priority bugs. The truth is, of course, that the old ones were not really high priority, but we still compared them as if they were.
The logical way to deal with these bugs, and the one I’ve adopted through the years, is to go back and re-prioritize. Recently, though, I’m more inclined to delete any bugs that I don’t see us handling in the very near future. Some don’t even go into the database, because the team closes the loop quickly and decides together that these bugs can wait. It’s still a struggle, both internal and within the team (“We need to keep this, so we don’t forget how important it is”).
Big databases seem bigger every time you go back to them. Make them as small as possible by removing the less important stuff. It may take some cold decisions, but it will focus the team on what’s important.
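One mechanical way to support that re-prioritization is to flag "high priority" bugs that have sat untouched long enough that their priority is probably stale. This is a minimal sketch, not from the article: the 90-day cutoff and the bug fields are my own assumptions.

```python
from datetime import date, timedelta

# Assumed cutoff: a "high priority" bug untouched for 90 days is suspect.
STALE_AFTER = timedelta(days=90)

def stale_high_priority(bugs, today):
    """Return bugs still marked high priority but older than the cutoff."""
    return [b for b in bugs
            if b["priority"] == "high" and today - b["opened"] > STALE_AFTER]

bugs = [
    {"id": 101, "priority": "high", "opened": date(2013, 1, 15)},
    {"id": 102, "priority": "high", "opened": date(2013, 12, 20)},
    {"id": 103, "priority": "low",  "opened": date(2013, 2, 1)},
]
for bug in stale_high_priority(bugs, today=date(2014, 1, 9)):
    print("re-triage or delete bug %d" % bug["id"])  # flags only bug 101
```

The script doesn't make the cold decision for you, but it shrinks the list you have to argue about.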
Bug data doesn’t just risk our decision-making process about what to handle next. It can also point us away from where we can really improve our process quality.

The Bug and Code Disconnect
I’ve managed bugs in different ways over the years. In all projects, they were never connected directly to the source code. This disconnect makes it hard to spot problems in specific parts of the code. The closest I got was the component level; I knew which component was more bug ridden than others. However, the code base was large and the information was not helpful in pinpointing problems. This was never a quantitative measure as bugs were usually tagged as belonging to components during analysis, but the real code changes were not logged. We could not rely on the tagging as a problem locator
Some application lifecycle management (ALM) tools do the connection: Once you have a work item for the bug, the code changes for the bug fix are kept under it. Yet, I found that extracting information from these tools is still hard and the information you get is partial.
Finding errors in the process around coding can save us loads of problems. We can avoid more bugs by diverting attention to the problem areas in coding, reviewing, and testing. I haven’t found a good tool for that yet, so I guess the solution is in the process; whatever tool you use, try to keep the bugs and related code connected and tagged correctly. If you can do that, you can do some very interesting analysis.
But that’s not all the data that gets lost.

The Lost Data
Here’s a shocker: all the bugs in our database were found during testing.
We officially call them bugs after we found them. But there are others that appear along the way that don’t get to that special occasion. These are the bugs that either the developer caught on the way, as part of coding, or the ones that were caught by the suite of automated tests.
“That’s what test suites are for, genius!”

Yes, they are. And still, these unmentioned bugs could contribute to the same analysis. These bugs have been a blind spot for me. As I test the whole application, I don’t see them, and I’m definitely not aware of what happened before the code entered source control.
Because this data is lost, we’re left with the bugs in the database.
To tell the truth, I’ve decidedly let this one go. Collecting all this information requires more attention, more data collection.
Instead, we discuss the big picture in a qualitative manner. Luckily, I work with a small team, and we do ongoing analysis of bugs as part of retrospectives. Although not accurate, these discussions help us to identify and handle the risky parts of the code.

More Data, Better Analysis

As a developer, I never thought about grouping bugs. When I found them in my code before anyone else, I didn’t even call them bugs. When I “grew up” and adopted a more encompassing point of view, I began to look at them differently.
Bug information doesn’t live in a vacuum. In agile we talk about context and how it’s part of information.
With a bug, we’re interested not just in the bug description, but also in where and when it was found, how and by whom the analysis was done, etc. We can then group bugs together to point us at quality problems.
Every once in a while, it helps to take a look at the big picture, not just look at the bugs individually. Bugs are usually symptoms of ineffective processes.

How Do I Start?
Start with simple questions about bugs like “Where do they come in droves?” and “Where do they rarely appear?” After doing so, make a decision about what to track and follow up.
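The "where do they come in droves?" question can be answered with a few lines of counting. The component names and bug records below are made up for illustration:

```python
from collections import Counter

# Count bugs by the component they were filed against (sample data).
bugs = [
    {"id": 1, "component": "checkout", "found_in": "testing"},
    {"id": 2, "component": "checkout", "found_in": "production"},
    {"id": 3, "component": "search",   "found_in": "testing"},
    {"id": 4, "component": "checkout", "found_in": "production"},
]
by_component = Counter(b["component"] for b in bugs)
print(by_component.most_common())  # [('checkout', 3), ('search', 1)]
```

The same one-liner works for any grouping field you track: who found the bug, which release, which test type.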
Then continue to ask questions. It’s not like these bugs are going away, are they?

Source: StickyMinds