Code Analysis for QTP Code

I’ve been interested in bringing some form of static code and language analysis to test automation for quite some time, but I had never found a suitable tool (vendor or open source) for HP QTP, which uses VBScript as its development language (for better or for worse).

Recently I found a tool (Test Design Studio) which supports this out of the box. Much as in the development community, this allows automation teams to identify and eliminate technical debt (e.g. cyclomatic complexity), while at the same time supporting targeted code reviews and providing training opportunities, with the ultimate goal of producing stable and efficient automation code. This also has the benefit of driving consistency of implementation and promotes mobility for team members. I would also think that having this process in place would enable governance around code quality when outsourcing or partnering with an external testing vendor.

Some examples of the built-in rules for language analysis (a toy sketch of this kind of check follows the list):

  1. Promote use of the ‘Option Explicit’ statement to help enforce language rules
  2. Cannot make duplicate declarations of variables in the same scope
  3. Must use the various Exit statements in the proper context
  4. ‘Select Case’ constructs must have at least one ‘Case’ statement
  5. Use of ReportEvent for verification purposes is not allowed within the test
  6. The timeout for the ‘Exist’ method must be explicitly defined and should not exceed 10 seconds
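
To make this concrete, here is a minimal, purely illustrative Python sketch of the kind of check such rules imply, scanning VBScript files for a missing ‘Option Explicit’ and for ‘Exist’ calls without an explicit timeout or with one above 10 seconds. This has nothing to do with how Test Design Studio actually implements its analysis; the regex and file layout are assumptions.

```python
# Toy rule check over QTP VBScript assets; illustrative only, not the vendor tool.
import re
import sys
from pathlib import Path

EXIST_CALL = re.compile(r"\.Exist\s*\(\s*(\d*)\s*\)", re.IGNORECASE)

def check_vbs(path: Path) -> list:
    """Return findings for two of the rules listed above."""
    findings = []
    text = path.read_text(errors="ignore")
    if "option explicit" not in text.lower():          # rule 1
        findings.append(f"{path}: missing 'Option Explicit'")
    for match in EXIST_CALL.finditer(text):            # rule 6
        timeout = match.group(1)
        if not timeout:
            findings.append(f"{path}: 'Exist' called without an explicit timeout")
        elif int(timeout) > 10:
            findings.append(f"{path}: 'Exist' timeout of {timeout}s exceeds 10 seconds")
    return findings

if __name__ == "__main__":
    for vbs_file in Path(sys.argv[1]).rglob("*.vbs"):
        for finding in check_vbs(vbs_file):
            print(finding)
```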

The tool is extensible enough that new rules can be added, and the vendor is very responsive to requests. Maybe they will add a framework for writing custom rules?

A sample deployment would involve buying a single license and running the tool across all of your ALM QTP assets on a weekly basis. Keeping track of the history will also allow you to identify trends and/or heatmaps; a rough sketch of what that tracking might look like follows.
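
As a hedged sketch of that history tracking, a weekly job could append a dated violation count to a CSV that later feeds a trend chart or heatmap; the rule being counted, the file names, and the directory layout below are all assumptions.

```python
# Hypothetical weekly scan: append today's date and a crude violation count
# to a history CSV so trends can be charted over time.
import csv
import datetime
from pathlib import Path

def record_weekly_scan(asset_root: str, history_file: str = "scan_history.csv") -> None:
    violations = 0
    for vbs_file in Path(asset_root).rglob("*.vbs"):
        text = vbs_file.read_text(errors="ignore").lower()
        if "option explicit" not in text:      # counting one illustrative rule only
            violations += 1
    with open(history_file, "a", newline="") as handle:
        csv.writer(handle).writerow([datetime.date.today().isoformat(), violations])

record_weekly_scan("exported_qtp_assets")       # hypothetical export directory
```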

I have not seen or heard whether HP is considering this for the QTP or UFT roadmap.

Link to vendor


The joy of JMeter Part 1

JMeter is an open source testing tool that I have used recently, and I felt like putting together some thoughts on its use and some general performance testing guidelines based on what I’ve seen to be successful. I’m going to avoid the open source versus vendor tool discussion, having successfully used HP, Borland, and Parasoft tools, and most recently JMeter. Every organization and situation is different, but the tool choice needs to take into account factors such as the SDLC, the release management process, the technology stack, whether the product is B2B/B2C or an internal app, team dynamics, etc. For example, consider testing a mobile app as opposed to an internal trading system.

My experience lends itself to testing prior to production release, but a growing trend involves running performance testing in the production environment, leveraging a set of cloud-based assets. One company I like and have talked to is SOASTA; they are a great group of people and have developed a very impressive testing solution, both from a GUI design and a feature set perspective. This is a powerful and useful technique, and hopefully I can get some exposure to it soon…

First off you need to plan, plan, plan. Really, this is the most important aspect of any testing effort (and of anything really worthwhile doing in life too…). If you hear “well, let’s test until it breaks, that’s easy isn’t it?” I advise running for the hills! If that isn’t possible, then getting everyone together and deciding on what the objectives are will really pay off when you are in the throes of testing the release.

Here are what I see as some high-level discussion points for upcoming posts on this thread:

  • Performance Objectives
  • Test scenario and data design
  • Monitoring
  • Functional test while you go!
  • Know thy (technology) stack!
  • JMeter + plugins
  • JMeter sampler development
  • Test environment
  • Outsourcing your test lab
  • Running the tests
  • Compiling the run data
  • Reporting
  • Injecting performance testing into your SDLC

Part 2 to follow on setting the performance objectives, which will drive the entire effort.

quality signals…

an interesting post on testing 2.0 from the folks over at Google: using signals from various sources to predict code quality. I’ve been thinking about how to create a picture of composite quality from sources like Sonar (static analysis) and testing (defect density, defects before and after release).
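
To make the idea of a composite picture slightly more concrete, here is a toy sketch of blending a few such signals into a single score; the signal choices, normalization caps, and weights are entirely made up and not from the Google post.

```python
# Toy composite quality score; signals, caps, and weights are hypothetical.
def composite_quality(static_violations_per_kloc: float,
                      pre_release_defect_density: float,
                      post_release_defects: int) -> float:
    """Blend normalized signals into a 0-100 score (higher is better)."""
    static_score = max(0.0, 1.0 - static_violations_per_kloc / 50.0)
    pre_score = max(0.0, 1.0 - pre_release_defect_density / 10.0)
    post_score = max(0.0, 1.0 - post_release_defects / 20.0)
    # Escaped (post-release) defects weighted most heavily.
    return 100.0 * (0.3 * static_score + 0.3 * pre_score + 0.4 * post_score)

print(round(composite_quality(12.0, 4.0, 3), 1))
```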

anyway, link to the Google testing blog below

http://googletesting.blogspot.com/2012/08/testing-20.html

manual and automated testing – bringing it all together

I’ve heard the following on occasion: “Can’t you just start automating?” Well, I could, but I won’t.

What I want to do in this post is describe the symbiotic nature of both practices; yes, manual testing and automation, although different skill sets, are very much related, and each needs the other to be successful.

Let’s look at what manual testing is. First off, the term itself is terrible, as it implies a rote, tedious, and laborious activity (if you have a better term, let me know!). This is far from reality, as the practice requires:

  1. Subject matter expert level of knowledge about the business domain, application(s), data, architecture and of course the systems being tested
  2. Analytical skills to take a disparate set of documentation (use cases, tech specifications, etc.) and turn it into detailed and repeatable tests
  3. Discipline to document tests and keep them maintained release after release
  4. Methodical approach to running tests manually (oh, that’s where the name comes from!) and capturing results, with the objective of creating useful and actionable bug reports

Why does manual testing need automation? Executing thousands of tests manually would take far too long and be too error-prone. The trick is to strike the correct balance between manual and automated test execution, leveraging the best of both worlds to deliver test turnaround in a fast and efficient manner.

OK, let’s look at automation. Go ahead, call an automation engineer a ‘scripter’ and watch the eyes roll. Creating automation is a development exercise, and being successful requires expertise and knowledge on a par with a developer-developer (if you know what I mean). In fact, I think that is why automated testing has failed so many times and has something of a bad reputation: it does not get treated with the same care and attention as software development, allied with vendors pushing the “record and playback so you will never need to code” syndrome.

Why does automation need manual testing? The manual tests provide the specification for the automation developer to follow when performing the initial implementation and subsequent maintenance of the automated test. Starting to automate without (well-exercised) manual tests in place is not a good practice and leads to eventual abandonment. I look at automated tests as the tip of the iceberg, resting on all of the preceding steps (requirements analysis, Q&A sessions, hard-won business knowledge, test case creation and maintenance, etc.) that the tester performs.

So please keep this in mind when a tester is asking for clarifications or more information!

Breaking down an automated test

In the context of GUI functional automation, here is what I think an automated test is made up of:

  1. Application of test data
  2. GUI recognition
  3. Driving code
  4. Assertions
  5. Error and recovery
  6. Reporting

These concepts are applicable to other types of test automation too, e.g. unit testing.

Other automation features:

  1. Setup and teardown activities
  2. Function libraries

Test Data

Data-driven testing has been around a long time and is a valuable technique for iterating over a large number of data permutations. For example, placing orders in an order capture application to cover various asset types and order attributes like buy/sell, settlement date, price, etc. I’ve seen data stored in many ways, e.g. xls, csv, xml, and also in databases. I would usually go for a well-structured and labeled spreadsheet, as this allows other people to easily create and maintain the data over time. For example, a business analyst could partner with the automation engineer to own the data input part of the automation.
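
As a small, tool-agnostic sketch of the idea, the loop below iterates over a labeled data file and feeds each row to the driving code; the orders.csv column names and the place_order helper are hypothetical.

```python
# Data-driven sketch: one labeled row per order permutation; hypothetical columns.
import csv

def place_order(side: str, asset_type: str, settlement_date: str, price: str) -> None:
    # Placeholder for the driving code that would actually enter the order in the GUI.
    print(f"Placing {side} {asset_type} order @ {price}, settling {settlement_date}")

with open("orders.csv", newline="") as handle:
    for row in csv.DictReader(handle):   # labeled columns keep the file readable for a BA
        place_order(row["side"], row["asset_type"], row["settlement_date"], row["price"])
```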

GUI Recognition

The automation tool needs to know how to interact with the application under test, and of course each automation tool implements this differently, whether it comes from a vendor or open source. What I look for in a tool is whether it has a centralized object repository (for maintainability), allows you to identify objects (think object spy), supports a defined process around maintenance (GUIs change a lot!), and supports baselining against GUI versions (argh, we need to test the rolled-back version!).
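
Every tool does this differently, but as a stripped-down sketch of the centralized repository idea, logical names can be mapped to locators in one versioned place so GUI changes are fixed once; Selenium is used here purely for illustration, and the names and locators are assumptions.

```python
# Sketch of a centralized object repository: logical names -> locators.
# Selenium locators are used for illustration; entries are hypothetical.
from selenium.webdriver.common.by import By

OBJECT_REPOSITORY = {
    "login.username": (By.ID, "username"),
    "login.password": (By.ID, "password"),
    "login.submit":   (By.CSS_SELECTOR, "button[type=submit]"),
}

def find(driver, logical_name: str):
    """Resolve a logical name to a live element via the central repository."""
    by, value = OBJECT_REPOSITORY[logical_name]
    return driver.find_element(by, value)
```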

Driving Code

This is where the main part of the coding for the test occurs. For a GUI test this involves writing code to manipulate the GUI in order to perform actions, apply data, make assertions, and report results.
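
Continuing the Selenium-flavored sketch from the previous section (the find helper and the URL are assumptions), driving code for a simple login workflow might look like this:

```python
# Driving-code sketch: manipulate the GUI via the repository lookup above.
from selenium import webdriver

def login(driver, username: str, password: str) -> None:
    driver.get("https://example.test/login")            # hypothetical URL
    find(driver, "login.username").send_keys(username)  # 'find' from the repository sketch
    find(driver, "login.password").send_keys(password)
    find(driver, "login.submit").click()

if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        login(driver, "test_user", "not_a_real_password")
    finally:
        driver.quit()
```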

Assertions

A term borrowed from unit testing, but one I like, and it is the whole point of a “test”. GUI automation falls under the banner of black-box testing, which tests system behavior by applying defined inputs and comparing outputs to expected values. Assertions include comparing data values, data types, GUI properties, etc. How assertions record failures should be considered too; for example, be specific and flag a failure as a failed assertion rather than as a test code issue.
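
A minimal sketch of an assertion helper that makes that distinction explicit (the exception type and message format are assumptions):

```python
# Assertion sketch: a failed comparison is flagged as a failed assertion,
# distinct from an unexpected test-code error.
class AssertionFailure(Exception):
    """Raised when an expected-versus-actual comparison fails."""

def assert_equals(expected, actual, description: str) -> None:
    if expected != actual:
        raise AssertionFailure(
            f"FAILED ASSERTION: {description}: expected {expected!r}, got {actual!r}"
        )

# Usage, e.g. comparing an on-screen status to the expected value:
# assert_equals("Order accepted", status_label_text, "order status message")
```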

Other types of assertions/tests you get for free in GUI testing: if a GUI workflow has unexpectedly changed, the test will fail and you might have found a bug.

Error recovery

You have created a test run of 100 tests and have left for lunch; you come back and find the test run failed on the first test. Enjoy your lunch? Well, in order to relax and really enjoy lunch, make sure that your automation is able to move on to the next test in the sequence after suffering some type of failure, which can run the gamut from hung browsers to unresponsive Java apps to automation bugs (yes, we can have bugs in our code too!).
We refer to this as recovery; with careful attention, automation gets more robust the more it gets run. And remember not to hide or swallow any failures that could be bugs.
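
A sketch of that idea: each test runs in its own try/except, the failure is kept (never swallowed), a recovery hook restores a known state, and the run moves on. The test list and the recover() hook are hypothetical.

```python
# Recovery-aware runner sketch: one failure does not stop the whole run.
import traceback

def recover() -> None:
    """Placeholder: e.g. kill a hung browser and relaunch the application."""

def run_suite(tests) -> list:
    results = []
    for test in tests:
        try:
            test()
            results.append((test.__name__, "PASS", ""))
        except Exception:                    # record the failure, never swallow it
            results.append((test.__name__, "FAIL", traceback.format_exc()))
            recover()                        # get back to a known state for the next test
    return results
```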

Reporting

Everyone is interested in seeing the failed tests, as they are an indicator of the application not operating as expected. Automation reporting should highlight failed tests, provide supporting evidence to give to the developer (because the test must be wrong, not my code 😉), such as screenshots of error messages, videos, or stack traces, and allow drill-down into the test step hierarchy.
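
Building on the runner sketch above, reporting could be as simple as listing failures first with their evidence attached; the evidence directory and file naming are assumptions.

```python
# Reporting sketch over the (name, status, detail) tuples from the runner above.
def write_report(results, evidence_dir: str = "evidence") -> None:
    failures = [r for r in results if r[1] == "FAIL"]
    print(f"{len(failures)} of {len(results)} tests failed")
    for name, _, detail in failures:
        print(f"\nFAIL: {name}")
        print(detail)                                    # stack trace for the developer
        print(f"screenshot: {evidence_dir}/{name}.png")  # assumed evidence location
    for name, status, _ in results:
        if status == "PASS":
            print(f"PASS: {name}")
```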

Reusable Libraries

Reusing code is a fairly well-known (and basic) development best practice, aimed here at allowing less experienced testers to wire together automated tests without having to reinvent the wheel. The common libraries can be owned by a central automation team; it depends on your organization’s structure. Automation reusable code usually comes in two flavors: common and application-level; and of course everyone has different terms!

Common code captures code for accessing data sources, data transformations, date handling, data-driving support, etc. Application common code captures common workflows for an application or family of applications. Think of someone trying to develop a test from scratch and also having to create automation for a downstream application. Reusable code reduces the complexity of developing/programming and empowers a wider range of people to develop automated tests; I would caution, though, to also provide training and guidelines if you are trying this. And I would emphasize a good level of documentation: usage guidelines, cookbooks, template tests, etc.

A specialized form of reusable code is an application invoke or launch, which is a call to handle starting an application and can perform some basic checks and trigger recovery if necessary.
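
As a closing sketch covering both flavors plus the application invoke (reusing the hypothetical find and recover helpers from the earlier sketches; the date rule, workflow, and URL are assumptions):

```python
import datetime

# --- Common library: generic helpers shared across applications ---
def settlement_date(trade_date: datetime.date, days: int = 2) -> datetime.date:
    """Naive T+2 helper; real code would skip weekends and holidays."""
    return trade_date + datetime.timedelta(days=days)

# --- Application library: shared workflows for one application family ---
def submit_order(driver, order: dict) -> None:
    """Order-entry workflow reused by many tests ('find' from the repository sketch)."""
    find(driver, "order.symbol").send_keys(order["symbol"])
    find(driver, "order.side").send_keys(order["side"])
    find(driver, "order.submit").click()

# --- Application invoke/launch with a basic sanity check ---
def launch_application(driver, url: str = "https://example.test/orders") -> None:
    driver.get(url)
    if "Order Capture" not in driver.title:  # crude health check on launch
        recover()                            # trigger recovery if the app looks wrong
```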

That’s it for a quick overview of automation. I wanted to end with two thoughts on where I have found automation to be successful:

  • Use automation in the context of a defined set of manual tests as they serve to set the goals, data, type of assertions etc for the automation. Plus, you can hand them over to someone else to implement!
  • Strive to design automated tests to be independent from each other; of course there are situations where tests need to be stood up in a certain state. This is particularly helpful when you are running tests concurrently on a large number of machines.

Cheers!