
ALM Jenkins Plugin

HP have released a Jenkins plugin for ALM, UFT, QTP and ST versions 11 and up.

This is a good move on HP's part – allowing automation to be run as part of a CI (or even a CD) process. I do think the payoff will be greater for web/service testing than for GUI testing, since GUIs do not typically evolve through non-breaking changes the way, for example, web services do.

This goes some way toward increasing HP's value proposition in the agile development process. But I do have to wonder: if you are testing REST or web services in conjunction with a web GUI, would you even be looking at ALM, UFT or QTP as a solution?

 

Code Analysis for QTP Code

I’ve been interested in trying to bring some form of static code and language analysis to test automation for quite some time, and have never found a suitable tool (vendor or open source) for HP QTP, which uses VBScript as its development language (for better or for worse).

Recently I found a tool (Test Design Studio) which supports this out of the box. Much like in the development community, this allows automation teams to identify and eliminate technical debt (e.g. cyclomatic complexity), while at the same time supporting targeted code reviews and providing training opportunities, with the ultimate goal of producing stable and efficient automation code. It also has the benefit of driving consistency of implementation and promotes mobility for team members. I would also think that having this process in place would enable governance around code quality when outsourcing or partnering with an external testing vendor.

Some examples of the built-in rules for language analysis (a short snippet illustrating a few of them follows the list):

  1. Promote use of the ‘Option Explicit’ statement to help enforce language rules
  2. Cannot make duplicate declarations of variables in the same scope
  3. Must use the various Exit statements in the proper context
  4. ‘Select Case’ constructs must have at least one ‘Case’ statement
  5. Use of ReportEvent for verification purposes is not allowed within the test
  6. The timeout for the ‘Exist’ method must be explicitly defined and should not exceed 10 seconds
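
To make a few of these concrete, here is a minimal QTP-style VBScript sketch that would satisfy rules 1, 2, 5 and 6; the Browser/Page names are invented for illustration.

  ' Rule 1: Option Explicit so undeclared variables are caught early
  Option Explicit

  Dim orderId   ' declared once per scope (rule 2)

  ' Rule 6: pass an explicit timeout (in seconds) to Exist instead of relying
  ' on the global synchronization timeout, and keep it to 10 seconds or less
  If Browser("OrderEntry").Page("Login").Exist(10) Then
    orderId = "12345"
  Else
    ' Rule 5: don't sprinkle ReportEvent verification calls through the test
    ' body; fail fast and let the framework record the result
    ExitTest
  End If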

The tool is extensible enough that new rules can be added, and the vendor is very responsive on this front. Maybe they will add a framework for writing your own?

A sample deployment would involve buying one license and running the tool across all of your ALM QTP assets on a weekly basis. Keeping a history of the results will also allow you to identify trends and build heatmaps.

I have not seen or heard whether HP is considering anything like this for the QTP or UFT roadmap.

Link to vendor

Manual and automated testing – bringing it all together

I’ve heard the following on occasion: “Can’t you just start automating?” Well, I could, but I won’t.

What I want to do in this post is describe the symbiotic nature of the two practices; yes, manual testing and automation involve different skillsets, but they are very much related and each needs the other to be successful.

Let’s look at what manual testing is. First off, the term itself is terrible, as it implies a rote, tedious and laborious activity (if you have a better term, let me know!). This is far from reality, as the practice requires:

  1. Subject matter expert level of knowledge about the business domain, application(s), data, architecture and of course the systems being tested
  2. Analytical skills to take a disparate set of documentation (use cases, tech specifications, etc.) and turn it into detailed and repeatable tests
  3. Discipline to document tests and keep them maintained release after release
  4. Methodical approach to running tests manually (oh, that’s where the name comes from!) and capturing results, with the objective of creating useful and actionable bug reports

Why does manual testing need automation? Executing thousands of tests manually would take far too long and be too error prone. The trick is to strike the correct balance between manual and automated test execution, leveraging the best of both worlds to deliver fast and efficient test turnaround.

OK, let’s look at automation. Go ahead, call an automation engineer a ‘scripter’ and watch the eyes roll. Creating automation is a development exercise, and to be successful it requires expertise and knowledge on par with a developer-developer (if you know what I mean). In fact, I think that is why automated testing has failed so many times and has something of a bad rep: it does not get treated with the same care and attention as software development. That, allied with vendors pushing the “record & playback so you will never need to code” syndrome.

Why does automation need manual testing? The manual tests provide the specification for the automation developer to follow when performing the initial implementation and subsequent maintenance of the automated test. Starting to automate without (well exercised) manual tests in place is not a good practice and leads to eventual abandonment. I look at the automated test as the tip of the iceberg, with all of the preceding steps the tester performs (requirements analysis, Q&A sessions, hard-won business knowledge, test case creation and maintenance, etc.) sitting underneath it.

So please keep this in mind when a tester is asking for clarifications or more information!

Breaking down an automated test

In the context of GUI functional automation, here is what I think an automated test is made up of:

  1. Application of test data
  2. GUI recognition
  3. Driving code
  4. Assertions
  5. Error and recovery
  6. Reporting

These concepts are applicable to other types of test automation too, e.g. unit testing.

Other automation features:

  1. Setup and teardown activities
  2. Function libraries

Test Data

Data-driven testing has been around a long time and is a valuable technique for iterating over a large number of data permutations. For example, placing orders in an order capture application to cover various asset types and order attributes like buy/sell, settlement date, price, etc. I’ve seen data stored in many ways, e.g. Excel, CSV, XML and also in databases. I would usually go for a well-structured and labeled spreadsheet, as this allows other people to easily create and maintain the data over time. For example, a business analyst could partner with the automation engineer to own the data input part of the automation.
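
As a hedged sketch of what that can look like in QTP-style VBScript (the file path, sheet name, column names and the PlaceOrder helper are all hypothetical), the test imports the spreadsheet into a data sheet and iterates over it:

  ' Pull the spreadsheet into a local data sheet named "Orders" (hypothetical)
  DataTable.AddSheet "Orders"
  DataTable.ImportSheet "C:\TestData\orders.xls", "Sheet1", "Orders"

  Dim row, rowCount
  rowCount = DataTable.GetSheet("Orders").GetRowCount

  For row = 1 To rowCount
    DataTable.GetSheet("Orders").SetCurrentRow row
    ' PlaceOrder is a hypothetical driving-code routine (see Driving Code below)
    PlaceOrder DataTable.Value("Side", "Orders"), _
               DataTable.Value("AssetType", "Orders"), _
               DataTable.Value("Quantity", "Orders"), _
               DataTable.Value("SettDate", "Orders")
  Next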

GUI Recognition

The automation tool needs to know how to interact with the application under test, and of course each automation tool implements this differently, whether it comes from a vendor or open source. What I look for in a tool: a centralized repository (for maintenance), the ability to identify objects (think object spy), a defined process around maintenance (GUIs change a lot!), and support for baselining against GUI versions (argh, we need to test the rolled-back version!).
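
In QTP terms, a test can reference objects either by their logical names in the shared object repository or by describing their properties inline (descriptive programming); a small sketch, with all the object names invented for illustration:

  ' Repository-based: "OrderEntry", "Orders" and "Quantity" are logical names
  ' maintained centrally in the shared object repository
  Browser("OrderEntry").Page("Orders").WebEdit("Quantity").Set "100"

  ' Descriptive programming: identify the same field by its properties inline;
  ' handy for one-off objects, but harder to maintain when the GUI changes
  Browser("OrderEntry").Page("Orders").WebEdit("name:=quantity").Set "100"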

Driving Code

This is where the main part of the coding for the test occurs. For a GUI test this involves writing code that drives the GUI to perform actions, applies data, makes assertions and reports results.
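
Continuing the hypothetical order entry example, here is a hedged sketch of a driving-code routine that applies the data, performs the GUI actions and hands the outcome to an assertion helper (AssertEqual is a hypothetical helper, sketched under Assertions below):

  Sub PlaceOrder(side, assetType, quantity, settDate)
    With Browser("OrderEntry").Page("Orders")
      .WebList("Side").Select side
      .WebList("AssetType").Select assetType
      .WebEdit("Quantity").Set quantity
      .WebEdit("SettDate").Set settDate
      .WebButton("Submit").Click
    End With

    ' Verify the confirmation banner rather than assuming the click worked
    AssertEqual "Order accepted", _
      Browser("OrderEntry").Page("Orders").WebElement("Status").GetROProperty("innertext"), _
      "Order confirmation message"
  End Sub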

Assertions

A term borrowed from unit testing, but one I like, and it is the whole point of a “test”. GUI automation falls under the banner of blackbox testing, which tests system behavior by applying defined inputs and comparing outputs to expected values. Assertions include comparing data values, data types, GUI properties, etc. How assertions record a failure should be considered too; the failure should be flagged specifically as a failed assertion and not, for example, as a test code issue.
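
A minimal sketch of the AssertEqual helper used above, in QTP-style VBScript; the key point is that a mismatch is recorded as a failed verification, distinct from a scripting error:

  Sub AssertEqual(expected, actual, description)
    If expected = actual Then
      Reporter.ReportEvent micPass, description, "Value: " & actual
    Else
      Reporter.ReportEvent micFail, description, _
        "Expected: " & expected & " | Actual: " & actual
    End If
  End Sub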

Some assertions/tests you get for free in GUI testing: if a GUI workflow has unexpectedly changed, the test will fail and you might have found a bug.

Error recovery

You have created a test run of 100 tests and have left for lunch; you come back and find the run failed on the first test. Enjoy your lunch? Well, in order to relax and really enjoy lunch, make sure that your automation is able to move on to the next test in the sequence after suffering some type of failure, and failures run the gamut from hung browsers to unresponsive Java apps to automation bugs (yes, we can have bugs in our code too!).
We refer to this as recovery; with careful attention, automation gets more robust the more it gets run. And remember not to hide or swallow any failures, which could be bugs.
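
QTP has its own recovery scenario mechanism, but even a simple guard around each iteration keeps one failure from sinking the whole run; a hedged sketch in which RunSingleTest and RecoverApplication are hypothetical helpers:

  Dim row
  For row = 1 To DataTable.GetSheet("Orders").GetRowCount
    On Error Resume Next
    DataTable.GetSheet("Orders").SetCurrentRow row
    RunSingleTest               ' hypothetical wrapper around one test case

    If Err.Number <> 0 Then
      ' Log the problem, clear it and bring the app back to a known state
      Reporter.ReportEvent micWarning, "Test aborted", Err.Description
      Err.Clear
      RecoverApplication        ' hypothetical: close and relaunch the app under test
    End If
    On Error GoTo 0
  Next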

Reporting

Everyone is interested in seeing the failed tests, as they are an indicator of the application not operating as expected. Automation reporting should highlight failed tests, provide supporting evidence to give to the developer (because the test must be wrong, not my code 😉) such as screenshots of error messages, videos and stack traces, and allow drill-down into the test step hierarchy.
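
A hedged sketch of capturing that evidence at the point of failure in QTP-style VBScript; the path and step names are made up:

  ' On a failed check, grab a desktop screenshot and point to it from the report
  Dim shotPath
  shotPath = "C:\TestEvidence\order_submit_failure.png"
  Desktop.CaptureBitmap shotPath, True    ' True overwrites an existing file

  Reporter.ReportEvent micFail, "Submit order", _
    "Confirmation message not shown. Screenshot: " & shotPath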

Reusable Libraries

Reusing code is a fairly well-known (and basic) development best practice, aimed here at allowing less experienced testers to wire together automated tests without having to re-invent the wheel. The common libraries can be owned by a central automation team, depending on your organization's structure. Reusable automation code usually comes in two flavors: common and application level; and of course everyone has different terms!

Common code covers things like accessing data sources, data transformations, date handling, data-driving support, etc. Application-level common code captures common workflows for an application or family of applications; think of someone trying to develop a test from scratch who also has to create automation for a downstream application. Reusable code reduces the complexity of developing/programming and empowers a wider range of people to develop automated tests. I would caution, though, to also provide training and guidelines if you are trying this, and I would emphasize a good level of documentation: usage guidelines, cookbooks, template tests, etc.

A specialized form of reusable code is an application invoke or launch, which is a call to handle starting an application and can perform some basic checks and trigger recovery if necessary.
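
As a hedged sketch, such a launch helper might look like this in QTP-style VBScript (the URL and object names are hypothetical):

  Sub LaunchOrderEntry()
    ' Start the browser pointed at the application under test
    SystemUtil.Run "iexplore.exe", "http://orderentry.example.com"

    ' Basic check: the login page must appear within 30 seconds
    If Not Browser("OrderEntry").Page("Login").Exist(30) Then
      Reporter.ReportEvent micFail, "Launch OrderEntry", "Login page did not appear"
      ExitTest    ' or hand off to a recovery scenario here
    End If
  End Sub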

That’s it for a quick overview of automation. I want to end with two thoughts on where I have found automation to be successful:

  • Use automation in the context of a defined set of manual tests, as they serve to set the goals, data, types of assertions, etc. for the automation. Plus, you can hand them over to someone else to implement!
  • Strive to design automated tests to be independent of each other; of course there are situations where tests need to be stood up in a certain state. Independence is particularly helpful when you are running tests concurrently on a large number of machines.

Cheers!