Category Archives: Automation

Tracking test results from TestNG

The development team wants fast feedback on the latest release and you have a large suite of (TestNG) integration tests to run, analyze and report back on. Running the tests is usually the easy part, but without an automated way to diff results against previous runs you may miss a new bug or a regression. To reduce the developer feedback time I searched for an open-source tool that would:

  1. Act as a centralized place to persist, view and analyze TestNG result history
  2. Integrate smoothly with TestNG, i.e. be implemented as a listener
  3. Support test runs from the Gradle command line
  4. Quickly identify changes in test results from a previous test run, release, etc.
  5. Track a single test over a number of builds

I found such a tool called Cuanto.  The tool hasn't been updated in a while, but it works and the source is available on GitHub. The setup was straightforward: a Win2008 server, MySQL and Jetty.  One minor hiccup is that you cannot use the latest version of Java with Jetty; I used 1.6_45. Setup:

  1. Install MySQL and create a database called ‘cuanto’
  2. Install jetty (uncompress it)
  3. Uncompress the cuanto-2.8.0.war into /jetty/webapps/cuanto
  4. Edit cuanto-db.groovy as per the INSTALL doc and drop it into  /jetty/webapps/cuanto/WEB-INF/classes
  5. Download mysql java driver (mysql-connector-java-5.1.28-bin) and drop into /jetty/webapps/cuanto/WEB-INF/lib
  6. Start the server: from /jetty run java -jar start.jar
    1. The first time the server is launched it will create the tables in the cuanto database
  7. Test with http://<servername>:8080/cuanto
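The steps above can be sketched as shell commands (shown Unix-style; on Windows the equivalent unzip/copy steps apply — paths and versions are from my setup, adjust to yours):

```shell
# 1. create the database (assumes MySQL is already installed)
mysql -u root -p -e "CREATE DATABASE cuanto;"

# 2-3. explode the war into Jetty's webapps directory
mkdir -p /jetty/webapps/cuanto
unzip cuanto-2.8.0.war -d /jetty/webapps/cuanto

# 4-5. drop in the edited config and the MySQL driver
cp cuanto-db.groovy /jetty/webapps/cuanto/WEB-INF/classes/
cp mysql-connector-java-5.1.28-bin.jar /jetty/webapps/cuanto/WEB-INF/lib/

# 6. start Jetty (first run creates the tables in the cuanto database)
cd /jetty && java -jar start.jar
```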

So, the tool is set up and we move on to integrating it with the Gradle build (build.gradle).  Cuanto comes with a REST API that the listener jar uses to pass results to the Jetty server. It reminds me of a similar approach I took to persist HP QTP (now called Unified Functional Testing) test results to a Sybase database back in my BlackRock days.  Happy to have moved on from both ;-). The Gradle code snippet is below:

task cuantotest (type: Test){
 systemProperty "cuanto.url", "http://SERVERIPHERE:8080/cuanto"
 systemProperty "cuanto.projectkey", "key"
 systemProperty "cuanto.testrun.create", "true"

 ignoreFailures = true
 useTestNG() {
   useDefaultListeners = false
   includeGroups 'test1'
   listeners << 'cuanto.adapter.listener.testng.TestNgListener'
 }
}

repositories {
  flatDir {
    dirs 'e:/tools/cuanto-2.8.0/adapter'
  }
}
That's it. Run your task (gradle cuantotest) and view your results on the Cuanto site.  I used the option to create the test run ID automatically.

As this was a trial I referenced the Cuanto jar files locally, hence the Gradle flatDir repository call; unfortunately the latest jars are not in Maven Central. You can always add them to your own organization's repository. A slightly better way to implement this is to use Gradle's files dependencies:

dependencies {
  runtime files('e:/tools/cuanto-2.8.0/adapter/cuanto-api-2.8.0.jar', 'e:/tools/cuanto-2.8.0/adapter/cuanto-adapter-2.8.0.jar')
  //added for cuanto
  runtime 'commons-httpclient:commons-httpclient:3.1'
  runtime 'net.sf.json-lib:json-lib:2.4:jdk15'
  compile 'org.testng:testng:6.8.7'
}

A last thought: think about how you want to structure your test results within Cuanto from a group, project and test run perspective.

Stuff you need (all latest versions apart from Java):

  1. TestNG
  2. Gradle
  3. Cuanto
  4. Webserver (Jetty)
  5. Database (MySQL)
  6. Java

TestNG – How to create a single report for tests in gradle

If you are using TestNG in a Gradle multi-project (sub-project) scenario and want to collect all your TestNG HTML reports into one folder, this is how I did it with the latest versions of TestNG and Gradle (1.8):

apply plugin: 'java'

//override the java plugin test task
test {
  //runs tests across all sub-projects. good for CI build process.
  ignoreFailures = true
  //TestNG support in gradle java plugin
  useTestNG()
}

//new task, outside of the subprojects block, which collects all sub-project test reports into one top-level directory
task testReport(type: TestReport) {
  destinationDir = file("reports/test")
  reportOn subprojects*.test
}

ALM Jenkins Plugin

HP have released a Jenkins plugin for ALM, UFT, QTP and ST versions 11 and up.

This is a good move on HP's part – allowing automation to be run as part of a CI (or even a CD) process. I do think the outcome would be better for web/service testing than for GUI testing, since GUIs do not typically support non-breaking changes the way, e.g., web services do.

This does go some way to increasing the value proposition for HP in the agile development process. But I do have to wonder: if you are testing REST or web services in conjunction with a web GUI, would you even be looking at ALM, UFT or QTP as a solution?


Code Analysis for QTP Code

I've been interested in bringing some form of static code and language analysis to test automation for quite some time and have never found a suitable tool (vendor or open source) for HP QTP, which uses VBScript as its development language (for better or for worse).

Recently I found a tool (Test Design Studio) which supports this out of the box. Much like in the development community, this allows automation teams to identify and eliminate technical debt (e.g. cyclomatic complexity), while at the same time supporting targeted code reviews and providing training opportunities, with the ultimate goal of producing stable and efficient automation code.  This also has the benefit of driving consistency of implementation and promotes mobility for team members.  I would also think that having this process in place would enable governance around code quality when outsourcing or partnering with an external testing vendor.

Some examples of the built-in rules for language analysis:

  1. Promote use of the ‘Option Explicit’ statement to help enforce language rules
  2. Cannot make duplicate declarations of variables in the same scope
  3. Must use the various Exit statements in the proper context
  4. ‘Select Case’ constructs must have at least one ‘Case’ statement
  5. Use of ReportEvent for verification purposes is not allowed within the test
  6. The timeout for the ‘Exist’ method must be explicitly defined and should not exceed 10 seconds
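To make the idea concrete, here is a minimal sketch (in Python, and in no way a Test Design Studio feature — the rule names and patterns are my own) of two such rules applied to VBScript source: a missing 'Option Explicit' statement, and an 'Exist' call whose timeout is absent or exceeds 10 seconds.

```python
import re

def check_vbscript(source):
    """Run two toy lint rules over VBScript source; return (line, message) findings."""
    findings = []
    lines = source.splitlines()

    # Rule 1: the script should declare Option Explicit
    if not any(line.strip().lower().startswith("option explicit") for line in lines):
        findings.append((0, "Missing 'Option Explicit' statement"))

    # Rule 2: .Exist must have an explicit timeout of at most 10 seconds
    for num, line in enumerate(lines, start=1):
        for match in re.finditer(r"\.Exist\s*(\(\s*(\d+)\s*\))?", line):
            if match.group(1) is None:
                findings.append((num, "'Exist' called without an explicit timeout"))
            elif int(match.group(2)) > 10:
                findings.append((num, "'Exist' timeout exceeds 10 seconds"))
    return findings
```

Run weekly over your exported QTP assets, the list of findings per file is exactly the kind of history you could trend over time.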

The tool is extensible enough that new rules can be added, and the vendor is very responsive on this.  Maybe they will add a framework to do it?

A sample deployment would involve buying 1 license and running the tool across all your ALM qtp assets on a weekly basis.  Keeping track of the history will also allow you to identify trends and/or heatmaps.

I have not seen or heard whether HP are thinking about this for the QTP or UFT roadmap.

Link to vendor

Manual and automated testing – bringing it all together

I’ve heard the following on occasion “can’t you just start automating?” Well I could but I won’t.

What I want to do in this post is describe the symbiotic nature of both practices; yes, manual testing and automation, although different skillsets, are very much related and each needs the other to be successful.

Let's look at what manual testing is. First off, the term itself is terrible as it implies a rote, tedious and laborious activity (if you have a better term let me know!).  This is far from reality, as the practice requires:

  1. Subject-matter-expert level of knowledge about the business domain, application(s), data, architecture and of course the systems being tested
  2. Analytical skills to take a disparate set of documentation (use cases, tech specifications, etc.) and turn it into detailed and repeatable tests
  3. Discipline to document tests and keep them maintained release after release
  4. A methodical approach to running tests manually (oh, that's where the name comes from!) and capturing results, with the objective of creating useful and actionable bug reports

Why does manual testing need automation? Executing thousands of tests manually would take far too long and be too error prone.  The trick is to strike the correct balance between manual and automated test execution, leveraging the best of both worlds to deliver fast and efficient test turnaround times.

OK, let's look at automation. Go ahead, call an automation engineer a 'scripter' and watch the eyes roll. Creating automation is a development exercise, and to be successful it requires expertise and knowledge on par with a developer-developer (if you know what I mean).  In fact, I think that is why automated testing has failed so many times and has somewhat of a bad rep: it does not get treated with the same care and attention as software development. Allied with vendors pushing the "record & playback so you will never need to code" line.

Why does automation need manual testing?  The manual tests provide the specification for the automation developer to follow when performing the initial implementation and subsequent maintenance of the automated test. Starting to automate without (well-exercised) manual tests in place is not a good practice and leads to eventual abandonment. I look at automated tests as the tip of the iceberg, resting on all of the preceding steps (requirements analysis, Q&A sessions, hard-won business knowledge, test case creation and maintenance, etc.) that the tester performs.

So please keep this in mind when a tester is asking for clarifications or more information!

Breaking down an automated test

In the context of GUI functional automation, here is what I think an automated test is made up of:

  1. Application of test data
  2. GUI recognition
  3. Driving code
  4. Assertions
  5. Error and recovery
  6. Reporting

These concepts are applicable to other types of test automation too, e.g. unit testing.

Other automation features:

  1. Setup and teardown activities
  2. Function libraries

Test Data

Data-driven testing has been around a long time and is a valuable technique to iterate over a large number of data permutations. For example, placing orders in an order capture application to cover various asset types and order attributes like buy/sell, settlement date, price, etc. I've seen data stored in many ways, e.g. xls, csv, xml and also in databases. I would usually go for a well-structured and labeled spreadsheet, as this allows other people to easily create and maintain the data over time. For example, a business analyst could partner with the automation engineer to own the data input part of the automation.
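As a sketch of the idea (the column names and the order-entry function are hypothetical; a real run would read the spreadsheet from disk), the data-driven loop looks something like this:

```python
import csv
import io

def run_data_driven(order_rows, place_order):
    """Apply each data row to the application and collect a per-row result."""
    results = []
    for row in order_rows:
        try:
            place_order(side=row["side"], asset=row["asset"],
                        price=float(row["price"]))
            results.append((row["asset"], "pass"))
        except Exception as exc:
            # one bad row should not stop the rest of the data set
            results.append((row["asset"], "fail: %s" % exc))
    return results

# inline CSV keeps the sketch self-contained; a BA would own this file
data = io.StringIO("side,asset,price\nbuy,IBM,120.5\nsell,MSFT,34.1\n")
rows = list(csv.DictReader(data))
```

The point is the separation: the loop and `place_order` belong to the automation engineer, the rows belong to whoever knows the business data best.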

GUI Recognition

The automation tool needs to know how to interact with the application under test, and of course each automation tool implements this differently, whether it comes from a vendor or open source. What I look for in a tool: a centralized repository (maintenance), the ability to identify objects (think object spy), a defined process around maintenance (GUIs change a lot!), and support for baselining against GUI versions (argh, we need to test the rolled-back version!).

Driving Code

This is where the main part of the coding occurs for the test. For a GUI test this involves writing code to manipulate the GUI to perform actions, apply data, apply assertions and reporting.
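A rough shape of such driving code (the page object and its methods are hypothetical; a real version would call the automation tool's own API underneath):

```python
class OrderTicketPage:
    """Hypothetical page object wrapping the tool-specific GUI calls."""
    def __init__(self, gui):
        self.gui = gui  # the automation tool's handle to the window

    def enter_order(self, side, asset, quantity):
        self.gui.set_field("side", side)
        self.gui.set_field("asset", asset)
        self.gui.set_field("quantity", str(quantity))

    def submit(self):
        self.gui.click("submit")
        return self.gui.read_field("status")

def drive_order_test(gui):
    """Driving code: manipulate the GUI, apply data, return what the test asserts on."""
    page = OrderTicketPage(gui)
    page.enter_order("buy", "IBM", 100)
    return page.submit()
```

Keeping the GUI manipulation behind a page object like this means the driving code survives most GUI changes with a fix in one place.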


Assertions

A term borrowed from unit testing, but one I like, and it is the whole point of a "test". GUI automation falls under the banner of blackbox testing, which tests system behavior by applying defined inputs and comparing outputs to expected values. Assertions include comparing data values, data types, GUI properties, etc. How assertions record a failure should be considered too: be specific and flag it as a failed assertion rather than, say, a test code issue.

There are other types of assertions/tests you get for free in GUI testing: if a GUI workflow has unexpectedly changed, the test will fail and you might have found a bug.
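One way to keep a failed assertion distinct from a test code issue (the names here are illustrative, not from any particular tool):

```python
class AssertionFailure(Exception):
    """A genuine test failure: actual output did not match expected."""

def assert_equals(expected, actual, label):
    """Raise AssertionFailure (not a generic error) on mismatch."""
    if expected != actual:
        raise AssertionFailure("%s: expected %r, got %r" % (label, expected, actual))

def classify(exc):
    """Reporting can then separate product failures from automation bugs."""
    return "failed assertion" if isinstance(exc, AssertionFailure) else "test code issue"
```

With a dedicated exception type, a TypeError thrown by buggy automation code lands in a different bucket of the report than a real product mismatch.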

Error recovery

You have kicked off a run of 100 tests and left for lunch; you come back and find the run failed on the first test. Enjoy your lunch? To relax and really enjoy it, make sure your automation is able to move on to the next test in the sequence after suffering some type of failure, which can run the gamut from hung browsers to unresponsive Java apps to automation bugs (yes, we can have bugs in our code too!).
We refer to this as recovery; with careful attention, automation gets more robust the more it gets run. And remember not to hide/swallow any failures, which could be bugs.
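A minimal sketch of that keep-going behaviour: each test runs in isolation, a recovery hook is invoked after a failure, and nothing is swallowed — every failure is recorded for the report.

```python
def run_suite(tests, recover):
    """Run every test even if earlier ones fail; call recover() after a failure."""
    results = []
    for name, test in tests:
        try:
            test()
            results.append((name, "pass", None))
        except Exception as exc:
            # record the failure (never swallow it), then restore a clean state
            results.append((name, "fail", str(exc)))
            recover()
    return results
```

The `recover` hook is where tool-specific cleanup lives: kill the hung browser, relaunch the app, reset test data.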


Reporting

Everyone is interested in seeing the failed tests, as they are an indicator of the application not operating as expected. Automation reporting should highlight failed tests, provide supporting evidence to give to the developer (because the test must be wrong, not my code 😉) like screenshots of error messages, videos and stack traces, and allow drill-down into the test step hierarchy.
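A toy summary along those lines: failed tests float to the top, each with a pointer to its evidence (the screenshot path is hypothetical).

```python
def summarize(results):
    """results: (name, status, evidence) tuples. Failures are listed first."""
    failed = [r for r in results if r[1] == "fail"]
    passed = [r for r in results if r[1] == "pass"]
    lines = ["%d failed, %d passed" % (len(failed), len(passed))]
    for name, _, evidence in failed:
        lines.append("FAIL %s (evidence: %s)" % (name, evidence))
    return "\n".join(lines)
```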

Reusable Libraries

Reusing code is a fairly well-known (and basic) development best practice, aimed here at letting less experienced testers wire together automated tests without re-inventing the wheel. The common libraries can be owned by a central automation team; it depends on your organization's structure. Automation reusable code usually comes in two flavors: common and application-level; and of course everyone has different terms!

Common code covers accessing data sources, data transformations, date handling, data-driving support, etc. Application common code captures common workflows for an application or family of applications. Think of someone trying to develop a test from scratch who also has to create automation for a downstream application. Reusable code reduces the complexity of developing/programming and empowers a wider range of people to develop automated tests; I would caution, though, to also provide training and guidelines if you are trying this.  And I would emphasize a good level of documentation: usage guidelines, cookbooks, template tests, etc.

A specialized form of reusable code is an application invoke or launch: a call that handles starting an application, performs some basic checks and triggers recovery if necessary.
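Sketching that launch helper (the start, readiness-check and recovery hooks are placeholders for tool-specific calls):

```python
def launch_application(start, is_ready, recover, attempts=2):
    """Start the app, verify basic health, and trigger recovery on failure."""
    for attempt in range(attempts):
        start()
        if is_ready():  # basic checks: main window up, login prompt visible, etc.
            return True
        recover()  # e.g. kill the hung process before retrying
    return False
```

Every test then calls one well-tested launcher instead of each scripting its own startup logic.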

That's it for a quick overview of automation. I wanted to end with two thoughts on where I have found automation to be successful:

  • Use automation in the context of a defined set of manual tests, as they serve to set the goals, data, types of assertions, etc. for the automation. Plus, you can hand them over to someone else to implement!
  • Strive to design automated tests to be independent of each other; of course there are situations where tests need a certain state stood up first. Independence is particularly helpful when you are running tests concurrently on a large number of machines.