Tracking test results from TestNG

The development team wants fast feedback on the latest release and you have a large suite of (TestNG) integration tests to run, analyze and report back on. Running the tests is usually the easy part, but without an automated way to diff results against previous runs you may miss a new bug or a regression. To reduce developer feedback time I searched for an (open source) tool that would allow:

  1. A centralized place to persist, view and analyze TestNG result history
  2. Smooth integration with TestNG, i.e. an implementation of http://testng.org/javadocs/org/testng/ITestListener.html
  3. Test runs from the Gradle command line
  4. Fast identification of changes in test results from the previous test run, release, etc.
  5. The ability to track a single test over a number of builds

I found such a tool called Cuanto. The tool hasn’t been updated in a while, but it works and the source is available on GitHub. The setup was straightforward: a Win2008 server, MySQL and Jetty. One minor hiccup is that you cannot use the latest version of Java with Jetty; I used 1.6.0_45. Setup:

  1. Install MySQL and create a database called ‘cuanto’
  2. Install Jetty (uncompress it)
  3. Uncompress the cuanto-2.8.0.war into /jetty/webapps/cuanto
  4. Edit cuanto-db.groovy as per the INSTALL doc and drop it into /jetty/webapps/cuanto/WEB-INF/classes (a sketch follows this list)
  5. Download the MySQL Java driver (mysql-connector-java-5.1.28-bin) and drop it into /jetty/webapps/cuanto/WEB-INF/lib
  6. Start the server from the Jetty directory: java -jar start.jar
    1. The first time the server is launched it will create the tables in the cuanto database
  7. Test with http://<servername>:8080/cuanto
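
For reference, here is a minimal sketch of what the edited cuanto-db.groovy might look like, assuming the Grails-style dataSource convention described in the INSTALL doc; the host, username and password are placeholders and the exact property names should be taken from the file shipped with Cuanto:

// Sketch only: points Cuanto at the MySQL database created in step 1.
// Host, username and password are placeholders.
dataSource {
  driverClassName = "com.mysql.jdbc.Driver"
  url = "jdbc:mysql://localhost:3306/cuanto"
  username = "cuanto"
  password = "changeme"
}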

So, the tool is set up and we move on to integrating it with the Gradle build.gradle. Cuanto comes with a REST API that the listener jar uses to pass results to the Jetty server. It reminds me of a similar approach I took to persist HP QTP (now called Unified Functional Testing, UFT) test results to a Sybase database back in my BlackRock days. Happy to have moved on from both ;-). The Gradle code snippet is below:

task cuantotest(type: Test) {
  // point the Cuanto listener at the server and project set up earlier
  systemProperty "cuanto.url", "http://SERVERIPHERE:8080/cuanto"
  systemProperty "cuanto.projectkey", "key"
  systemProperty "cuanto.testrun.create", "true"

  ignoreFailures = true

  useTestNG {
    useDefaultListeners = false
    includeGroups 'test1'
    // the Cuanto adapter listener pushes each result to the server over the REST API
    listeners << 'cuanto.adapter.listener.testng.TestNgListener'
  }
}

 repositories {
   flatDir {
     dirs 'e:/tools/cuanto-2.8.0/adapter'
   }
 }
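
For completeness, here is a sketch of how the adapter jars could then be declared against that flatDir repository; the module names are simply inferred from the jar file names, so treat them as placeholders and adjust them to match whatever is in your copy of the adapter folder:

 dependencies {
   // flatDir resolves by artifact name, so these must match the jar files on disk (placeholders)
   runtime name: 'cuanto-api-2.8.0'
   runtime name: 'cuanto-adapter-2.8.0'
 }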

That’s it. Run your task with gradle cuantotest and view your results on the Cuanto site. I used the option to create the test run id automatically.

As this was a trial I referenced the Cuanto jar files locally, hence the flatDir repository call above; unfortunately the latest jars are not in Maven Central. You can always add them to your own organization’s repository. A slightly better way to implement this is to use Gradle’s file dependencies:

dependencies {
  runtime files('e:/tools/cuanto-2.8.0/adapter/cuanto-api-2.8.0.jar', 'e:/tools/cuanto-2.8.0/adapter/cuanto-adapter-2.8.0.jar')
  //added for cuanto
  runtime 'commons-httpclient:commons-httpclient:3.1'
  runtime 'net.sf.json-lib:json-lib:2.4:jdk15'
  //testng
  compile 'org.testng:testng:6.8.7'
  ...
  ...
}

One last thought: think about how you want to structure your test results within Cuanto from a group, project and test run perspective.
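
For instance, a hypothetical split could give each logical suite its own Cuanto project key, driven from its own Gradle task, so each suite builds up its own run history; the task name, project key and TestNG group below are placeholders following the same pattern as the cuantotest task above:

 task cuantosmoketest(type: Test) {
   systemProperty "cuanto.url", "http://SERVERIPHERE:8080/cuanto"
   systemProperty "cuanto.projectkey", "MYAPP-SMOKE"   // placeholder project key
   systemProperty "cuanto.testrun.create", "true"
   ignoreFailures = true
   useTestNG {
     useDefaultListeners = false
     includeGroups 'smoke'   // placeholder TestNG group
     listeners << 'cuanto.adapter.listener.testng.TestNgListener'
   }
 }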

Stuff you need (all latest versions apart from Java):

  1. TestNG
  2. Gradle
  3. Cuanto  (trackyourtests.com)
  4. Webserver (Jetty)
  5. Database (MySQL)
  6. Java

Getting TestNG results reporting in Bamboo (in a Gradle build)

To get the TestNG parser working in Bamboo, do the following:

  1. In the top-level build.gradle:
      • set useDefaultListeners = true; this produces the testng-results.xml file(s)*
      • put everything in a code block under useTestNG()
  2. Use an Ant-style pattern, **/testng-results.xml, as the file locator in Bamboo’s TestNG parser configuration

Here is the build.gradle:
test {
  useTestNG() {
    useDefaultListeners = true
    ignoreFailures = true
    suites 'src/test/resources/testng.xml'
  }
}

* This works for a multi-project Gradle setup, as the testng.xml files are located in each sub-project directory.
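
As a sketch, the same settings can also be pushed down from the root build.gradle so that every sub-project writes its own testng-results.xml for Bamboo’s **/testng-results.xml pattern to pick up (this assumes each sub-project applies the java plugin and keeps its testng.xml at the same relative path):

 subprojects {
   test {
     ignoreFailures = true
     useTestNG() {
       useDefaultListeners = true   // each sub-project writes its own testng-results.xml
       suites 'src/test/resources/testng.xml'
     }
   }
 }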

I answered my own question on the Atlassian site: https://answers.atlassian.com/questions/236699/testng-not-picking-up-test-results-xml

TestNG – How to create a single report for tests in Gradle

If you are using TestNG in a Gradle multi-project (sub-project) setup and want to collect all of your TestNG HTML reports into one folder, this is how I did it with the latest versions of TestNG and Gradle (1.8).

/////////////////////////////////////////////////////////////////////////////////////
apply plugin: 'java'

subprojects {

  // override the java plugin test task
  test {
    // runs tests across all sub-projects; good for a CI build process
    ignoreFailures = true
    // TestNG support in the Gradle java plugin
    useTestNG()
  }

}

// new task, outside of the subprojects block, which collects the test results from all
// sub-projects into a single top-level report directory
task testReport(type: TestReport) {
  destinationDir = file("reports/test")
  subprojects {
    reportOn tasks.withType(Test)
  }
}
/////////////////////////////////////////////////////////////////////////////////////

ALM Jenkins Plugin

HP have released a Jenkins plugin for ALM, UFT, QTP and ST versions 11 and up.

This is a good move on HP’s part, allowing automation to be run as part of a CI (or even a CD) process. I think the outcome will be better for web/service testing than for GUI testing, since GUIs do not typically support non-breaking changes the way, for example, web services do.

This goes some way toward increasing HP’s value proposition in the agile development process. But I have to wonder: if you are testing REST or web services in conjunction with a web GUI, would you even be looking at ALM, UFT or QTP as a solution?


Code Analysis for QTP Code

I’ve been interested in bringing some form of static code and language analysis to test automation for quite some time and have never found a suitable tool (vendor or open source) for HP QTP, which uses VBScript as its development language (for better or for worse).

Recently I found a tool, Test Design Studio, which supports this out of the box. Much like in the development community, this allows automation teams to identify and eliminate technical debt (e.g. cyclomatic complexity), while at the same time supporting targeted code reviews and providing training opportunities, with the ultimate goal of producing stable and efficient automation code. It also has the benefit of driving consistency of implementation and promoting mobility for team members. I would also think that having this process in place would enable governance around code quality when outsourcing or partnering with an external testing vendor.

Some examples of the built-in rules for language analysis:

  1. Promote use of the ‘Option Explicit’ statement to help enforce language rules
  2. Cannot make duplicate declarations of variables in the same scope
  3. Must use the various Exit statements in the proper context
  4. ‘Select Case’ constructs must have at least one ‘Case’ statement
  5. Use of ReportEvent for verification purposes is not allowed within the test
  6. The timeout for the ‘Exist’ method must be explicitly defined and should not exceed 10 seconds

The tool is extensible enough that new rules can be added, and the vendor is very responsive about this. Maybe they will add a framework to do it?

A sample deployment would involve buying one license and running the tool across all your ALM QTP assets on a weekly basis. Keeping track of the history will also allow you to identify trends and/or heatmaps.

I have not seen or heard whether HP are thinking about this for the QTP or UFT roadmap.

Link to vendor

The joy of JMeter Part 1

JMeter is an open source testing tool that I have used recently, and I felt like putting together some thoughts on its use and some general performance testing guidelines based on what I’ve seen be successful. I’m going to avoid the open source versus vendor tool discussion, having successfully used HP, Borland and Parasoft tools and, most recently, JMeter. Every organization and situation is different, but the tool choice needs to take into account factors such as the SDLC, release management process, technology stack, product type (B2B/B2C or internal app), team dynamics, etc. For example, consider testing a mobile app as opposed to an internal trading system.

My experience lends itself to testing prior to production release, but a growing trend in performance testing involves running tests in the production environment, leveraging a set of ‘cloud’ based assets. One company I like and have talked to is SOASTA; they are a great group of people and have developed a very impressive testing solution, both from a GUI design and a feature set perspective. This is a powerful and useful technique and hopefully I can get some exposure to it soon…

First off you need to plan, plan, plan. Really, this is the most important aspect of any testing effort (and of anything in life really worth doing…). If you hear “well, let’s test until it breaks, that’s easy isn’t it?” I advise running for the hills! If that isn’t possible, then getting everyone together and deciding on the objectives will really pay off when you are in the throes of testing the release.

Here are what I see as some high-level discussion points for upcoming posts on this thread:

  • Performance Objectives
  • Test scenario and data design
  • Monitoring
  • Functional test while you go!
  • Know thy (technology) stack!
  • JMeter + plugins
  • JMeter sampler development
  • Test environment
  • Outsourcing your test lab
  • Running the tests
  • Compiling the run data
  • Reporting
  • Injecting performance testing into your SDLC

Part 2 to follow on setting the performance objectives, which will drive the entire effort.

quality signals…

An interesting post on Testing 2.0 from the folks over at Google, using signals from various sources to predict code quality. I’ve been thinking about how to create a composite picture of quality from sources like Sonar (static analysis) and testing (defect density, defects before and after release).

Anyway, link to the Google Testing Blog below:

http://googletesting.blogspot.com/2012/08/testing-20.html