Breaking down an automated test

In the context of GUI functional automation, here is what I think an automated test is made up of:

  1. Application of test data
  2. GUI recognition
  3. Driving code
  4. Assertions
  5. Error recovery
  6. Reporting

These concepts apply to other types of test automation too, e.g. unit testing.

Other automation features:

  1. Setup and teardown activities (see the sketch after this list)
  2. Function libraries
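
To make the setup and teardown idea concrete, here is a minimal sketch assuming pytest and Selenium (the URL and title check are placeholders, not from a real application):

```python
import pytest
from selenium import webdriver

@pytest.fixture
def browser():
    # Setup: launch a fresh browser before each test
    driver = webdriver.Chrome()
    yield driver  # the test body runs at this point
    # Teardown: runs after the test, whether it passed or failed
    driver.quit()

def test_homepage_title(browser):
    browser.get("https://example.com")  # placeholder URL
    assert "Example" in browser.title
```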

Test Data

Data-driven testing has been around a long time and is a valuable technique for iterating over a large number of data permutations. For example, placing orders in an order capture application to cover various asset types and order attributes like buy/sell, settlement date, price, etc. I’ve seen data stored in many ways, e.g. xls, csv, xml, and also in databases. I would usually go for a well-structured and labeled spreadsheet, as this allows other people to easily create and maintain the data over time. For example, a business analyst could partner with the automation engineer to own the data input part of the automation.
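
As a rough sketch of the data-driving idea in Python with pytest: each labeled row of the spreadsheet (exported here as a hypothetical orders.csv) becomes one test iteration. The column names and the place_order stub are illustrative, not from any real system.

```python
import csv
from dataclasses import dataclass
import pytest

@dataclass
class OrderResult:
    status: str

def place_order(side, asset_type, settlement_date, price):
    """Stub standing in for the GUI driving code (hypothetical interface)."""
    return OrderResult(status="FILLED")

def load_rows(path):
    """Read one test case per row from a labeled CSV export of the spreadsheet."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# Each row is one permutation; the labeled columns keep the data readable
# for a business analyst who owns the spreadsheet.
@pytest.mark.parametrize("row", load_rows("orders.csv"))
def test_place_order(row):
    result = place_order(row["side"], row["asset_type"],
                         row["settlement_date"], float(row["price"]))
    assert result.status == row["expected_status"]
```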

GUI Recognition

The automation tool needs to know how to interact with the application under test, and of course each automation tool implements this differently, whether it comes from a vendor or open source. What I look for in a tool: a centralized repository (for maintenance), the ability to identify objects (think object spy), a defined process around maintenance (GUIs change a lot!), and support for baselining against GUI versions (argh, we need to test the rolled-back version!).
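
Each tool does this differently, but a bare-bones home-grown version of a centralized repository might map logical names to locators in one place, so GUI changes are maintained in a single file that can also be baselined per GUI version. The element IDs below are made up:

```python
from selenium.webdriver.common.by import By

# Centralized object repository: logical names -> locators.
# When the GUI changes, only this mapping needs updating, and the file
# can be versioned alongside each GUI release (rollbacks included).
OBJECT_REPOSITORY = {
    "order_ticket.side":   (By.ID, "orderSide"),              # made-up IDs
    "order_ticket.price":  (By.ID, "orderPrice"),
    "order_ticket.submit": (By.CSS_SELECTOR, "button.submit-order"),
}

def find(driver, logical_name):
    """Resolve a logical name to a live GUI element."""
    by, locator = OBJECT_REPOSITORY[logical_name]
    return driver.find_element(by, locator)
```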

Driving Code

This is where the main part of the coding for the test occurs. For a GUI test this involves writing code to manipulate the GUI to perform actions, apply data, apply assertions, and report results.
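
Continuing the Selenium-flavored sketches above (and reusing the hypothetical find() helper from the repository example), driving code for an order ticket might look like:

```python
def place_order(driver, side, price):
    """Drive the order ticket: apply data, perform actions.
    Field names come from the made-up repository above."""
    find(driver, "order_ticket.side").send_keys(side)
    find(driver, "order_ticket.price").send_keys(str(price))
    find(driver, "order_ticket.submit").click()
    # Assertions and reporting on the outcome would follow here.
```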

Assertions

A term borrowed from unit testing, but one I like, and it is the whole point of a “test”. GUI automation falls under the banner of black-box testing, which tests system behavior by applying defined inputs and comparing outputs to expected values. Assertions include comparing data values, data types, GUI properties, etc. How assertions record a failure should be considered too: the failure should be specifically flagged as a failed assertion and not, for example, as a test code issue.
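
One rough way to keep that distinction, sketched below: raise a dedicated exception type for assertion failures so reporting can separate them from crashes in the automation code itself. (Test runners like pytest already distinguish failures from errors along similar lines.)

```python
class AssertionFailure(Exception):
    """A genuine test failure (expected vs. actual mismatch), as opposed
    to a bug or crash in the automation code itself."""

def assert_equals(expected, actual, label):
    """Compare a GUI value against the expectation and flag mismatches
    explicitly as assertion failures."""
    if expected != actual:
        raise AssertionFailure(f"{label}: expected {expected!r}, got {actual!r}")

# e.g. assert_equals(row["expected_status"], status_field.text, "order status")
```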

Other types of assertions/tests you get for free in GUI testing: if a GUI workflow has unexpectedly changed, the test will fail and you might have found a bug.

Error Recovery

You have created a test run of 100 tests and have left for lunch; you come back and find the run failed on the first test. Enjoy your lunch? Well, in order to relax and really enjoy lunch, make sure that your automation is able to move on to the next test in the sequence after suffering some type of failure, and failures run the gamut from hung browsers to unresponsive Java apps to automation bugs (yes, we can have bugs in our code too!).
We refer to this as recovery; with careful attention, automation gets more robust the more it gets run. And remember not to hide/swallow any failures that could be bugs.
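
A minimal sketch of that recovery loop: record every failure (never swallow it), reset to a known state, and carry on to the next test. The restart_application hook is hypothetical.

```python
def restart_application():
    """Hypothetical recovery hook: kill and relaunch the application
    under test so the next test starts from a known state."""

def run_suite(tests):
    """Run every test even if earlier ones fail; report all failures."""
    results = []
    for test in tests:
        try:
            test()
            results.append((test.__name__, "PASS", None))
        except Exception as exc:  # hung browser, unresponsive app, automation bug...
            results.append((test.__name__, "FAIL", exc))  # recorded, not hidden
            restart_application()
    return results
```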

Reporting

Everyone is interested in seeing the failed tests, as they are an indicator of the application not operating as expected. Automation reporting should highlight failed tests, provide supporting evidence to hand to the developer (because the test must be wrong, not my code 😉), like screenshots of error messages, videos, and stack traces, and allow drill-down into the test step hierarchy.
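
As one sketch of evidence gathering, assuming pytest plus the browser fixture from earlier: a conftest.py hook that saves a screenshot whenever a test fails.

```python
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """After each test phase, attach a screenshot if the test body failed."""
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        browser = item.funcargs.get("browser")  # the fixture from the setup sketch
        if browser is not None:
            browser.save_screenshot(f"{item.name}.png")  # evidence for the report
```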

Reusable Libraries

Reusing code is a fairly well-known (and basic) development best practice, aimed here at allowing less experienced testers to wire together automated tests without having to re-invent the wheel. The common libraries can be owned by a central automation team; it depends on your organization’s structure. Automation reusable code usually comes in two flavors: common and application-level; and of course everyone has different terms!

Common code covers things like accessing data sources, data transformations, date handling, data-driving support, etc. Application common code captures common workflows for an application or family of applications. Think of someone trying to develop a test from scratch who also has to create automation for a downstream application. Reusable code reduces the complexity of developing/programming and empowers a wider range of people to develop automated tests; I would caution, though, to also provide training and guidelines if you are trying this. And I would emphasize a good level of documentation: usage guidelines, cookbooks, template tests, etc.
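
A rough sketch of the two flavors, with made-up module and function names (the application-level call builds on the earlier driving-code sketch):

```python
# common/dates.py: "common" flavor, generic and application-agnostic
from datetime import date, timedelta

def next_business_day(start: date) -> date:
    """Roll forward to the next weekday (holiday calendars omitted for brevity)."""
    day = start + timedelta(days=1)
    while day.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        day += timedelta(days=1)
    return day

# orderapp/workflows.py: "application" flavor, shared workflows for one app family
def book_order(driver, row):
    """One reusable call for test authors, instead of scripting every
    screen interaction from scratch."""
    place_order(driver, row["side"], float(row["price"]))  # driving-code sketch
    # ...confirmation/blotter checks for this app family would go here
```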

A specialized form of reusable code is an application invoke or launch: a call that handles starting an application, performs some basic checks, and triggers recovery if necessary.
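
Such a launch call might look like this sketch (the URL and readiness check are assumptions):

```python
from selenium import webdriver

APP_URL = "https://orderapp.example.com"  # made-up URL

def launch_order_app(retries=2):
    """Start the application, run a basic readiness check, and retry
    (i.e. trigger recovery) if the GUI does not come up cleanly."""
    for attempt in range(retries + 1):
        driver = webdriver.Chrome()
        driver.get(APP_URL)
        if "Order Capture" in driver.title:  # basic health check
            return driver
        driver.quit()  # recovery: discard the broken session and retry
    raise RuntimeError(f"application failed to launch after {retries + 1} attempts")
```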

That’s it for a quick overview of automation. I wanted to end with two thoughts on what I have found makes automation successful:

  • Use automation in the context of a defined set of manual tests, as they serve to set the goals, data, types of assertions, etc. for the automation. Plus, you can hand them over to someone else to implement!
  • Strive to design automated tests to be independent of each other; of course there are situations where tests need to be stood up in a certain state. Independence is particularly helpful when you are running tests concurrently on a large number of machines.

Cheers!
