  • Test Automation Principles (July 19, 2011)

    Several years ago, when I started with Test Automation, I hadn’t read any principles of Test Automation, and wasn’t even aware that any existed. Initially I couldn’t get the tests to run reliably release over release. With a small number of tests it wasn’t too bad, but as the tests grew I would pretty much babysit the execution. I would watch the run and, as soon as it failed, start investigating that failure. Once the problem was determined, whether it was a product bug or a test issue, I would resume running the tests from that point onwards. I also found myself running the tests in a certain order, worried about removing a test because something else could fail as a result. It didn’t feel very fulfilling, even though I had my tests “automated”.

    Over time I found that building tests in a certain way made them more reliable. I also started discussing with others what they did to keep their test suites robust, to validate what I had learnt the hard way. Unfortunately, searching for test automation principles on the internet wasn’t as useful as I had hoped. Some of those principles, like “minimize untestable code”, seemed like product development principles rather than test automation principles. Others, like “verify one condition per test”, are good for unit testing but impractical for functional/system tests. I also found some key principles missing from those lists. So, I am taking a stab here at describing what I think should be the Principles of Test Automation.

    Choose the right type of test automation methodology

    I broadly categorize automated testing as Unit, Component and System testing.

    Unit testing

    Unit testing is validation of the smallest part of the application. In procedural languages this smallest part is a function; in object-oriented languages it is a method in a class.

    Since the focus of a unit test is very small, these tests can cover the entire code base, even code that isn’t exercised by user flows. They also need the least amount of setup and teardown, so fewer things can go wrong, making them the least brittle. The combination of the fewest assertions with the least setup and teardown also makes them the fastest. For these reasons, you should strive to get as much coverage as possible from unit tests.

    There are several unit testing frameworks; in Java, for instance, the commonly used ones are JUnit and TestNG. What sometimes causes confusion is that people use these frameworks to write tests that test a lot more than a function or a method. For example, you could write a UI test using JUnit; that does not make it a unit test. The advantages I described earlier about unit testing pertain to tests that validate whether a function or method works correctly.
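    To make the distinction concrete, here is what a genuine unit test looks like, sketched with Python’s stdlib unittest (which follows the same xUnit structure as JUnit). apply_discount is a made-up function standing in for the “smallest part” under test:

```python
import unittest

# Hypothetical function under test: the "smallest part" of the application.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test method validates one behavior of the single function under test.
    def test_half_off(self):
        self.assertEqual(apply_discount(80.0, 50), 40.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(80.0, 150)
```

    Run with `python -m unittest <file>`. No database, no UI, no network: the small scope is exactly what makes such tests fast and reliable.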

    Component testing

    When there are several components in the system, testing each component in isolation is called component testing. Most people are aware of the design practice of separating the Model, View, and Controller in a system. If you were to test the entire system integrated together, you would not be able to test capabilities in the Controller, for instance, that the Model and View weren’t utilizing. So, to test the Controller thoroughly, you would have to write component tests without the Model and View.

    The problem with writing tests against a component is that you need some system to provide the input to the component and/or consume and validate its output. In the real world, that is what the other, dependent components would do. But for component testing, you have to add mocks or simulators that mimic the expected behavior of those dependencies. The other big benefit of this is that it removes the possibility of failures due to bugs in those components.
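    As a sketch of the idea, the hypothetical ReportService below is tested in isolation by handing it a hand-rolled fake in place of its real data-store dependency:

```python
# Component under test: formats a summary using whatever store it is given.
class ReportService:
    def __init__(self, store):
        self.store = store  # dependency injected, so a fake can stand in

    def summary(self, user_id):
        orders = self.store.orders_for(user_id)
        return f"{len(orders)} orders, total {sum(o['amount'] for o in orders)}"

# Hand-rolled fake that mimics the real store's expected behavior.
class FakeStore:
    def orders_for(self, user_id):
        return [{"amount": 10}, {"amount": 5}]

service = ReportService(FakeStore())
print(service.summary("u1"))  # -> "2 orders, total 15"
```

    A bug in the real store can no longer fail this test; only a bug in ReportService itself can.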

    As you can tell, component testing gives you more coverage of various execution paths than functional testing but not as much statement level coverage as unit testing.

    System or Functional testing

    The purpose of System testing, also called Functional testing, is to test the entire system, with all its parts interacting, from an end user’s perspective.

    When designing System tests and the frameworks that execute them, you should be thinking black box testing. If the end user never interacts with the database the application uses, for instance, the system tests ideally should not be validating data in the database; they should validate what the users see. This prevents false positives when the internals of the application are changing in ways the end user shouldn’t notice.

    When running reliably, these tests tend to have the highest ROI, especially when they exercise the use cases that bring the highest value to the end user or generate revenue for the application/service. The challenge, though, is to keep Functional tests reliable. Besides being brittle, they are slow, making them hard to scale, so the cost of running these tests is also high. For these reasons it’s best to begin by automating the high value flows and, as you add more tests over time, understand where the point of diminishing returns is for these types of tests.

    Keep the tests short

    Most of the time people write a unit test that tests one function, but they frequently forget that a unit test should contain a single assertion. They add multiple assertions to a single test, each passing a different input data combination to the function under test. In that case, when any assertion fails, the rest of the assertions don’t run, so the results don’t provide granularity into all the failures that exist.

    For the same reason, keep the number of assertions as low as possible in component and system tests as well. Also, the longer the test, the more brittle it gets. The main reason people tend to write long tests is that they want to cover an entire use case. They also think, “while we are at this step, we can validate a few other things”, as you normally would with manual testing. Don’t fall for these temptations; break your scripts into multiple tests with as few steps as possible in each.
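    The principle can be sketched like this, with is_valid_username as a hypothetical function under test. Each check lives in its own test method, so one failure can never hide another:

```python
import unittest

# Hypothetical function under test.
def is_valid_username(name):
    return name.isalnum() and 3 <= len(name) <= 12

class UsernameTest(unittest.TestCase):
    # One assertion per test: if the short-name check fails, the
    # punctuation check still runs and still gets reported.
    def test_accepts_simple_name(self):
        self.assertTrue(is_valid_username("alice"))

    def test_rejects_short_name(self):
        self.assertFalse(is_valid_username("ab"))

    def test_rejects_punctuation(self):
        self.assertFalse(is_valid_username("al!ce"))
```

    Had all three checks lived in one test method, a failure in the first would mask the other two results.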

    Keep tests independent

    No test should be dependent on another.

    This isn’t very hard to follow, but it is one of the most common mistakes people make. When a use case has several steps, people often write every step as a separate test and then make the tests dependent on each other. The problem with this approach is, of course, that you cannot run any one test in isolation, and you can’t easily choose a category of tests based on which area of the product is changing. You also need to track the dependencies between the tests. One way I have seen people enforce the ordering, time and time again, is by simply naming the tests in alphabetical order. This leads to several problems when the workflow changes and the tests have to be reworked, or when your tests scale and you realize you can’t easily run them concurrently.

    The right solution is to ensure that each test is independent and validates one step. Often you can avoid the previous steps in the workflow by setting up the dependent data in the database baseline. When that is not possible, you can make those steps part of the setup of the test.
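    A minimal sketch of that approach, using an invented in-memory App in place of a real product (a real suite would create the user from the baseline data or through an API call in setUp):

```python
import unittest

# Toy stand-in for the application under test.
class App:
    def __init__(self):
        self.users = {}

    def create_user(self, name, role):
        self.users[name] = role

    def can_add_content(self, name):
        return self.users.get(name) == "admin"

class AddContentTest(unittest.TestCase):
    def setUp(self):
        # Each test builds its own prerequisites, so it can run alone,
        # in any order, or concurrently with other tests.
        self.app = App()
        self.app.create_user("admin1", "admin")

    def test_admin_sees_add_content(self):
        self.app.can_add_content("admin1")
        self.assertTrue(self.app.can_add_content("admin1"))
```

    Because the admin user is created in setUp rather than in a preceding test, this test no longer cares whether any other test ran first.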

    If you make them part of the setup, I prefer that the reports don’t show their failures as test failures. If I can’t show them as “not run” for some reason, I prefer to leave them out of the report altogether. This way you avoid analysis paralysis, and only the steps that really failed get investigated.

    Tests should be idempotent

    If tests can be executed over and over again with the same results, they are considered idempotent. If tests aren’t designed to be idempotent, you have to take a lot of measures to ensure they are executed just once and the test environment is then rebuilt. This makes it cumbersome to reproduce failures while debugging and can lead to undesired inefficiency and/or inflexibility in the way tests are executed.
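    One common way to keep a test idempotent is to generate unique data per run and clean up afterwards. A small sketch, with an invented registry standing in for persistent state shared across runs:

```python
import uuid

# Stand-in for persistent state that survives between test runs.
registry = {}

def register(name):
    if name in registry:
        raise ValueError("already registered")
    registry[name] = True

def test_register_new_name():
    # A unique name per run keeps the test idempotent: rerunning it
    # never collides with data left behind by a previous run.
    name = f"user-{uuid.uuid4().hex}"
    register(name)
    assert name in registry
    del registry[name]  # teardown: leave the environment as we found it

test_register_new_name()
test_register_new_name()  # safe to run again and again
```

    A test that hard-coded `register("user-1")` would pass once and fail on every subsequent run until the environment was rebuilt.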

    Tests should be deterministic

    Having nondeterminism in tests undermines any test automation effort because nobody trusts the results. One easy way to tell that a test is nondeterministic is if it fails because of a false positive. A false positive is when a test case fails not because of a bug in the System Under Test (SUT) or Application Under Test (AUT), but because of a problem in the test or the framework/tools it is built upon.

    What most people don’t realize is that tests that never fail due to false positives can also be nondeterministic. Imagine the AUT is an application that allows you to make purchases using credit cards or reward points. If the tests were written so that a purchase uses the reward points on a test credit card when there are enough points, and credit otherwise, the test wouldn’t fail, because one or the other scenario would work. But you wouldn’t know for sure whether both code paths were covered. If you had a million points before running the suite, every test might run through the rewards code path, failing to catch potential bugs in the credit code path. A deterministic test, on the other hand, always exercises one code path or the other, no matter how many times you run it.
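    A sketch of the difference, using a made-up pay function: each test pins the points balance, so the code path exercised is fixed rather than depending on whatever balance happens to be left over from earlier runs:

```python
# Hypothetical checkout: pays with points when the balance covers the
# price, otherwise charges the credit card.
def pay(price, points_balance):
    if points_balance >= price:
        return ("points", points_balance - price)
    return ("credit", points_balance)

# Deterministic tests: the balance is set explicitly per test, so the
# rewards path and the credit path are each guaranteed to be covered.
def test_points_path():
    method, remaining = pay(100, points_balance=150)
    assert method == "points" and remaining == 50

def test_credit_path():
    method, remaining = pay(100, points_balance=20)
    assert method == "credit" and remaining == 20

test_points_path()
test_credit_path()
```

    A nondeterministic version would read the current balance from the environment and simply accept whichever branch it happened to take.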

    Minimize incidental test coverage

    Incidental test coverage means that you are exercising code of the SUT that you aren’t intending to test. This happens when you cover code simply by performing the steps necessary to reach the validation point (assertion) that you really want to test. If you use code coverage tools, this gives you a false sense of confidence that you have good coverage. In such situations, you should add dedicated tests for the areas of your product that you are only covering as part of setup or teardown steps.

    Posted by Rahul Poonekar in: Concepts

    7 responses to “Test Automation Principles”

    1. Willy says:

      Hi Rahul,
      I don’t understand the part about “Keep tests independent”. In the beginning you wrote:
      “When a use case has several steps, often people write every step as a separate test and then make the tests dependent on each other”
      I understand that it is wrong to break each step of a use case into several tests because that will make the tests dependent on each other.

      But later in the same topic you wrote:
      “The right solution is to ensure that each test is independent and validates one step.”
      Perhaps I’m not interpreting it correctly.
      Thanks in advance.

    2. Rahul Poonekar says:

      That is a great question Willy. Let us take a fictitious use case:
      create new admin user -> login using new admin user -> validated admin user sees “Add content” portlet

      For this use case when I said “often people write every step as a separate test and then make the tests dependent on each other”, I meant they would create tests like this:
      test1 : create new admin user (through the UI)
      test 2 dependent on test 1: login using new admin user (that was created in test 1)
      test 3 dependent on test 2: validated admin user sees “Add content” portlet (considering we already are in the dashboard after logging in in test 2)

      When I say, “The right solution is to ensure that each test is independent and validates one step.”, I mean the tests should be created as:
      test 1: create new admin user (through the UI)
      test 2: login using new admin user (However, the user used here is not the one created in test 1; the user could be part of the test data baseline. The other option is to use the most efficient way to create the user, like maybe making a REST call, in the setup of the test. Of course, with this option you need to add another test to ensure the user created using the most efficient way, i.e. the REST call, is also tested as a separate test case.)
      test 3: validated admin user sees “Add content” portlet (Here again the setup of the test would include logging in. So, we may end up doing exactly what test 2 did, but it would be part of the setup.)

      As mentioned in the post, this not only allows you to run any test, but when you see the test results report, you know exactly which test scenarios failed. For instance, if only new user creation through the UI is broken, in the first scenario all 3 tests would fail, whereas in the second case only test 1 would show up as a failure.

    3. Willy says:

      These principles have been enlightening for me. I will continue reading the rest of the website. Thanks for your time Rahul.

    4. Mark says:

      Hi Rahul,

      I don’t agree with the way you suggest implementing the fictitious use case:
      create new admin user -> login using new admin user -> validated admin user sees “Add content” portlet

      Your “test 2” is not validating “login using new admin user”, rather it is validating “login as an existing user”.

      I would firstly question the use case: Should the third step be “validated admin user sees “Add content” portlet” or “validated NEW admin user sees “Add content” portlet”?

      If the former, it doesn’t explicitly state you need to use the new user… so I would use an existing user I know should work (perhaps a user database containing this user gets attached to the AUT during this test’s setup step).

      Test 1:
      Setup step: force user database into a known state (database shouldn’t contain the admin user you’re entering in this test)
      Step 1: Create new admin user (through the UI)
      Step 2: Login using new admin user
      Step 3: Validate admin user sees “Add content” portlet

      If the latter:
      Test 1:
      Setup step: force user database into a known state (database shouldn’t contain the admin user you’re entering in this test)
      Step 1: Create new admin user (through the UI)
      Step 2: Login using new admin user

      Test 2:
      Setup step: force user database in a known state (an existing admin user which is previously known to work is in this database)
      Step 1: Login using pre-existing admin user
      Step 2: Validate admin user sees “Add content” portlet

    5. Hanson says:

      What possible advantages does automation have over the traditional method?

    6. Tim says:

      Depends on your situation, and how easy it is to automate the tests, but you get a reliably consistent check of routine (boring) functionality that can be run much more frequently and cheaply than manual testing – catching bugs earlier and reducing the cost of fixing them – thereby freeing up the manual testers to test the harder parts of the system – which probably need more focused testing – and to test more of the system in ways that machines wouldn’t: humans often get hunches about things that might break. Humans can perform ‘exploratory testing’ and give UI feedback, for example: machines can’t.

    7. Usama Nazir says:

      Hi Rahul!

      You have done a great job. These principles have been enlightening for me; you have opened new horizons of test automation for me.

      Thank you
