  • A successful QA team (March 24, 2010)

    Why assess team success

    It's fairly obvious that performant teams go a long way toward making a business profitable. Most companies focus on reviewing the performance of individual employees. That is valuable for many reasons; however, if it is all you do, it is a myopic view of performance. The focus of individual reviews is to assess each employee's performance and find ways to make them more successful. This process does not assess the performance of each team, and it does not do enough to help teams achieve their goals. If a team success assessment were done for the first time, you would see patterns where

    1. top individual performers tend to be clustered in certain successful teams
    2. the bar for top performers in successful teams is higher than the bar in teams that are struggling

    If this situation exists in an organization and nothing is done about it, successful teams continue to be successful while unsuccessful teams continue to underperform. If the underperforming team is small, and/or its impact on the overall business is minor, then a potentially rewarding business initiative is eventually scrapped on the assumption that it was a bad idea. If, however, the team makes a significant impact on the bottom line, the underperformance is eventually recognized, but it still isn't identified as the root cause for a very long time – usually not until there is an upper-management change. This is because the managers do not realize that they are lowering the bar for their direct reports, and they look for other reasons for the lack of success.

    If there were to be a team performance assessment done across the organization, then the managers of under-performing teams would actually recognize the problem and potentially do something to fix it.

    How to assess team performance

    Qualitative assessment

    As you read the Quantitative measurement section later, you will see that the cost and complexity of quantitative measurement is usually very high. Because of this, it is best to first get a general feel for team performance by considering:

    1. whether the consumers of the team's output, be it a product or a service, felt it met their standards.
    2. whether those standards were adequate and, if not increasing over time, at least consistent release over release.
    3. whether the team was not burnt out.
    4. whether team members felt their personal and professional development needs were met.

    If these indicators are encouraging, then it's quite likely that most of the teams are performing well and the organization as a whole can be considered successful.

    If, however, these indicators seem to highlight a problem, then you need to seriously consider that one or more teams are under-performing, even if focal reviews don't seem to indicate a problem. Once this is acknowledged, organizations usually consider quantitative performance measurement.

    As for frequency, this assessment should generally be done annually.

    Quantitative team performance measurement

    When management attempts to measure a QA team’s performance quantitatively, the most common metrics they use are:

    1. Number of test cases executed per person per release
      The idea here is that the more test cases executed per person on the team, the more efficient the team. This is a fairly simple concept; however, it works only if the duration between releases is fixed, the test cases are designed and defined consistently, and the types of tests (web-based, API, mobile, etc.) are the same. As you can guess, this metric probably won't work for comparing the performance of different teams in a company, but you may have more luck trending a single team's performance over time.
    2. Dev to QA ratio
      The number of developers for every QA engineer on a team is a metric most relevant during periods of rapid growth, downsizing, or reorgs. It can be used to gauge how well teams are staffed relative to the industry. For instance, for teams that focus on browser-based testing, the ratio should be around 2.5:1 to 3:1. If the actual ratio turns out to be, say, 1.5:1, this could be an indication of underperformance.
      On the one hand, you must understand that this ratio is quite subjective. Keep in mind that:

      • it should not be taken to the extreme. A team with a 2.25:1 dev-to-QA ratio is not necessarily leaner than a team with a 2.5:1 ratio. A small difference like that can be attributed to something unrelated to performance, such as attrition, product complexity, or simply team size. For instance, a seven-member team with 5 devs and 2 QA cannot prove itself to be “leaner” than a 13-member team with 9 devs and 4 QA using this metric.
      • a team that focuses on browser or GUI testing cannot be compared with, for instance, a mobile-device testing team or a backend testing team. The complexity of executing tests does vary with the interface they have to be run against.

      On the other hand, if the ratio in the team being assessed is very different from the industry standard, don't simply discard this metric when you hear the “but what we do is unique” defense. More often than not, teams that offer this defense don't actually do anything unique enough to explain the disparity.

    3. Number of bugs found per person, or team, per release
      This metric should be used to measure deviation in the number of bugs found between people on the same project, or for the same team across multiple projects. Then, by understanding the root cause of these deviations, we can reach conclusions about how to estimate projects better, which people to put on which projects, what training is required, what process changes would result in fewer duplicate or not-a-bug reports, and so on.
      The common pitfall with this metric is that the root cause of the deviations is often not well understood, and people's performance is assumed to be the cause. This ends up fostering competition between individuals over who can file the most bugs, which leads to more overhead for triaging duplicates and less communication between team members.
    4. Percentage of automated test cases executed per release
      The idea here is that the more automated your tests are, the less manpower is needed to run them, making your team more efficient. Even though this is a widely used measure, keep the following in mind:

      • The automation solution should be fast: the time it takes to run the automated tests should be less than the time humans would take. This is not as much of a no-brainer as you might think!
      • The tests need to be robust. If a large number of tests fail because of script failures and not because of product bugs, then a lot of time needs to be spent analyzing the results and fixing the scripts.
      • The granularity of manual and automated tests should be the same. In a very competitive environment, just to bump up the numbers, automated tests may be described at much finer granularity than manual tests (in the extreme, every assertion tracked as a test case, while long end-to-end use cases are each tracked as a single manual test case).
      This would be the most common and talked-about metric in QA, except that people often discuss the percentage of automated tests rather than the percentage of automated tests executed per release or phase of product development. There have been several instances where tests are automated but a large portion of them are never executed. This is often the case when the people who automate the tests are not responsible for running them, and/or when their performance is assessed simply by how many test cases they automate.

    5. Percentage of bugs found in production vs pre-production
      At the end of the day, this is the metric the business cares about most. The idea is to discover as many bugs as possible before the product is pushed to production. Fixing bugs in production always costs the company significantly more than fixing those caught beforehand. The cost is not just the manpower needed to push a fix, but also things like penalties for breaching SLAs, lower customer satisfaction resulting in fewer referenceable customers, and so on.
      So even though tracking this metric is very important for every company, the data needs to be reviewed in a post-mortem-style forum to understand what caused the issues. What we need to know is how many of these slips QA is really accountable for, as opposed to the business setting inherently risky milestones, a lack of resources committed to the project, third-party dependencies, and so on.

    Even if you measure the metrics correctly (which can be daunting in itself), any single metric looked at in isolation will have some shortcoming that makes it an unreliable measure of a team's success. By combining several of them, though, the reliability increases.
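    One way to put that combination into practice is to check each metric against a threshold and raise a concern only when several metrics agree. The sketch below uses hypothetical readings and illustrative thresholds; the specific cutoffs are assumptions for the example, not recommendations.

```python
# Hypothetical per-release metric readings for one team.
metrics = {
    "tests_per_person": 180.0,
    "dev_to_qa_ratio": 1.5,
    "automation_executed_pct": 40.0,
    "production_escape_pct": 12.0,
}

# Illustrative thresholds: each flag alone is weak evidence.
flags = {
    "tests_per_person": metrics["tests_per_person"] < 100,
    "dev_to_qa_ratio": metrics["dev_to_qa_ratio"] < 2.0,
    "automation_executed_pct": metrics["automation_executed_pct"] < 50.0,
    "production_escape_pct": metrics["production_escape_pct"] > 10.0,
}

# Only investigate when two or more independent metrics point
# in the same direction, per the reliability argument above.
concerns = [name for name, flagged in flags.items() if flagged]
if len(concerns) >= 2:
    print("Investigate:", ", ".join(concerns))
```

    With these sample numbers, three of the four metrics trip their thresholds, which is a much stronger signal than any one of them alone.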

    In closing

    Most organizations at one point or another consider assessing a QA team's performance. Even though I advocate this practice, I do not recommend being overzealous in the pursuit of excellence by taking quantitative performance measurement too far. A qualitative assessment should be sufficient to detect whether a team is successful. If the team is not successful, quantitative assessment can help by showing trends release over release; however, getting the quantitative assessment right is hard, and if you can't get it right, don't despair. As mentioned above, we usually detect a team's lack of success so late that fixing it becomes really hard. So, instead of spending too much energy trying to make quantitative measurement work for your company, focus your energy on actually taking steps to help the team become more successful.

    Posted by Rahul Poonekar in: Concepts
