Interjection: Coding for QA

by jesse


Some random thoughts that have been brewing in my head since flying in on Wednesday about how to build out a logical series of tests and libraries within the Quality Assurance space. (Sorry, this is a bit of a stream-of-consciousness post.) First, let's cover the assumptions you need to make:

  • All tests, and test cases, will eventually be deprecated
  • Every test has some value (even deprecated tests)
  • Having a clear strategy for deprecating tests, and then migrating them into a bin of "functional regression tests", is key (see the sketch after this list)
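
To make that concrete, here's a minimal sketch in Perl of one way to route tests by lifecycle status. The TestCatalog package, the status field, and the script paths are all hypothetical conventions for illustration, not a standard tool:

    package TestCatalog;
    use strict;
    use warnings;

    # Each test script records its lifecycle status alongside its path.
    my @tests = (
        { script => 't/write_read_hash.t', status => 'active'     },
        { script => 't/old_quota_check.t', status => 'deprecated' },
    );

    # Active tests run in the main suite; deprecated tests still have
    # value, so they get swept into the functional-regression bin
    # rather than deleted.
    sub scripts_for {
        my ($status) = @_;
        return map { $_->{script} } grep { $_->{status} eq $status } @tests;
    }

    1;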

Starting here, and thinking it through, you have to draw some logical parallels, the core of which is: "if each test case's value eventually falls to slightly more than zero, but future test cases must build on those test cases, then modularity and disposability are key".

That's not groundbreaking in and of itself - however, how you actually implement it is key. What I commonly see Perl developers (and most other people) do is isolate core behaviors within a core series of libraries. However, I regularly see little modularity within those libraries.

You must approach this from the standpoint that each test case is a series of atomic, stand-alone actions keyed into a specific sequence, the sequence itself being the test case. If that's the case, shared libraries for the core actions are essential - but so are the isolation and modularity of those libraries.
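
As an illustration, an isolated action library might look something like the sketch below. The QA::Action::WriteFiles package and its interface are made up for this post, not from any real code base:

    package QA::Action::WriteFiles;
    use strict;
    use warnings;

    # Do exactly one thing: write N files into a directory. No assertions,
    # no test-case logic -- those belong in the test wrapper, not here.
    sub run {
        my (%args) = @_;
        my ($dir, $count) = @args{qw(dir count)};
        for my $i (1 .. $count) {
            open my $fh, '>', "$dir/file_$i.dat" or die "write failed: $!";
            print {$fh} "payload $i\n";
            close $fh or die "close failed: $!";
        }
        return $count;
    }

    1;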

If you approach this correctly, your tests become ultimately disposable. Say you have a test case like this:

  • write 5k files
  • read 4.5k files
  • delete 3.5k files
  • hash the remaining files

If you isolate the write, read, delete, and hash actions into a series of modular libraries, the actual implemented test case becomes a simple wrapper around those actions, as sketched below. Breaking the code up into logically sorted libraries means you can easily swap those libraries out from underneath the tests (commonly, as a result of refactoring).
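
Here's a rough sketch of that wrapper, assuming the hypothetical QA::Action::* modules from above (ReadFiles, DeleteFiles, and HashFiles are presumed to follow the same interface as WriteFiles):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Test::More tests => 4;
    use File::Temp qw(tempdir);

    use QA::Action::WriteFiles;
    use QA::Action::ReadFiles;
    use QA::Action::DeleteFiles;
    use QA::Action::HashFiles;

    my $dir = tempdir( CLEANUP => 1 );

    # The test case is nothing but a keyed sequence of atomic actions.
    is( QA::Action::WriteFiles::run(  dir => $dir, count => 5000 ), 5000, 'wrote 5k files' );
    is( QA::Action::ReadFiles::run(   dir => $dir, count => 4500 ), 4500, 'read 4.5k files' );
    is( QA::Action::DeleteFiles::run( dir => $dir, count => 3500 ), 3500, 'deleted 3.5k files' );
    ok( QA::Action::HashFiles::run(   dir => $dir ), 'hashed remaining files' );

Refactor an action library and the wrapper above doesn't change; throw the test case away and the action libraries survive for the next one.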

Mentioning refactoring brings up an interesting side point - IMHO, QA test code and libraries should be refactored constantly and mercilessly. They commonly need to cover the width and breadth of the PuT (product under test), and products grow and change. From what I have seen, you commonly need about 2x the core product's code base to effectively automate the testing of the PuT. That means you have a huge code base (and big code bases move slowly), but with 2x the code you have to move at 2x the speed of the product just to keep up with new features, deprecated features, and new actions.

I should refactor this post later - but in essence, I am simply acting as a cheerleader for QA to adopt highly Agile and RAD methods of developing tests and tools. If you have >85% automation (as my current company does), you have to be fast to re-code/re-tool those tests, libraries, and tools, as well as develop new ones.

You must modularize, you must refactor, and you must deprecate - just like a developer. QA is a business that serves multiple masters: you must act as a developer, and think like one, when writing test code, but you also have to keep your QA hat on.

Ironically, large code bases within QA mean you should probably think about writing tests for your tests. Oh, the fun.
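
For instance, a unit test for the hypothetical QA::Action::WriteFiles module sketched above - a test for your test library - could be as small as this:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Test::More tests => 2;
    use File::Temp qw(tempdir);

    use QA::Action::WriteFiles;

    my $dir = tempdir( CLEANUP => 1 );

    # Verify the action library's contract before trusting it in suites.
    is( QA::Action::WriteFiles::run( dir => $dir, count => 3 ), 3, 'reports 3 files written' );
    my @written = glob "$dir/*.dat";
    is( scalar @written, 3, 'three files actually landed on disk' );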