An itch: Decent test case tracking/registration

by jesse


Something that's been a thorn in my side for some time is the distinct lack of open, flexible and rational test case tracking tools. Many testing groups simply use spreadsheets to track things (this doesn't scale), and still others use tools that are proprietary/don't handle automated tests/are overly static. Driving into work today/in the shower (where all good ideas come from), I got the itch again to "solve" this. The idea is to create a tool which:

  • Is open source. Ideally, it's built with as many from-the-community tools as possible, and integrates with environments/tools like py.test, nose, trac, roundup, etc.
  • Uses open/standard APIs for management. This is a key point for me, given that I am primarily concerned with tracking automated test cases. I do not want to have to go and manually enter a new automated unit/functional test: I want to write something like a nose plugin which builds on the spec plugin by Titus Brown to "register" a new test with the system (see the sketch after this list). Manual test cases can still be managed via a decent UI - or better yet, uploaded/managed via CSVs or the uploading of "test scripts". A script is something which defines the test tags (areas), attributes (regression, etc.) and the steps to achieve the test - all of this can be expressed in pseudo-code and pulled in. Automated tests get the same treatment via their docstrings. I also want the APIs to be cross-language - there is no reason in this day and age to lock into one language.
  • Is simple - this is critical. I want something easy for users to customize for various workflows, and I don't want a painful deployment. I want all data stored in an "open" format so that if users want - they can easily swap the system out.
  • Allows users to swap around "pluggable" components to change behavior à la Pinax - think a pluggable CMS for tracking test data.
  • Allows for visualization à la JUnit output and charting.
  • Is easy to integrate with a "test execution manager" - the program running automated tests should be able to easily grok the information from the tracking/management app and decide "what tests to run".
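
To make the auto-registration point concrete, here is a minimal sketch of what such a nose plugin could look like, pushing each test it sees to the tracking app over XML-RPC. The server URL and the register_test() method are assumptions about an API that doesn't exist yet:

    import xmlrpclib  # Python 2-era, like nose itself

    from nose.plugins import Plugin

    class RegisterTests(Plugin):
        """Report every test that runs to a central tracking app."""
        name = 'register'

        def configure(self, options, conf):
            Plugin.configure(self, options, conf)
            if self.enabled:
                # Hypothetical XML-RPC endpoint exposed by the tracking app.
                self.tracker = xmlrpclib.ServerProxy('http://tracker.example.com/rpc')

        def startTest(self, test):
            # test.id() is the dotted name (package.module.Class.test_name);
            # the docstring doubles as the human-readable description.
            self.tracker.register_test(test.id(), test.shortDescription() or '')

Once the plugin is installed (nose picks it up via a setuptools entry point), running nosetests --with-register keeps the repository in sync on every run - no manual entry.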

I know there are tracking tools out there: I've used most of them, and I have yet to find one that meets these criteria. They either try to force you into a single paradigm for test execution, or they don't support something as simple as a remote API. And most of the "big company" tools are highly proprietary.

Something which allows automated unit/functional/regression tests to auto-register is key. Manually tracking and entering automated tests sucks - especially once you have hundreds of them in development.

Not to mention - if you're the project manager, you want to know about all of the tests in the system. You don't want to go slogging through code, nor do you want to have to check multiple data sources for information about what you have. You need a centralized repository.

Finally - yes, the more critical number when thinking about tests is the amount of coverage on the product. But when you start writing more macro functional tests, coverage is a hard thing to measure. You have to constantly consider refactoring/deleting old tests that may not be worth as much as the same test plus a lot of other activity. Without some centralized place to track all of your tests (automated and manual) and the actions they perform, it's hard to start that process. Wouldn't it be nice to be able to query all of your tests to find the ones with the common attribute "writes file to x", and then deprecate the tests which only do that, but keep the ones that perform that action (which is a simple one) in the context of a larger test? This helps prevent regression test bloat: you always want better tests, but you don't want massive amounts of redundant/overly simplistic tests.
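
Here's a rough sketch of what that pruning session could look like against the same hypothetical XML-RPC API - find_by_attribute() and deprecate() are invented method names, stand-ins for whatever the real API ends up exposing:

    import xmlrpclib

    tracker = xmlrpclib.ServerProxy('http://tracker.example.com/rpc')

    # Assumed to return records like {'id': ..., 'attributes': [...]}
    candidates = tracker.find_by_attribute('writes file to x')

    for test in candidates:
        # Deprecate tests whose *only* action is the simple one; keep the
        # ones that perform it as part of a larger scenario.
        if test['attributes'] == ['writes file to x']:
            tracker.deprecate(test['id'])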

Thinking about it - it wouldn't be hard to develop an application in Django which accomplishes most of these goals. The hard part (at least initially) is designing the initial database layout. With newforms, you can easily prototype a system which brings the "spreadsheet" paradigm into a web UI for manual entry. A nose plugin to call an XMLRPC or other API is also insanely simple to implement.
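
For what it's worth, a first stab at that layout might be three Django models - this is a guess at the entities sketched above (tags, attributes, steps), written with present-day field syntax, not a finished schema:

    from django.db import models

    class Tag(models.Model):
        # Functional area, e.g. "login" or "reporting"
        name = models.CharField(max_length=100, unique=True)

    class Attribute(models.Model):
        # Test attribute, e.g. "regression" or "writes file to x"
        name = models.CharField(max_length=100, unique=True)

    class TestCase(models.Model):
        KINDS = (('manual', 'Manual'), ('automated', 'Automated'))
        # For automated tests, the dotted id reported by the nose plugin;
        # for manual tests, whatever naming scheme the team settles on.
        identifier = models.CharField(max_length=255, unique=True)
        kind = models.CharField(max_length=10, choices=KINDS)
        description = models.TextField(blank=True)
        steps = models.TextField(blank=True)   # pseudo-code for manual scripts
        tags = models.ManyToManyField(Tag)
        attributes = models.ManyToManyField(Attribute)
        deprecated = models.BooleanField(default=False)

The admin interface then gives you basic CRUD over those tables more or less for free, and the XMLRPC handler just writes into TestCase.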

I have a lot of itches, it seems. I need a cream or something.

ps: Yes, I know about the TestCaseManagement plugin for trac - I like the fact that it really just front-ends Subversion and allows for test case versioning. It would be entirely possible to have something which "specs" the automated tests and generates the XML, or to extend the plugin to parse the data it wants from standalone XML produced by automated tests. The problem is that you don't "assign" automated tests to people - the plugin is more aimed at manual tracking/execution.