A nice quote:

by jesse


One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man. -Elbert Hubbard

I find this to be very, very true when I talk to people about 100% automation goals, and outsourcing of iterative/isolated tasks.

People always respond, "Automation/outsourcing will take my job!" My reply is always the same - no, it won't, not if you alter the collateral/value you generate. People have the capacity to grow, to change, and to learn. The automation of repeatable steps, and the outsourcing of isolated projects (generally iterative and simple ones), frees people to excel in their area, to become domain specialists, and generally to "become better".

Now, does it always work this way? No. Large corporations are notorious for automating/outsourcing and then canning a large swath of people. Like all ideas, and like all tools (and automation/outsourcing is a tool), the idea can be abused.

I am lucky enough to work for a company that is small, agile, and understands these things - both the benefits and the pitfalls of these ideas.

Just a random thought.


Interjection: Coding for QA

by jesse


Some random thoughts that have been brewing in my head since flying in on Wednesday, around how to build out a logical series of tests and libraries within the Quality Assurance space. (Sorry, this is a bit of a stream-of-consciousness post.) First, let's cover the assumptions you need to make:

  • All tests, and test cases, will be eventually deprecated
  • Every Test has some value (even deprecated tests)
  • Having a clear strategy for deprecating tests and then migrating those tests into a bin for "functional regression tests" is key

Starting here, and thinking about it, you have to make some logical parallels, the core of which is: "if each test case's value eventually falls to slightly more than 0, but future test cases must build on those test cases, then modularity and disposability are key".

That's not anything groundbreaking in and of itself - however, how you actually implement it is key. What I commonly see Perl developers (and most other developers) do is isolate core behaviors within a core series of libraries. However, I regularly see little modularity within those libraries.

You must approach this from the standpoint that each test case is a series of atomic, "stand-alone" actions keyed into a specific sequence. If that is the case, shared libraries for core actions are essential - but so are the isolation and modularity of those libraries.

If you approach this correctly, your tests become ultimately disposable. Say you have a test case like this:

  • write 5k files
  • read 4.5k files
  • delete 3.5k files
  • hash the remaining files

If you isolate the write, read, delete, and hash actions into a series of modular libraries, the actual implemented test case becomes a simple wrapper around those actions. Splitting the code into logically sorted libraries means you can easily swap those libraries out from underneath the tests (commonly, as a result of refactoring).
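To make that concrete, here's a minimal Python sketch. All function names are hypothetical, and the file counts are scaled down from the 5k/4.5k/3.5k in the example - the point is only that the actions live as reusable library functions, while the test case itself is a thin, disposable wrapper.

```python
import hashlib
import os
import tempfile

# --- Modular "action" library (hypothetical names) ---

def write_files(directory, count, size=1024):
    """Write `count` files of `size` random bytes; return their paths."""
    paths = []
    for i in range(count):
        path = os.path.join(directory, f"file_{i}.dat")
        with open(path, "wb") as f:
            f.write(os.urandom(size))
        paths.append(path)
    return paths

def read_files(paths):
    """Read each file fully; return the bytes read per file."""
    return [open(p, "rb").read() for p in paths]

def delete_files(paths):
    """Delete the given files."""
    for p in paths:
        os.remove(p)

def hash_files(paths):
    """Return a SHA-256 digest for each remaining file."""
    return {p: hashlib.sha256(open(p, "rb").read()).hexdigest()
            for p in paths}

# --- The test case: just a wrapper sequencing the actions ---

def test_write_read_delete_hash():
    with tempfile.TemporaryDirectory() as d:
        paths = write_files(d, 50)      # scaled down from 5k
        read_files(paths[:45])          # scaled down from 4.5k
        delete_files(paths[:35])        # scaled down from 3.5k
        digests = hash_files(paths[35:])
        assert len(digests) == 15
```

When the product's write path gets refactored, only `write_files` changes; the test wrapper (and every other test built on these actions) stays untouched, which is what makes the tests themselves disposable.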

Mentioning refactoring brings up an interesting side point - IMHO, QA test code and libraries should be refactored constantly, and mercilessly. They commonly need to cover the width and breadth of the PuT (product under test), and products grow and change. From what I have seen, you commonly need 2x the core product's code base to effectively automate the testing of the PuT. That means you have a huge code base (and big code bases move slowly), but with 2x the code, you have to move at 2x the speed of the product just to keep up with new features, deprecated features, or new actions.

I should refactor this post later - but in essence, I am simply acting as a cheerleader for QA to adopt highly Agile and RAD methods of developing tests and tools. If you have >85% automation (as my current company does), you have to be fast to recode/retool those tests, libraries, and tools, as well as to develop new ones.

You must modularize, you must refactor, and you must deprecate. Just like a developer. QA is a business with multiple masters: you must act as a developer, and think like one, when writing test code. But you also have to keep your QA hat on.

Ironically, large code bases within QA means you should probably think about writing tests for your tests. Oh, the fun.
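What might a "test for your tests" look like? Here's a hedged sketch using Python's unittest, assuming a hypothetical shared QA helper called `hash_file` (inlined here so the example stands alone):

```python
import hashlib
import os
import tempfile
import unittest

def hash_file(path):
    """Hypothetical shared QA-library helper: SHA-256 digest of a file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

class TestHashFile(unittest.TestCase):
    """A test for the test library itself: if hash_file silently broke,
    every product test built on it would report garbage."""

    def test_known_digest(self):
        # Write a file with known contents, then check the helper
        # against a digest computed independently.
        with tempfile.NamedTemporaryFile(delete=False) as f:
            f.write(b"hello")
            path = f.name
        try:
            expected = hashlib.sha256(b"hello").hexdigest()
            self.assertEqual(hash_file(path), expected)
        finally:
            os.remove(path)

if __name__ == "__main__":
    unittest.main()
```

The payoff is the same one developers get from unit tests: when you mercilessly refactor the shared action libraries, a failing meta-test tells you the library broke, rather than a wall of red product tests telling you nothing.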


The SoftwareDev/QA Prize fight

by jesse


Ah, the grind. I took a few weeks off - Xmas and New Year's break for me. Probably the longest stretch of time I've taken off from work/coding/tech in a while. Part of it was the holidays, but I think a part of it is the Burn.

We've all felt the burn. It's that feeling you get when you open the same code base/test code/bug list for the millionth time in six months, and you're staring at the same chunk of code, or running the same test and you have to mentally gag on it.

I'm young, so I tend to get it in my head that I'm immune to the burn. Lately, I've started thinking of releases as a Prize Fight - a boxing match.

You start off round one - you begin feeling out your opponent, after all - things just started and you have to gauge what's going to be coming down the pipe in the next few rounds. You throw a few punches, you do your research. Things don't really start until the end of the round.

Rounds 2-4 tend to get a bit more dirty. By now you've figured out the easy parts of your opponent, and you start throwing your known punches. Things start hurting: you hit, he hits, you hit, he hits. You're both having the crap beaten out of you, but you know there's a light at the end of the tunnel. It's a dance. Punch, jab, jab, body shot, uppercut.

Then things start to wear on. You're tired, and you make mistakes, you get bloodied, but you keep going. By now, you're not innovating - you're maintaining. It's important to maintain the status quo as you progress - you have to throw the same punches over and over and over in hopes your opponent falls.

Later, you fall back on instinct. By now, your ribs are aching, possibly broken, and your face looks like you got hit by a truck. You're running on empty - on pure instinct - shutting down the same punches you know are coming, and trying to find any gap that will win you the fight.

Then, it goes into overtime.

In the software world, this is a slip. A schedule slip due to new features, or late ones. The product doesn't withstand your continued punches - the same ones you have thrown for the past months - but instead it simply breaks, or worse yet, breaks in new ways under the old ones.

The slip takes the most out of you. By now you just want to lay down - you want to work on something else, or you want to just kick it out the door no matter what. In a fight, you see desperation - why won't he just let his guard down for one minute so I can slip in there and knock him out?!

The slip, I think, causes the most burn. It's the repetition and continued perceived failure in the eyes of others that gets under your skin. Eventually, one of you will fall - generally, it's your opponent. Whether you chop features or back off of an aggressive release, one of you goes down.

Fighters recuperate. They take a break, they see a doctor, they mend themselves for the next fight. Sometimes they fight with fractured skulls, broken hands, or broken ribs - but there's a lull before the next fight. They keep themselves fit and moving.

In software, and especially in QA, sometimes - nay, most of the time - you don't get the lull after the fight. You have to go, broken and bloodied, to the next fight. Even if it's maintaining and supporting a previous release, development has moved on. QA always lags behind in a structured environment; the job is often more intensive and trying than that of development, and generally slower.

You have to move to the next fight, still tired and still hurt. You win that fight. You move on to the next, and you win that one too - but by now, your arms feel like lead and you aren't seeing so well.

And so on.

Don't think I am saying this about my current job per se - I'm not. Just because I'm feeling the burn doesn't mean my employer burned me, or that they don't give me rest. In my personal case, I'm the one driving myself from fight to fight to fight.

I just think that in the case of larger, structured development teams, this is the proper analogy. Some companies do better, some do much, much worse. But I still think it boils down to the fight.

QA people are generally not as advanced as developers - and frequently companies treat QA as a novel idea rather than as core to the business - which makes the analogy above ring much truer, and much worse, in many respects.

Luckily, although I am the one driving myself from fight to fight, my employer cares about QA quite a bit. They dedicate resources and time, but that doesn't change the fact that we have to keep software going out the door as fast as we can make it. It's a startup - you have to keep running.

I'm tired, and it's late in the fight. This is the 6th of the night, and I want to go home, but I am not going to give up, and I am not going to stop fighting - fighting is fun. Software, tech, and development are fun.

Fun, interest and learning can let us drive ourselves past our breaking point - but there's always a cliff sitting there - waiting for you to realize that you just went past the point of safety. You have to watch out for that cliff, you have to know your boundaries and most of all, you have to know your opponent.

Edit: Let me add - one thing boxers, runners, developers, etc. all do, which I have not done to date, is find a pace. When you ramp up, you can't blast yourself at full speed right out of the gate - you'll hit the wall, hard and fast. Everyone needs to find a pace, including me.


Random Week Roundup (QA, Python)

by jesse


Man. Have you ever had one of those weeks that makes you feel like your brain just got backed over by a semi-truck? That's what this week was for me - between what felt like juggling angry bears, and everyone around me coming down with the creeping crud. Of some interest, I've discovered a series of QA-related blogs (the resulting reading of which makes me question my own methodologies), including a QA podcast (in one episode they cover Selenium). Here's that roundup:

  • Collaborative Software Testing
  • James Bach's Blog
  • Agile Testing Blog
  • QA Podcast

Most of these concentrate on the concept of Agile Testing, and basic to advanced methodology. I know they made me think about various approaches. Overall, I think a lot of what's said inside the community is applicable to most software development/QA groups - but I strongly disagree with some of the assumptions (but I'll cover some of that in later posts).

Good Info!

I'm hoping to be able to cycle back into PyHack mode next week - I haven't been able to make progress on any of my Python projects (both internal and external), so my ability to talk about anything new or interesting is greatly diminished.

One general thing I have been Navel-Gazing about is this:

My company (and a few others I know about) has been looking for solid Python people - QA people who can program in Python (for tools development and test automation) and, in general, competent "Python hackers". After scouring the internet and posting everywhere we can think of, it looks like an employee's market. The number of skilled Python people is either incredibly low, or those who are skilled are not looking (or not looking for something outside of core development).

Overall, it's sort of depressing (especially as I need the help!), but then again - I realize the traditional stigma of the QA role (and how do you scream "it's not like that here" loudly enough?), and I also realize that QA is considered by many a stepping-stone position on the path to development.

Like I said - Navel Gazing. I just want to find good QA Engineers who can also write code. I know they exist, dangit.