Sunday, January 3, 2010

How well should testers know the customer?

In an ideal world, the whole team (everyone involved in creating the product) would be on the same page, and that team would include customers. The customers on the team would be able to tell us immediately when something doesn't make sense in the domain context, and we would not waste any time on the things that don't make sense.

In my experience, few teams are ideal. So let's assume that you don't have your customer's cell phone number, *and* let's assume that much of the development team is not intimate with the deepest details of their product's domain. Are there still ways to minimize waste? Sure there are. Here is a story I tell people a lot, and I figured I would share it here.

I once had a job running the QA team at a small start-up. Its software was aimed at bioinformatics researchers. We made heavy use of statistical algorithms for genetic comparison: sometimes pairwise searching, sometimes aligning, sometimes tracking experimental data over time. Our software did a lot of visualization of this data, so that researchers could easily track down the information they needed to guide their next step in research.

The company was small enough that it didn't really have anyone in a business-analyst (BA) role. We had a "Project Manager" and "Software Leads", but in practice QA represented the customer in most things; at times, we were customer support as well. As we grew, I noticed that the new developers lacked domain knowledge, and many times, when we submitted bugs ("These results can't possibly be correct"), they had no idea what was incorrect because they didn't immediately see *why* it was incorrect.

So one of the things I did to combat this issue was train my QA team in the depths of the domain. We learned how to perform the algorithms BY HAND. We split the algorithms among us, and each of us became an "expert" in a group of them. We spent a lot of time learning the command-line tools and working through examples manually.
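
To make "performing the algorithms by hand" concrete: the post doesn't name the specific algorithms, but a classic pairwise comparison like Needleman-Wunsch global alignment is exactly the kind of thing a tester can work out on paper for short sequences and then check against the software. Here is a minimal sketch in Python, with illustrative (made-up) match/mismatch/gap scores:

```python
# Minimal Needleman-Wunsch global alignment score: the kind of
# pairwise comparison a tester can verify by hand on short sequences.
# Scoring here is illustrative only (+1 match, -1 mismatch, -2 gap).

def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    """Fill the dynamic-programming matrix and return the optimal score."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap  # prefix of `a` aligned entirely against gaps
    for j in range(1, cols):
        dp[0][j] = j * gap  # prefix of `b` aligned entirely against gaps
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(
                diag,                # align a[i-1] with b[j-1]
                dp[i - 1][j] + gap,  # gap in `b`
                dp[i][j - 1] + gap,  # gap in `a`
            )
    return dp[-1][-1]

print(needleman_wunsch_score("GATTACA", "GCATGCA"))
```

A tester who has filled in that matrix by hand a few times can glance at an alignment score (or the visualization built on top of it) and immediately sense when a result is implausible.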

As a result, I believe my little circle of influence was pretty efficient. The testers could look at a visualization chart and immediately flag the parts that were problematic. We understood what customers were talking about when they came to us and reported an issue. We could even ask them the questions necessary to pin down the source of the issue (most of the time!).

Since that time, I've worked on a few other teams where many people (developers, testers, etc.) did not have deep domain knowledge. I have seen how many issues slip right on through, because nobody on the technical team realized there was a problem (and we all know how expensive things are to fix once they're out in the wild). Troubleshooting the issues that *are* identified takes longer, because the root of the issue may not be immediately clear.

So, how well should testers know the customer? Testers and developers alike should have as much domain knowledge as possible, and should be as intimate with the customer as possible. Especially in the absence of an actual customer (or of constant access to one), the more domain knowledge the technical team has, the more likely they are to find issues early and track them down easily.
