Sunday, November 15, 2009

Change your organization, or change your organization

That's what Michael Feathers said to me at Agile 2009. It was the end of Patrick Wilson-Welsh's Ugly Code vs Clean Code clinic, and I had gathered a group of people to see if they could help me solve my problem: an organization with a *huge* legacy application (2.5 million lines of *true* legacy code -- no unit testing, no automation, eek!), stuck in a time long since past, where the development team sits behind closely guarded fortress walls, and the test team is really just there to have someone to blame when customers complain.

After he asked me a series of questions (which really just confirmed that I had already been trying all the things that an expert would have suggested), he said something that stopped most of the group dead in their tracks for a split second: "Change your organization or change your organization". It was a phrase I had heard before, yet at that moment, it hit me like a ton of bricks.

There are countless resources out there for supporting those people who are "change agents", who help spark change where they work, who have gone through or are going through Agile transitions. There are resources for those who are dealing with the more difficult aspects, such as people who just resist change. Markus Gartner wrote up a great summary recently on his blog.

There are many techniques for pushing through resistance, but how can you recognize a fight that isn't winnable? Or, at least, isn't winnable for you?

It seems to be something you just "know", from what I can glean. And it is certainly not specific to organizational change. Kenny Rogers sang, in that wildly popular song The Gambler, "You gotta know when to hold 'em, know when to fold 'em, know when to walk away, know when to run..." The question I have always wondered about is "*HOW* do you know?"

I suppose it's not an answerable question at face value. Each situation is different, filled with countless variables, and there is not a single, universal answer.

What is the dark side of what happens to a change agent who is outnumbered, lacking in authority, and just shut down at every turn? Markus's term, "cultural infection" was what happened to me. The crazy culture that I inserted myself into became too much for me to bear, and infected my personal and family life. That was when I knew it was time to fold 'em.

The thing that has gnawed away at me for all of my (relatively short) time as an Agile champion, is "What happens to the people who just don't do well working this way?" or "What happens to the people who don't *want* to change?"

Sometimes, they are pushed out by those who do want to change. I've seen it happen: people whose insistence on working in silos makes them the odd man out, and they eventually become so miserable that they leave.

But what happens when there are more people who prefer to work that way than people who want to change? Or worse (in my view), what happens when more people in the organization *want* change than don't, but the group that doesn't want to change is concentrated among the people who develop the production code?

If an entire team of developers does *not* want to change to a more collaborative environment, or better yet, to Agile, how can any other department succeed, even if they *do* want to work in a more collaborative way? Can you incent people to change if they don't want to, or have no reason to?

Believe me, I've tried. I've followed the patterns in Fearless Change, made the countless arguments *for* Agile available on the web, posted slides from various talks on my cubicle, talked to people until I was blue in the face, decided to run *just* my QA department in this fashion, asked the "just try it for a short time" questions, run some small tools on my own that would help with the current development dilemma, shown graphs and line charts and cost analysis studies, and put together presentations. Still, I am forced to sit and wait until a release "in a pretty package" (yes, that has actually been said to me) is ready for my test team to have at it. And still, I get that pretty package and can't do anything with it, because the software doesn't install properly.

I've learned that there are organizations where even though the old-skool Waterfall style of doing things has so clearly gotten them into trouble, they don't see that. Worse, the developers continue to be held up on a pedestal, regardless of those signs. When this happens, the culture of the *whole organization* has obviously supported it, and possibly is continuing to support staying *just the way they are*.

I am reminded of a Jerry Weinberg quote: "Things are the way they are because they got that way".

I'd be willing to bet that even when an organization is this deeply rooted in their ways in the face of adversity, some can still embrace change and come out of it far better than they have ever been. I don't know how to tell the difference between those that can and those that are bound to continue repeating the same mistakes.

I've talked before about the "wall of pain". If an organization never lets those responsible feel the "wall of pain", they will rarely have any reason to change. For me, I've hit my personal "wall of pain" before the organization hit theirs.

Friday, November 13, 2009

It's still a *bug*, and your denial does not change that

UGH!! I've had this frustrating discussion with a developer recently that is really annoying the heck out of me.

In the particular scenario being discussed, this product has a history of not installing correctly. It's a (mostly) .NET desktop suite of programs with a central processing engine. The most common installation errors involve components not registering correctly. The crazy thing about that type of issue is that it manifests itself, for this particular product at least, in fairly inconsistent ways. Sometimes, a user function doesn't work (while sometimes it does!). Sometimes, the processing engine locks up unexpectedly.

In the past 4 months, I am certain that I have seen the number of wasted man-hours spent troubleshooting something that ended up being an installation problem climb well into the hundreds. In QA alone, I can count at least a week, times 3 people. That's already 120 hours -- not counting support, developers, product management ....

There was a customer issue that they had spent the past 2 weeks troubleshooting, only to find out that a file had not been registered properly. So product management asked the developer the following:

Is there an identifiable bug in the installation code that we could track in Bugzilla for inclusion in a future release? Or is this issue a one-off?

And the developer's response was that there was no bug in the installation code, so there was nothing to fix. He went on to explain that the files had just not registered properly or were corrupt. He said that uninstalling the suite should have cleared the problem. (I don't want to copy his response, just because it's fairly product-specific.)

So this was my response to that (I was actually fairly nice in it, because seriously, it drove me *crazy* to see that "it's not a bug" statement!)

Though I can see where there is not a clear bug in the installer, I don't think that is the same as saying there is nothing to fix -- we are missing something, and we should implement some checks.

It just really seems to me that *any* install that we do should be checking itself to verify that it installed correctly. I have seen dozens of man-hours wasted in just 4 months on what came down to things not being installed properly.

“It didn’t tell me that it didn’t go right” is a bug, in my book.
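The self-check I'm describing does not have to be elaborate. Here is a minimal sketch (in Python, with a hypothetical file manifest -- the real installer would know its own file list) of the kind of post-install verification I mean: confirm the pieces the installer was supposed to lay down actually exist, and report anything missing instead of staying silent.

```python
import os

def verify_install(install_dir, expected_files):
    """Return the subset of expected_files missing from install_dir.

    A real check for this product would also need to confirm that its
    components registered correctly; this sketch only covers the
    file-presence half of the problem.
    """
    missing = [name for name in expected_files
               if not os.path.isfile(os.path.join(install_dir, name))]
    return missing

# Hypothetical manifest and install path, for illustration only.
manifest = ["engine.exe", "client.exe", "processing.dll"]
problems = verify_install(r"C:\Program Files\OurSuite", manifest)
if problems:
    print("Install verification FAILED; missing:", problems)
```

The point is not this particular script; it's that the installer, not the customer, should be the first to notice when something it was responsible for didn't happen.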

Now seriously, let's look at this issue.

If the software does not install properly, that's a bug, right?

If the software fails to install properly, AND does not indicate to the user that *anything* has gone wrong, that's a bug, right?

If this install problem causes the software to not work, that's a bug, right?

Is it considered okay to ask the customer to uninstall and reinstall their product (no trivial task, mind you) when it's not working? Okay, anybody but Microsoft?

Has 10 years of testing led me way far astray from the basics? Am I way off base here?

Lanette Creamer suggested that the developer was right; that it wasn't a "code bug", but rather a requirements failure. This is a valid point, but for some reason, I hate having to "frame" things in a certain way to placate a defensive person.

The bottom line is that this product is doing something it should not be doing, and it is impacting the customers, the developers, the support people, the QA people, the product people ... IT SHOULD BE FIXED. Seriously, don't throw blame, don't deflect attention away from yourself.

Let's instead sit down together and figure out where the problem is and how to solve it.

Wednesday, November 11, 2009

Harmony is beauty

First and foremost, if you read my blog, I have to suggest that you go grab yourself a copy of Beautiful Testing. It's a great read, and has inspired this post.

I've been inspired by reading this book. I've been reminded of my passion, of the reasons I have chosen to do what I do. After all, I chose to be a tester, I haven't just fallen into a job and decided I should deal with the hand I've been dealt.

I didn't always want to be a software tester. As a young child, I wanted to be a marine biologist! Actually, I wanted to study the neurological systems of dolphins. I was convinced that I was going to figure out what it was about their brains that made them so smart. I still have the book that inspired me then, 9 True Dolphin Stories.

After a marine biology course and choking through learning the ocean zones, I shifted my interest over to genetics. There was something about genetics that was *beautiful* to me. I was amazed at how the whole system *worked*. Granted, sometimes it goes awry, but at an incredibly rare rate. I liked it because it is so *simple*, and yet so completely *complex*. I was amazed at how 4 simple bases could turn into a *human being*. (HA! In the middle of writing this blog post, I went to Barnes and Noble and happened to come across a copy of Matt Ridley's "The Agile Gene"! I'm a Matt Ridley fan, and this title is a re-title, but very coincidental timing!)

When looked at that way, genetics and computers are rather easily related to each other. In genetics, a 4-letter alphabet turns into some complex stuff, like human beings. In computers, a 2-letter alphabet turns into some complex things, too, just not as complex, and too often, not as smoothly. So in a lot of ways, my switch from genetics to computer science was an easy one.

When I work toward making software beautiful, I have in my head a picture of how it's done in genetics. Those systems are exponentially more complex than software, and yet, there has somehow developed a *synchronicity* (Is that a real word?), where even given countless variables, the system is responsive and *works*, regardless of the state of all of those other variables.

Isn't that incredibly beautiful?

Reading Beautiful Testing, I saw these same ideas threaded through the chapters.

Matt Heusser talked about math in terms of number theory and proofs, including some commonalities in nature (the Fibonacci sequence and pi, for example). In studying mathematics, Matt found an appreciation for what he calls "aesthetics" (and what I've likely been calling "harmony"). His sentences on the aesthetics in mathematics hearkened back to my appreciation for genetics for the same reasons.

But it was Chris McMahon's chapter, relating creating software to an artistic performance, that inspired me to think about the way I work on a software team in a new light. He mentioned the way a band rehearses together before a performance. I have seen some *amazing* bands, where I watch them communicate with each other silently. The slightest gestures can be made, some not even perceptible to the audience, and the rest of the band follows suit.

These things happen all over the place! A new mom once told me that she enjoyed me coming over because I "just knew" what she was going to need before she did. Well, of course I did, I've been through the mom thing before. I used to ride horseback (dressage, actually) and with some of the horses I trained on, the *slightest* shift in my body weight caused the horse to change direction. I had to learn to control every muscle in my body to only send the messages I intended. Have you ever seen a couple, who at a party, seem to be able to send signals to each other with just a glance of the eyes?

Coming back to testing and software, it seems that the very best teams work in much the same way. They are able to shift direction, together, without falling out of step with each other. They are able to fill in for each other without the need to go through hoops to do so, and they trust each other completely. They all care about the same things, and work together to accomplish their goals, following the same values.

As a tester, I love the fact that I get to strive toward this kind of harmony, while also managing the effects of so many other variables: the customer, usability, risk, time, cost, value, technology, the business goals, product management, etc etc. For me, being involved in testing means that I have my hands in everything, and spend my time trying to make sure that they *all* work together seamlessly. As a tester, I must represent the customer. I must represent the potential customer. I must represent the developer. I must represent the business. And, I must tie all of these together in an intricate web, ensuring that the outcome represents all of them and is responsive to any variation in their needs.

THOSE are the reasons I am drawn to Agile. THOSE are the things I see in the teams I have had exposure to that I think work *really well*. THAT is why I love testing. Testing *is* beautiful.

Tuesday, November 10, 2009

The difference between motion and action

I just read this amazing blog post by Steve Blank, and felt compelled to share it.

I'll just post a link now, though I suspect my own spin on it may be coming ...

The Difference Between Motion and Action

Monday, November 9, 2009

Making Quality a Priority

Here is a quick PowerPoint presentation I think may be useful to some other people, too.

The intended audience is business people or executives who still need a little bit of understanding in what it means for "quality" to be baked into every part of software development, from inception through to release, and in what it takes as a team to have the best collaboration between developers and testers.

Please feel free to use it, modify it, etc. And as always, if you have feedback, let me know!

Wednesday, October 21, 2009

What if you could design your ideal "software creation" plan?

So, a special situation has been presented to me, and I'm going to try to solve it in a collaborative way. It's short notice, but I have what I have, you know?

I've been given an opportunity to develop a roadmap for ensuring that our very LARGE legacy product heads in the right direction from this point forward. The current focus is apparently centering on the word "stability". Roughly, stability, to them, means that what we have designed our product to do *works* and is satisfactory to the customer (things like performance cannot be ignored here, for example).

This team (people/process/infrastructure) roughly works on the level of a bunch of unorganized college kids, so there is a lot of room for improvement. They need some fundamentals put in place, for sure -- Automatic builds, CI, unit testing ... (clearly the list goes on from there). Although I specialize in testing, I believe that our executives now understand that "we can't test quality into our product" -- we have to be doing things with quality from the beginning. As I think about all the things this team needs in order to be working in a fashion that does actually move them forward, I want to be sure to optimize such a transition.

I want to hear from others, however, about what *they* would do, if given such an opportunity. I want to brainstorm as a group how people *wish* their teams worked. Let's draw up an ideal "software creation" plan -- beginning to end, all of the pieces.

I have a space available for a physical location, for those who are local. Please send me an email (address at the top right of the blog) for specifics.

I'll be offering a remote-in ability, too. I've created a GoToMeeting that offers both VoIP and phone-in connections.

I'll tentatively plan on holding this at 7:00pm EST today, Thursday, October 22 (Sorry! I meant the 22nd!).

1. Please join my meeting.

2. Use your microphone and speakers (VoIP) - a headset is recommended. Or, call in using your telephone.

Dial 630-869-1014

Access Code: 604-606-915

Audio PIN: Shown after joining the meeting

Meeting ID: 604-606-915

Monday, October 12, 2009

Getting Started with White

I've recently decided to dive into the world of Project White and MS UI Automation, since this is relevant to my current project. I had to create a tutorial for a team to get set up to use White to begin with, so I decided that I could share that tutorial with everyone else as well. It also goes into some detail about Visual Studio, and is intended for an audience that has not necessarily done automated testing through Visual Studio before.

The tutorial uses MS Visual Studio, C#, White, and UI Spy to create an *extremely* simple test. My intent was just to show how to set it up. Next time, I will describe starting to write tests, including common functions in White and writing good tests.

Please, please offer feedback. I would love to be sure that I am doing things the best way possible!

White Tutorial

Wednesday, October 7, 2009

*be the worst*

Huh? Did I just say "be the worst"? Yep, I sure did. But before you go telling your boss that I told you to be the worst tester you can be, let me finish the phrase!

First and foremost, I have to give credit where credit is due. I saw this phrase first from Chris McMahon -- he says that this meme has been around the music business for a long time. He credits Pat Metheny, who said, "...try to be the worst guy in whatever band you're in. That's the secret."

Given the context, what I am saying is "be the worst of the people you are surrounded by", or "surround yourself with really great people."

At Agile 2009, someone told me that Elisabeth Hendrickson decided to learn by inserting herself into the best teams she could find. Even before I encountered the "be the worst" quote, I had begun bouncing around the idea of how I perceive myself versus those I work with.

I think back to an early job in my career, straight out of college. At this place, I remember thinking to myself with some frequency, "These people are *SO* *SMART*! I feel SO DUMB when I am around them!" I often tried to just keep up with conversation, hoping to fake it long enough to avoid appearing dumb, too! Looking back, how I wish I could have gotten over my own insecurity and taken the opportunity for exactly the opportunity it was! What I should have been thinking was "Wow, these people are *SO* *SMART*! I want to learn everything I *can* from them!"

What can we get out of "being the worst", and why would anyone suggest that?

I think we can get a *lot* out of it: learning, experience, growth ... In working with people who have a set of skills that you wish to expand for yourself, you can see first-hand, on a day-to-day basis, and under a whole slew of circumstances how that quality is manifested. Sometimes it might be a technical skill. Maybe it's a communication skill.

As I have grown in my career, I have begun to feel like the "dumb one" less and less. I think that my passion to get better and better at the things I want to be good at, have made it more and more difficult for me to *be the worst*. What have I done in response?

I've become *way* more active in the agile community. In that way, I can surround myself (though not as frequently as I would like) with those people I see as *way* more skilled than I am in certain things. I found this out with certainty at Agile 2009. I love talking to Elisabeth Hendrickson for her insight into agile testing and human relationships at work. I enjoy Lisa Crispin's company for her amazing ability to be a great agile tester, without falling back on programming the hard things (like I do!). I met Patrick Wilson-Welsh, and admired his passion, sense of humor, and ideas on good, clean TDD. I had conversations with Michael Feathers and Bob Martin to try to gain some insight into my specific legacy code issues. I had great conversations with Antony Marcano and Andy Palmer about testing tools and frameworks. I *pair programmed* with Abby Fichtner to gain from her development experience. Of course, there were many others ....

In this way, I forged relationships and surrounded myself in a way that I could *be the worst*. I'll keep saving my pennies to go to conferences and keep being active in the agile community so that I can keep *being the worst*.

Do others try to put themselves into situations where they can *be the worst*? How do you keep yourself always learning and always surrounded by those you can learn from?

Monday, October 5, 2009

Does having a separate maintenance team hurt efforts toward accountability?

I was recently thinking about a situation that puzzled me, and came to a realization that makes me a bit nervous ....

I have had exposure to some teams where even though they are in firefighting mode more often than is comfortable, and/or their customers are unhappy, and/or their software has a *lot* of bugs, they still don't seem to "get it" and try to do things in a better way.

I have sat, baffled, my jaw on the floor, wondering why the people leading these teams pass the buck on the team's responsibility for problems, instead presenting a magic show to prove why the problem is not theirs.

And today, something hit me: I was thinking about what would cause someone to change; what would make it so that they finally felt the need to do something different, and the term "the wall of pain" came to mind. This term comes from a blog post I read a few months ago, "Fighting Technical Debt with the Wall of Pain". What would instigate change? Enough pain that it feels necessary, in order to escape the pain.

This reminds me of Parenting 101: How do I stop my 12-year-old from making irresponsible decisions? I make them painful enough that he will be sure to avoid the pain. Luckily, these days that is as easy as "Oops! Your PSP ran away! I can't find it!", or "Where are your Playstation controllers? Gee, I don't know ... I guess maybe they will return when you start doing the chores that you are responsible for."

BUT ... BUT ... what about companies where the new development team is a separate entity from a maintenance team? What if the developers who are writing new code are sheltered and protected from the results of poorly written code? In some cases, it goes something like this:

- Development team writes software ... Maybe they fix a few bugs along the way. Sometimes, if bugs come up and it can be proven that they existed in previously written code, they may shrug them off as not the result of current work, so not their problem right now.
- Development team's schedule gets squished, and as it gets closer to release time, only the *most* important bugs that are proven to be the result of their current work, are even looked at by the team.
- Release happens. At this point, product is a released product, and the workflow now looks like this:
- Customer has an issue, customer calls Support. Support tries to reproduce it, and if they can, they file a bug. Bug gets reported to Maintenance team, and they work on it.

In the above scenario (which exists in some companies), the team that originally wrote the aforementioned poorly written code never sees the wall of pain. Seriously, for the most part, there is absolutely zero repercussion to them directly for bad code.

In a case like this, the responsibility for well-written code falls upon those who are internally motivated to do their best, and whose personalities are strong enough to fight the push to get *something* out *faster*, well-written or not. I believe that this type of personality does not describe the majority, and so the whole team falls into a pattern of continuing to write poorly written code. What is there to stop them?

Has anyone had any experience helping a team to transition into a more agile-like process from a situation that looked like this one? How do companies like this adjust to fight this pattern?

I wonder if it would help to break up the idea of "new dev" versus "maintenance" teams, and instead cross-pollinate into functional teams. In this case, there would be a team for one specific module or component, and they would handle both new work and bug fixing/maintenance. Would this strategy work?

Friday, September 25, 2009

Work to make a company successful, or work for a successful company?

Once in a while, I think we all find some statement we read somewhere, while doing something, that sticks with us. I have *definitely* found that to be true very recently, mostly because it puts into words a concept that has rolled around in my head for quite some time.

First, some background .... If you read my blog, then you know that I have been known to rant, at times. Really, I try not to, but it happens. Sometimes, it is just that I want *so badly* for others to change, and I have a hard time understanding why people don't just *do* things the best way, all the time. Often, when I find others to talk through the issues I am encountering, they ask me questions like "Why do you still work for that place! It's hopeless!" (Sometimes, I believe it is, and I leave, as I did in 2008.)

Sometimes, however, I stick around for a while, and I have a hard time explaining why. And then, recently, I saw a sentence that resonated with me.

Twitter was all abuzz recently with responses to Joel Spolsky's recent blog about "The Duct Tape Programmer". He quotes Jamie Zawinski, and Elisabeth Hendrickson (@testobsessed) pulled out Zawinski's resignation letter from AOL.

This one quote resonated with me, and continued bouncing around in my head:
" can divide our industry into two kinds of people: those who want to go work for a company to make it successful, and those who want to go work for a successful company."

It rolled around in my head for a few days .... Did I agree? Did I think that people could generally be so easily divided up into just those 2 groups? Probably not quite *that* easily, but for many people that I have encountered in my career, I could place most of them into one of those pretty quickly.

For a while, I have described these groups in the following way: people either seem to come to work because it's a job, and they do their job, and that's it, OR they seem to be really passionate about *what* they do, and are constantly striving to make things better. I fall into the latter of those 2 categories. I most certainly do what I love and love what I do.

But Jamie's sentence looked at things from just a slightly different point of view, and after thinking about it for a bit, I believe, describes *me* just a little bit better. I have *always* wanted to work to make things great, and not so much wanted to work for/with/on things that were *already* great.

This can be applied to many aspects of my life. I was the single mom who put herself through college, working full-time and taking full-time classes in CompSci. I am the vocal advocate of a high functioning autistic child who wrote to the local newspapers when the school district was failing at its job. I chose to bring a Siberian Husky into my home (this sounds silly, but Husky owners will tell you .... WOW).


That's it, that's me. That's why I like testing over development. I have said for years that developers have to find one way to solve a problem and testers have to find ALL ways to un-solve the problem. Testing gives me the challenge of being creative, being technical, being analytical, speaking in at least 2 different languages, juggling an obscene number of balls in the air and being squeezed and sometimes disrespected along the way. But, it's a challenge, and I like a challenge.

In the same way, companies that are struggling, companies that don't "get it" and have fallen into the proven patterns that destroy great ideas, are a CHALLENGE. They are high-risk, for sure, and sometimes they offer little reward other than knowing I am doing my best, but *if* I can effect change, *if* I can help, the reward is incredible.

I believe that for "successful" companies (I place "successful" in quotes because this is a relative term .... for me, in this context, it is mostly about doing things the best way possible), I would personally gain little reward because there would not be *enough* of a challenge.

I wonder, then ... how many other great testers that I know are like-minded? Do other people enjoy testing because they enjoy a challenge? Would they generally choose to work to make a company successful, or choose to work for a successful company?

Saturday, August 15, 2009

What do you do when ... ?

What do you do when you are "just a QA person" or "just a tester", and you *know* *know* *know*, down in the deepest depths of your coding soul, that the developers on your team are DOIN IT RONG?

Do you quit the job and go find some decent coders? Nah ... What fun is there in that?

Yesterday, I attended Agilepalooza 2009 in Charlotte, NC. There were 2 main tracks, one for learning about agility with several speakers lined up, and one for "advancing agility", which was Open Space format. David Hussman was there, and so was Jeff Sutherland, with whom I got to have a great discussion about testing on an agile team. It was a VersionOne-sponsored event, and yes, I was the one who yelled out during the introductory meeting, "Does anyone use Rally?" You don't have to believe me that I honestly did not know at that point that V1 sponsored it, though I sure played on that point later.

ANYWAY, I digress. I *love* going to conferences, especially those that allow for the opportunity to have in-depth discussions with others out there "in the trenches". I get some practical knowledge out of talks, but I get the *most* out of talking to individuals, and being able to say "I am struggling with this particular problem right now ... have you dealt with this before, and/or do you have any suggestions?" I have found that most of the Agile community, and especially the testing-focused subset, are always very willing to help and share and ask for your help right back.

So I had the opportunity to ask a few people my most pressing "What do you do when ... ?" questions, and here are some of my highlights.

Q: "What do you do when you know that the software you are developing against is unstable and likely not even following normal standards?"
A: Try FxCop (I'm working on .NET apps), or FindBugs for Java. WOW! I have actually heard of FindBugs before, but haven't worked that deeply on Java programs in some time, so I have not spent much time with it. I did get a chance to see that FxCop has a whole category of messages centering on Performance, which happens to be a specific focus for my current project. I CAN'T WAIT to be able to use it.

Here's where this answer gets creative though. In true "don't pull punches" form, I would be the type to toss all 8,000 errors (arbitrary number) at my dev team. Instead, it was suggested that I pick a SINGLE error, print it out, and take it to a developer. I can tell them that I was using this tool (Fxcop), and here was an error (relevant to current work) that might help our current efforts. I love the idea, and I don't think I would have ever thought of it on my own. It is so simple, yet so ingenious .... I'll be doing this one soon.
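The filtering itself is the trivial part. Here is a sketch of the idea (the report format here is invented -- just a flat text export with one finding per line; FxCop's actual output is XML and would need real parsing):

```python
def first_relevant_finding(report_lines, current_files):
    """Return the first finding that touches a file in the current work.

    Each entry in `report_lines` is assumed to mention a source file;
    this format is hypothetical, not FxCop's actual output.
    """
    for line in report_lines:
        if any(name in line for name in current_files):
            return line
    return None

# Hypothetical report: instead of dumping all of it on the dev team,
# hand over just the one finding relevant to what they're working on now.
report = [
    "CA1062: Validate arguments of public methods (LegacyParser.cs)",
    "CA1805: Do not initialize unnecessarily (OrderForm.cs)",
    "CA2000: Dispose objects before losing scope (OrderForm.cs)",
]
pick = first_relevant_finding(report, current_files=["OrderForm.cs"])
```

One finding, printed out and brought to a developer's desk, starts a conversation; 8,000 findings in an email starts a defensive war.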

Q: "What do you do when you are having trouble automating tests around custom controls that require some specific events to be called?"
A: Grab a network sniffer and watch exactly what events get called in the background! In this particular case, the suggestion was to use Knoppix (a lightweight Linux distro), which has a sniffer built in; Chris McMahon suggested Wireshark.

So, my background on this question is that I have been trying to throw some automated tests onto our web client, and we use Infragistics controls. I know that IG (Infragistics) uses WebAii internally, but I still have been unable to work out exactly what I need to call, and in what order, to get WebAii to interact with these guys. The disappointing thing is that Telerik, who released WebAii AND who also makes custom controls (RadControls), posts right in their forums and on blogs about how to automate their controls. Infragistics does not. I have tried many permutations of interactions with these controls (my immediate need has been a WebCombo and a button), and have eventually had to put them on a back burner for other tasks.

However, if I can get past that one hurdle immediately, I can move forward with starting the seriously overdue task of putting some automation around our software.

Ok, those are my two biggest takeaways from yesterday, and I am sure there will be more later .... For the record, I have to give credit to Jared Richardson for the ideas presented above. It was really great to meet him and to talk with him about some specific issues, even if he did almost fall off the table listening to me :)

Thursday, June 4, 2009

Help! My Selenium tests are flaky!

In my development of Selenium tests, I have come to dread the oh-so-common "Permission denied" errors. A quick scan of Google search results indicates that there is likely some cross-domain issue at play, and a suggested workaround is using an experimental browser mode, like *iehta. I generally use *iehta anyway, and I still seem to have problems with the "Permission denied" error.

In my most recent experience, testing against what seems to be an inconsistently slow and sometimes flaky web app, I have come to expect that "Permission denied" usually indicates that an element I am trying to access is not yet accessible. For my app, which is heavily AJAXed, "waitForPageToLoad" doesn't cut it. Since the AJAX scripts are still running, Selenium would wait until the end of time (or the timeout, whichever comes first) for the page to load, and it never would.

So, I found a few people who solved this problem by creating a function that waits for a specific element to appear. I decided to go with it, and now my "WaitForElement" function is standard in my local Selenium template for Visual Studio.

This function looks like this (in C#):

private void WaitForElement(string elementName)
{
    for (int second = 0; ; second++)
    {
        if (second >= 120) Assert.Fail("timeout");
        try
        {
            if (selenium.IsElementPresent(elementName)) break;
        }
        catch (Exception)
        {
            // element not reachable yet (e.g. "Permission denied"); keep polling
        }
        Thread.Sleep(1000);
    }
}

I call this function the first time I am going to take an action after the web app has had to re-load or re-render elements. I tend to pass it the element I want to interact with first.

So in my test, the steps tend to look like this:
WaitForElement("txtBox1");
selenium.Type("txtBox1", "username");
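The polling idea can also be pulled out into a generic helper that takes any probe function, which makes the retry logic reusable (and unit-testable) apart from Selenium. This is just a sketch under names of my own choosing, not part of the Selenium API:

```csharp
using System;
using System.Threading;

static class Poll
{
    // Repeatedly evaluate a probe until it returns true or the timeout
    // elapses. Exceptions thrown by the probe (e.g. "Permission denied"
    // while the page is still rendering) are swallowed and treated as
    // "not ready yet".
    public static bool Until(Func<bool> probe, TimeSpan timeout, TimeSpan interval)
    {
        DateTime deadline = DateTime.UtcNow + timeout;
        while (DateTime.UtcNow < deadline)
        {
            try
            {
                if (probe()) return true;
            }
            catch (Exception)
            {
                // not ready yet; keep polling
            }
            Thread.Sleep(interval);
        }
        return false;
    }
}
```

With that in place, WaitForElement reduces to a one-liner like `Poll.Until(() => selenium.IsElementPresent(elementName), TimeSpan.FromSeconds(120), TimeSpan.FromSeconds(1))`.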

I have seen some other suggestions from people on stabilizing selenium tests, but would love to hear from anyone else who has worked out a way to make their selenium tests less fail-prone.

Tuesday, May 5, 2009

Back to blogging ... Selenium and modal dialogs

It has been *way* too long since my last blog post, but another post is better late than never, right?

Most recently, I have struggled through automating a heavily AJAX-ey and fairly complex web interface using Selenium.

I have a setup that looks like this:
Visual Studio C# Express
Selenium RC
IE Developer Toolbar, for identifying objects on the page
(and now AutoIt)
The occasional use of the Selenium IDE for speed and efficiency's sake.

I was having this particularly stubborn problem with a little dialog that popped up to confirm completion of some action, such as "User profile saved successfully". To this day, I am not 100% certain that I can identify what to call this guy: A pop-up? Alert? Confirmation? IE modal dialog?

Selenium IDE did not record or recognize in any way that this dialog exists. Further, the IE Developer Toolbar does not recognize the dialog when it's up, either. All of this makes identifying it a bit difficult.

In my Selenium script, I tried identifying it as a pop-up, an alert, a confirmation, and a prompt, and none of them worked. My research indicated that this might be an "IE modal dialog" -- which is not supported by Selenium.

While reaching out to the agile-testing Yahoo group for help, I also asked the developers how this dialog was being created. They showed me some code that simply called a JavaScript alert. I was confused by that, since Selenium RC should capture JavaScript alerts and close them. (Eventually I figured out that if the alert was called without a page redirect, Selenium behaved as expected, but if it was called as part of a page redirect function, Selenium failed to get it.)
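For the non-redirect path, where Selenium RC does intercept the alert, the usual pattern is to consume it right after the triggering action via IsAlertPresent()/GetAlert(). Here is a sketch of that pattern; the fake class below stands in for the real DefaultSelenium object just so the example is self-contained, and the message text is only an example:

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-in for the Selenium RC client, exposing only the two
// alert calls the pattern needs. In a real test this would be the
// DefaultSelenium instance.
class FakeSelenium
{
    private readonly Queue<string> alerts = new Queue<string>();
    public void RaiseAlert(string text) { alerts.Enqueue(text); }
    public bool IsAlertPresent() { return alerts.Count > 0; }
    public string GetAlert() { return alerts.Dequeue(); } // consumes the alert
}

static class AlertPattern
{
    // Check for and consume a pending alert so the next Selenium command
    // does not choke on it; returns the alert text, or null if none.
    public static string ConsumePendingAlert(FakeSelenium selenium)
    {
        if (selenium.IsAlertPresent())
            return selenium.GetAlert();
        return null;
    }
}
```

The important detail is that GetAlert() both reads and dismisses the alert, so leaving one unconsumed is what derails the following commands.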

Here is what the code that calls this dialog looks like:

The only difference between that and a similar dialog that Selenium correctly handles is the couple of lines that do the redirect ("parent.SetIframeMainModule"). I am still unclear about why that throws a wrench into my Selenium script, but the bottom line was that my Selenium script was not running!

Chris McMahon responded to my thread on the agile-testing list that he had successfully used AutoIt to "hit it with a hammer". After a little searching online, I found a blog post where someone else had used this tool -- in this case, the poster created an executable that looked for windows with certain visible text and simply closed them. This script runs separately from the Selenium test, constantly polling the open windows. It's an ugly hack, but when I tried it, it worked and allowed my Selenium tests to run.
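The "hit it with a hammer" watcher can be sketched independently of AutoIt: a loop that polls for windows matching some text and dismisses whatever it finds. The window finder and closer are injected as delegates here (the real thing would use Win32 calls, or AutoIt itself), and all of the names are my own invention:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

// A watcher in the spirit of the AutoIt hack: on each pass, find every
// offending window and close it. Meant to run alongside the Selenium test.
class DialogSwatter
{
    private readonly Func<IEnumerable<string>> findWindows;
    private readonly Action<string> closeWindow;

    public DialogSwatter(Func<IEnumerable<string>> findWindows, Action<string> closeWindow)
    {
        this.findWindows = findWindows;
        this.closeWindow = closeWindow;
    }

    // Run a fixed number of polling passes; returns how many windows were closed.
    public int Sweep(int passes, TimeSpan interval)
    {
        int closed = 0;
        for (int i = 0; i < passes; i++)
        {
            // Snapshot the matches so closing them doesn't disturb the enumeration.
            foreach (string title in findWindows().ToList())
            {
                closeWindow(title);
                closed++;
            }
            Thread.Sleep(interval);
        }
        return closed;
    }
}
```

In the real hack the "find" delegate would match windows by their visible text (e.g. "User profile saved successfully") and the "close" delegate would send them a close message.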

To give proper credit, here is a link to the blog that posted this particular solution:

Get rid of those pesky IE dialogs with AutoIt

I would still love to hear about more graceful solutions ... I understand that Watir has found a way to handle these types of issues, and would love to see Selenium also be able to handle them. Please comment if you have solved a similar problem in a different way!