Wednesday, 8 September 2010

We test the system and gather evidence

I have noticed that it has been a while since I last blogged, due to family commitments and vacation time. I had an idea to blog about the importance of gathering evidence when testing, especially when using an exploratory testing approach. I decided to take an example of why it is important from an internal workshop that I run on exploratory testing.

So are you sitting comfortably?

Here is the story of Timmy the Tester and Elisa the Explorer

Timmy Tester

Timmy Tester has been given a new build to test and decides he is going to test it using the exploratory approach. Timmy writes a mission statement saying he is going to verify that function x has been implemented.

He installs the release and starts entering different values and pressing buttons around the new feature at random. He does this for about half an hour and then, all of a sudden, the application crashes.

Timmy takes a screenshot and a system dump of the crash and goes to talk to the developer. The first question the developer asks is:

“Have you tried to reproduce the problem?”

At this point Timmy says no and goes back to try to reproduce the problem.

Two days later, Timmy has been unable to reproduce the problem and now puts it down to one of those strange one-off things.

Three months later the application goes live on the customer site. Within 30 minutes there are a large number of calls to support stating that the application is crashing. The problem is passed back to Timmy, who notices that it appears to be the same one they saw when carrying out ‘exploratory’ testing….

Elisa the Explorer

Elisa has been given a new build to test and decides she is going to test it using the exploratory approach. Elisa creates a mission statement stating that she is going to test the new function.

Elisa installs the new application and starts to enter different values around the new feature. As she is doing this, Elisa has another computer on which she makes notes and takes screenshots at relevant points, to clearly record each step she has carried out. At certain points Elisa finds behaviour of the system which does not seem correct, so she starts another mission statement to look into that behaviour. Elisa then examines the strange behaviour in more detail, making notes of the steps she is carrying out as she goes. All of a sudden, when pressing a button, the application crashes.

Elisa makes some notes, takes a screenshot and a system dump of the crash.

Elisa then resets the application back to a clean system and repeats the last set of steps which she had made a note of. The crash happens again.

Elisa then goes to see the developer and states that she has managed to reproduce the problem more than once, and here are the steps.

Elisa sits with the developer while they go through the steps together and the developer sees the crash.

Two days later Elisa is given a fix for the crash. She has already written an automated test for the crash, so she runs it straight away. The test passes and Elisa continues with the rest of her testing.
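Elisa's automated test could look something like the sketch below. Everything here is hypothetical — the `Application` class and its methods are stand-ins for the real system under test — but it shows the idea: replay the exact steps from the session notes and assert that the crash no longer happens.

```python
# Hypothetical sketch of a regression test built from Elisa's noted steps.
# The Application class is an assumption standing in for the real system;
# a real test would drive the actual application under test.

class Application:
    """Stand-in for the system under test (after the fix)."""

    def __init__(self):
        self.value = None

    def enter_value(self, value):
        self.value = value

    def press_button(self):
        # Before the fix, pressing the button with no value entered crashed.
        if self.value is None:
            raise RuntimeError("crash: button pressed with no value entered")
        return f"accepted: {self.value}"


def test_crash_steps_no_longer_crash():
    """Replay the exact steps from the session notes that caused the crash."""
    app = Application()        # step 1: start from a clean system
    app.enter_value("x")       # step 2: enter the value recorded in the notes
    result = app.press_button()  # step 3: press the button - must not crash
    assert result == "accepted: x"
```

Because the steps were written down at the time, turning them into a repeatable check is almost mechanical — which is exactly why the notes matter.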

It may seem like common sense, but I have seen far more Timmys than Elisas among people who say they are using exploratory testing. It is extremely important to record everything, and to remember that exploratory testing does not remove any of the principles of testing:

“All tests are repeatable”
“All problems are reproducible.”

There are many ways we can gather evidence of our testing sessions, and there are a large number of tools available to the exploratory tester. Just as the first humans to explore the North Pole took tools with them to help and support their efforts, exploratory testers can do the same when exploring the system under test.
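Even without a dedicated tool, the core idea is small: timestamp every observation and keep it somewhere that survives a crash. Here is a minimal sketch of that idea — the class name, note format and mission text are all my own assumptions for illustration, not any particular tool.

```python
# Minimal sketch of a session-note logger: each observation is timestamped
# as it is made, so the evidence trail exists even if the app crashes next.

import datetime


class SessionLog:
    def __init__(self, mission):
        self.mission = mission
        self.entries = []

    def note(self, text):
        """Record one timestamped observation."""
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        entry = f"{stamp}  {text}"
        self.entries.append(entry)
        return entry

    def dump(self):
        """Return the whole session record, mission first."""
        header = f"Mission: {self.mission}"
        return "\n".join([header] + self.entries)


log = SessionLog("Explore the new function x")
log.note("entered value 42 in the amount field")
log.note("pressed Save twice in quick succession -> app crashed")
print(log.dump())
```

A real tool would add screenshots, log capture and so on, but even this much is enough to walk a developer through the exact steps afterwards.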

Maybe I should look at some of these tools and write a blog post about them. Or, even better, people who read this blog might be able to suggest some good tools that they have had experience of using.


  1. We used Session Tester and Rapid Reporter over the past year in Weekend Testing. Session Tester is great for the purpose, while Rapid Reporter takes the idea of screenshots even a bit further. These tools are definitely worth using when doing a lot of exploratory testing. Occasionally pen & paper work for me, too.

  2. "It is extremely important to record everything"
    Do you really record everything you do?
    Doesn't it take away a lot of time that you could spend doing testing?

    My experience is that I usually remember what I did that could have caused the issue, and if not I either use artefacts or log files, or my experience, to provoke the issue from scratch.
    It has certainly happened that I haven't been able to reproduce issues, but I firmly believe those wouldn't have been reproducible even with detailed notes of everything "I" did (the system does a lot of things as well...)

    I prefer taking summary notes now and then, depending on the information required by others.

    I guess there might be audit concerns requiring detailed information about run tests, but wouldn't a video and keyboard recorder be more effective?


  3. You make some important points Rikard and I agree entirely with your comments. The aim of the blog post is to say that you must gather evidence; how you do this is up to you. I am not against the use of tools, and I encourage anyone working in an exploratory way to use whatever tools they can to gather evidence. I use video capture, logs, key loggers and various other tools that, as you rightly say, give me more time for testing. Maybe this post was too simplistic. I hope this explains a little more, and thank you for your comments.

  4. Hi Markus

    Thanks for the information on the tools, I have used each of these with various amounts of success. I do like Session Tester, just for the fact that it gives you hints when you're running out of steam within your session.

  5. Good post. Once again someone has to do the job (and you've done it) of sharing with the software world what's been accepted practice all along in other fields. Here, it's about the lab book. Exploratory testing is experimentation. Show me a lab scientist who doesn't keep a lab book ...

    Now the good side of software is supposed to be that the shoemaker doesn't go barefoot. So it makes a lot of sense that software testers, proficient with all sorts of automated or computerized tools, should apply them to make notetaking in a lab book easier.

    My personal preference, albeit for coding rather than testing, is a text editor. I tend to keep a running commentary of what I'm doing and trying. Should keep me from repeating my mistakes too frequently.