I have noticed that it has been a while since I last blogged, due to family commitments and vacation time. I had an idea to blog about the importance of gathering evidence when testing, especially when using an exploratory testing approach. I decided to take an example of why it is important from an internal workshop that I run on exploratory testing.
So are you sitting comfortably?
Here is the story of Timmy the Tester and Elisa the Explorer
Timmy the Tester has been given a new build to test and decides he is going to test it using an exploratory approach. Timmy writes a mission statement: to test that function x has been implemented.
He installs the release and starts entering different values and pressing buttons around the new feature at random. He does this for about half an hour, and then, all of a sudden, the application crashes.
Timmy takes a screenshot and a system dump of the crash and goes to talk to the developer. The first question the developer asks is:
“Have you tried to reproduce the problem?”
At this point Timmy says no and goes back to try to reproduce the problem.
Two days later, Timmy has been unable to reproduce the problem and now thinks it could have been one of those strange things.
Three months later the application goes live on the customer site. Within 30 minutes there are a large number of calls to support stating that the application is crashing. The problem gets passed back to Timmy, who notices that it appears to be the same as the one he saw when carrying out his ‘exploratory’ testing…
Elisa the Explorer
Elisa has been given a new build to test and decides she is going to test it using an exploratory approach. Elisa creates a mission statement stating that she is going to test the new function.
Elisa installs the new application and starts to enter different values around the new feature. As she does this, Elisa uses a second computer on which she makes notes and takes screenshots at relevant points to clarify each step she has carried out. At certain points Elisa finds behaviour of the system which does not seem correct, so she starts another mission statement to look into it. Elisa then examines the strange behaviour in more detail, noting the steps she is carrying out as she goes. All of a sudden, when she presses a button, the application crashes.
Elisa makes some notes and takes a screenshot and a system dump of the crash.
Elisa then resets the application back to a clean system and repeats the last set of steps she noted down. The crash happens again.
Elisa then goes to see the developer and states that she has managed to reproduce the problem more than once, and here are the steps.
Elisa sits with the developer while they go through the steps together and the developer sees the crash.
Two days later, Elisa is given a fix for the crash. She has already turned her reproduction steps into an automated test for the crash, so she runs it straight away. The test passes and Elisa continues with the rest of her testing.
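To make that last step concrete, here is a minimal sketch of what turning Elisa's noted reproduction steps into an automated regression test might look like. The application under test is simulated by a tiny function, and all the names and the crash scenario are invented for illustration, since the story doesn't say what the real application was.

```python
# Hypothetical sketch: the reproduction steps from a session's notes,
# replayed as an automated regression test against the fixed build.

def new_feature(value):
    """Stand-in for the fixed feature; the original build crashed on empty input."""
    if value == "":
        return "rejected"            # the fix: handle the input instead of crashing
    return "accepted: " + value

def test_crash_reproduction_steps():
    """Replay the exact steps from the session notes that triggered the crash."""
    assert new_feature("normal value") == "accepted: normal value"  # step 1: routine input
    assert new_feature("") == "rejected"  # step 2: the input that crashed the old build

test_crash_reproduction_steps()
print("regression test passed")
```

Because the test encodes the same steps that originally caused the crash, it can be re-run against every future build to make sure the problem never quietly returns.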
It may seem like common sense, but I have seen more cases of Timmy than Elisa among people who have said they are using exploratory testing. It is extremely important to record everything, and to remember that exploratory testing does not remove any of the principles of testing:
“All tests are repeatable.”
“All problems are reproducible.”
There are many ways we can gather evidence of our testing sessions, and there are a large number of tools available to the exploratory tester. In much the same way that the first humans who set out to explore the North Pole took tools with them to help and support their efforts, exploratory testers can do the same when exploring the system under test.
Maybe I should look at some of these tools and write a blog post about them – or even better, people who read this blog might suggest some good tools that they have experience of using.