Pro-testing Against Bad Software Quality


Of Algorithms and Heuristics

It occurred to me, while reading James Bach's blog post Exploratory Testing is not “Experienced-Based” Testing, that I wasn't 100% sure I understood the difference between an algorithm and a heuristic. I did have a general idea but that wasn't enough; I wanted the gory details. So I decided to read up on them, and that sparked the idea to blog about the subject in order to expose my understanding of the two to critique, so that any potential misunderstandings can be corrected.

Do note that this post is not an objective treatment of the two (as applied to testing), and I'm not even trying to make it one.


Wikipedia defines an algorithm as:

In mathematics and computer science, an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Algorithms are used for calculation, data processing, and automated reasoning. In simple words an algorithm is a step-by-step procedure for calculations.

Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.
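To make the definition concrete, here's a small illustration of my own (not from Wikipedia): Euclid's algorithm for the greatest common divisor, which ticks every box above - a finite list of well-defined instructions, a guaranteed terminating state, and identical output for identical input, every time.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite list of well-defined instructions.

    Starting from the initial input, each loop iteration moves through a
    well-defined successive state, and the procedure always terminates
    with the same output for the same input.
    """
    while b != 0:
        a, b = b, a % b
    return a

# Identical runs produce identical results:
gcd(48, 18)  # -> 6
```

There is no room for judgment anywhere in that procedure - which is exactly why it makes a fine automated check and a poor model for testing.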

Immediately, in my mind, this doesn't chime very well with testing at all. Sure, algorithms sound like the perfect tool to use when making (automated) checks but for the sapient process of exploration and learning that is testing, the above definition just doesn't cut it. Below, I have listed some reasons why (I'm sure there are plenty more but these are just the ones that immediately occurred to me):

  1. An algorithm depends on a "finite list of well-defined instructions"
  2. An algorithm depends on the execution being deviation-free, as in, all successive states must be known beforehand and each identical run must produce identical results
  3. As a continuation to the previous point, an algorithm expects the ending state (and ending point!) to always be known regardless of inputs used, execution paths taken or external effects experienced
  4. Finally, "automated reasoning" does not sound like something I would want my product to depend on!

Somehow, all of the points above seem to perfectly match with the "traditional" test case approach used in so many (software) projects around the world. You know, the one where someone creates a set of test cases with exact steps to take, specific inputs to enter and detailed results to expect. The one where, after that, some unfortunate tester has to run through that set with military discipline while being denied the advantage of his or her testing experience, intuition, domain knowledge, imagination and creativity - among other beneficial qualities that might help produce good results.


I found Wikipedia's definition of a heuristic unsatisfactory; for example, it leans on "common sense", a term I dislike since it is as meaningless as "best practice". So here's Merriam-Webster's (better, in my opinion) definition instead:

: involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods <heuristic techniques> <a heuristic assumption>; also : of or relating to exploratory problem-solving techniques that utilize self-educating techniques (as the evaluation of feedback) to improve performance <a heuristic computer program>

To me, this sounds like a much better match with testing (with which I still refer to the sapient process of exploration and learning). As the definition states:

  1. Heuristics are "an aid"
  2. Heuristics are trial-and-error, i.e. fallible, methods for finding (satisfactory) solutions
  3. Heuristics can be used as an aid for self-education
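Again, a small illustration of my own (the function and its parameters are mine, purely for demonstration): a hill-climbing search is a classic heuristic - it proceeds by trial and error, uses feedback from each attempt to educate its next guess, and is fallible, since it may settle on a merely "good enough" answer or get stuck on a local peak.

```python
import random

def hill_climb(score, start, steps=1000, seed=0):
    """A heuristic: trial-and-error guided by feedback.

    Each iteration tries a nearby guess and keeps it only if the feedback
    (the score) improves - the search educates itself as it goes. It is
    fallible: nothing guarantees it finds the best possible answer.
    """
    rng = random.Random(seed)  # seeded only to make the demo repeatable
    best = start
    for _ in range(steps):
        candidate = best + rng.uniform(-1.0, 1.0)  # trial: a nearby guess
        if score(candidate) > score(best):         # error check: keep if better
            best = candidate
    return best

# Finds a value near the peak at x = 3, but offers no certainty of it.
peak = hill_climb(lambda x: -(x - 3.0) ** 2, start=0.0)
```

The contrast with the gcd-style procedure is the point: the heuristic aids the search without promising a known ending state, and that built-in uncertainty is precisely what makes it a better model for testing.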

All three points above seem to me to reinforce and support good testing and the effort to become a better tester. One crucial reason why, in my opinion, is that the definition of a heuristic embraces the very uncertainty that the definition of an algorithm does its best to take away.

I find it highly unlikely that we could ever know all the factors at play when testing, so I dare claim that uncertainty is built into the very essence of all testing. Attempting to remove it seems to me like an attempt to lull oneself into a false sense of security and a dangerous, unfounded confidence that should not be there.

As Michael Bolton said during the first Rapid Software Testing course in Helsinki: "We testers are in the (false) confidence-demolition business". Hear hear.


ParkCalc done differently

The third testing dojo was organized on Tuesday, May 31st in Helsinki. It was also the last testing dojo before summer break. We will be back in August!

In the earlier testing dojos the format was really simple: pairwise exploratory testing of a selected application with short debriefings after each 5-minute session, a break in the middle and then more testing before wrap-up and free conversation. Based on the feedback received, some people found this boring, and I can easily see why: during the entire 3-hour dojo, people only got to participate actively for a total of 10 minutes (5 minutes recording and 5 minutes testing) and the rest of the time they were only allowed to observe.

While I do like this format as well, I felt it needed something more to keep the event interesting and engaging. So, this time we did things in a completely different manner.

Changes in the programme

I had a couple of ideas before the dojo and only decided on the way we were going to go a mere 5 minutes before the session started. Here's what we did in the third testing dojo:

Participants were divided into 3 groups of 3-4 people each. Then the application to be tested was introduced (the infamous Gerald R. Ford International Airport parking calculator aka ParkCalc). The mission was: "You have been told to test this application. Each group will have about 15 minutes for the testing so you know you will not have enough time to cover everything. Come up with ideas on how you feel you could best test the application within the given time".

After 15 minutes of discussion (and after emphasizing that people are free to collaborate between groups as well) the groups were told to select the 3 ideas they saw as the most important ones and write them on a piece of paper. Since I am a mean, mean person, there was a twist at this point: I collected the papers, shuffled them and then gave them back so that no group would be testing their own top-3 ideas.

At this point we had a break, because I wanted to silently encourage people to use all the time available for collaboration. Plus, I had told the participants the same thing Michael Bolton told us during the Rapid Software Testing course I had attended a week before: it's okay to cheat. In this context: if you have limited time for preparations, it's okay to take advantage of any time that you have together - and I'm happy to say people did that, too!

After the break, each group had 5 minutes to ask clarifying questions of the group that had prepared the ideas, in order to gain a common understanding of what was wanted and to be able to test those ideas effectively. Then we started the actual testing. We had a brief discussion on how the groups would like to perform the test execution, and it was decided that one person from each group sits at the computer while the other members feed him/her ideas and suggestions on what to test next. At this point we were a little over halfway through the time available for the dojo, and I was very happy that the time had been spent on continuous collaboration within and between the groups.

Each group had about 15 minutes for test execution, constantly telling other participants what it was they were doing and why. After each session the group that had prepared the ideas got to assess the work done and decide if their requirements had been met or not.

After the test execution rounds and debriefings it was time for sauna and free conversation.


At the end of the dojo, I asked people to give feedback about the new format and the dojo in general and, as promised, here are the comments received (exactly as written):

- "A (really) short introduction on the purpose of exploratory testing in the beginning would be nice" (We had 4 new people participating in the dojo and I was so concentrated on refining the last details of the new format in my mind that I completely forgot this. Sorry, my bad!)
- "The group was diverse and brought / came with intresting ideas"
- "I don't know if there should have been more moderation for some of the discussions... I'm knew to this format"
- "3 topics in 15 mins means when you find something - need to move on"
- "Planning was good, many ideas"
- "+ Group planning"
- "+ Testing other group's plan"
- "+ Open discussion"
- "Idea connected to lightning talks: have someone give a presentation on some techniques and then try those"
- "Planning in the groups was nice"
- "Give little different focus for each group so that they can have more "fresh" look at the program"
- "Nice session, format worked quite well. Liked the idea of "management team""
- "A less obvious target would've been more interesting"
- "Sandwiches, even for vegetarians"
- "The welcome was a bit "cold", the person opening the door didn't even know about the dojo" (Again, my bad, I should have been at the door earlier myself. Sorry!)
- "Maybe a little more "intimate" time with the software would be good"
- "Bad design leads to people designing, not testing"
- "More focus on methods, easy to start chasing results"
- "Simple example helps keep focus"
- "Hands need to get more dirty"
- "Planning too long w/o access to spec or the software. Less time on planning & two rounds of testing (fb taken into account) would be nice"
- "Prioritizing bugs for their relevance would have been a fun exercise" (We did this in a previous dojo - need to incorporate it to this new format too!)
- "No focus on debriefing / reporting - should practice that"
- "Fixed ideas for third group gets a bit boring..."
- "More active session than previous times"
- "Deviding people in groups makes sense"
- "More communication & collaboration"
- "Better ideas generation etc..."
- "While a group is "performing" other groups may be bored / not follow (maybe it would make sense to keep them "actively" busy)"
- "Continue being proactive and organize those! Thats awesome!"
- "Discussion format(?) Result of the exercise? What was the conclusion? What did overlap / wasn't done? How do we are more efficient next time?"
- "More examples. Examples sell better than reason and to sell is what exp. testing needs"
- "Introduction?"
- "Less arguing, less battle of egos, more possibilities for creative thinking (is there really right or wrong here?)"
- "More "important" sw to test. Now people were thinking less of their testing, not yielding everything"
- "+ Groups nice change; less chaotic."
- "Too simple app?"
- ""Management" feedback idea needs improvement."
- "Good: Short missions = structure of dojo (ideas, prioritizing, questions etc.)"
- "Good: Discussion (with whole group & small team)"
- "Missing: Not everybody could do testing"
- "Perhaps reserve time for "best practices discussion""

It was great getting this much feedback. It will all be taken into account when planning the next testing dojo!


Testing Dojo, Helsinki, Nov 11th

UPDATE Dec 9th

Unfortunately I had to reschedule the testing dojo due to a work situation. Updated info, with a link to registration, below:

The next testing dojo will be organized on Wednesday, December 15th, at 18:00-21:00 (with time for sauna and free conversation afterwards) in Uoma premises, as mentioned in the original post. For more information and registration, see: Testing Dojo II.


Last night, I participated in my first-ever testing dojo, right here in Helsinki, Finland. The event was organized by Tiina Kiuru in Reaktor premises. We had a total of 12 participants in addition to Tiina, with people's positions ranging from developers through testers to managers in a multitude of different kinds of companies. I thought this was a very promising start since the variety of the people and positions was guaranteed to also offer a wide variety of perspectives and approaches to testing. We even had some people in the dojo saying that they've never really done any kind of testing but were curious to learn more. Hear hear, congratulations to those people for participating!

Testing, testing

The purpose of the dojo was to do exploratory testing in pairs on a specific application (Pixlr Editor in this case) in a tightly time-boxed manner (5 minutes per tester). The testing was performed with one person doing the actual testing and another person writing down notes on the findings, what's been tested and what could be tested further in the future. The person performing the testing was also responsible for walking the audience through with what (s)he was testing and why. At first, people in the audience made suggestions on what to test but it was then agreed that the audience should only ask questions and not interfere otherwise unless the tester pair got stuck on something or ran out of ideas on what to test next.

We had simple, varying missions to perform like:
- open a specific picture and try to make it look like a printed target picture
- test the filters, see if they behave consistently and if they could be abused somehow
- test the history, see how many actions it will record and if the actions are recorded consistently

After each 5-minute testing session we had a short debriefing on what was tested and what we learned, which, in some cases, gave us good ideas on what the next person could start their testing with.


At the end of the 3-hour dojo we were handed colored post-it notes to write down what was bad, what was good, what we learned and what could be improved in the future. Personally, I was absolutely delighted to see people were so enthusiastic about the concept, and it showed in the notes as well; there were a lot of good comments on what people learned and how things could be improved. The open conversation and sauna afterwards were really nice too!

This being the first time for a lot of people (myself included) to perform this kind of testing exercise, there was a bit of disorder in the beginning, but in my opinion we got things under control pretty nicely (like forbidding the audience from interfering, which many of the participants found distracting) and I'm positive we'll do much better in the next testing dojo.

What's next?

Speaking of the next time, Tiina and I agreed that I will organize the next testing dojo with her help, and it will be held in Uoma premises in Punavuori (Merimiehenkatu 36D, 3rd floor, to be exact). The date for the next dojo is still open, but I would expect it to be sometime mid-December or so. Tiina and I will discuss the exact details next week, and I will post an update to the blog once we've made some decisions.