Pro-testing Against Bad Software Quality

25 Dec 2011

Of Algorithms and Heuristics

It occurred to me, while reading James Bach's blog post Exploratory Testing is not “Experience-Based” Testing, that I wasn't 100% sure I understood the difference between an algorithm and a heuristic. I did have a general idea, but that wasn't enough; I wanted the gory details. So I decided to read up on them, which sparked the idea to blog about the subject and expose my understanding of the two for critique, so that any potential misunderstandings can be corrected.

Do note that this post is not objective about how the two apply to testing, and I'm not even trying to make it so.

Algorithms

Wikipedia defines an algorithm as:

In mathematics and computer science, an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Algorithms are used for calculation, data processing, and automated reasoning. In simple words an algorithm is a step-by-step procedure for calculations.

Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.
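As a concrete illustration of that definition (my own example, not Wikipedia's), Euclid's greatest-common-divisor procedure fits it exactly: a finite list of well-defined instructions that marches through successive states and always terminates with the same output for the same input. A minimal sketch in Python:

    def gcd(a, b):
        # Euclid's algorithm: a finite, well-defined sequence of steps
        # that always terminates with the greatest common divisor.
        while b != 0:
            a, b = b, a % b  # each iteration is a well-defined successive state
        return a

    print(gcd(48, 18))  # -> 6, the same output on every single run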

Immediately, in my mind, that definition doesn't chime very well with testing at all. Sure, algorithms sound like the perfect tool to use when making (automated) checks, but for the sapient process of exploration and learning that is testing, the above definition just doesn't cut it. Below, I have listed some reasons why (I'm sure there are plenty more, but these are just the ones that immediately occurred to me):

  1. An algorithm depends on a "finite list of well-defined instructions"
  2. An algorithm depends on the execution being deviation-free; that is, all successive states must be known beforehand and each identical run must produce identical results
  3. As a continuation of the previous point, an algorithm expects the ending state (and ending point!) to always be known regardless of the inputs used, execution paths taken or external effects experienced
  4. Finally, "automated reasoning" does not sound like something I would want my product to depend on!

Somehow, all of the points above seem to match perfectly the "traditional" test case approach used in so many (software) projects around the world. You know, the one where someone creates a set of test cases with exact steps to take, specific inputs to enter and detailed results to expect. The one where, after that, some unfortunate tester has to run through that set with military discipline while being denied the advantage of his or her testing experience, intuition, domain knowledge, imagination and creativity, among other beneficial qualities that might help produce good results.
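To make the analogy concrete, here is a minimal sketch (Python again, with a hypothetical login function standing in for the product under test; none of this comes from the definitions above) of what such an algorithmic, scripted check looks like: fixed steps, fixed inputs, one predetermined expected result.

    def login(username, password):
        # Placeholder stand-in for the product under test, just so the
        # example runs on its own.
        return username == "alice" and password == "s3cret"

    def scripted_login_check():
        # Step 1: use exactly these credentials.
        username, password = "alice", "s3cret"
        # Step 2: perform the action.
        result = login(username, password)
        # Step 3: compare against the single predetermined expected outcome.
        assert result is True, "login failed for known-good credentials"
        return "PASS"

    print(scripted_login_check())  # identical run, identical result

Run it a thousand times and it will take the same path to the same ending state every time, which is exactly the property the points above describe and exactly what leaves no room for the tester's judgement.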

Heuristics

I found Wikipedia's definition of a heuristic unsatisfactory; for example, it refers to "common sense", which I dislike since it is as meaningless a term as "best practice". So here is Merriam-Webster's (better, in my opinion) definition instead:

: involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods <heuristic techniques> <a heuristic assumption>; also : of or relating to exploratory problem-solving techniques that utilize self-educating techniques (as the evaluation of feedback) to improve performance <a heuristic computer program>

To me, this sounds like a much better match with testing (by which I still mean the sapient process of exploration and learning). As the definition states:

  1. Heuristics are "an aid"
  2. Heuristics are trial-and-error, i.e. fallible, methods for finding (satisfactory) solutions
  3. Heuristics can be used as an aid for self-education

All of the three points above seem to me to reinforce and support good testing and the effort to become a better tester. One crucial reason why, in my opinion, is that the definition of a heuristic embraces the very uncertainty that the definition of an algorithm does its best to take away.
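As a loose illustration (my own sketch, not anything from the definition), here is what a trial-and-error, feedback-evaluating heuristic can look like in code: a simple hill climber that keeps probing, scores each attempt, and uses that feedback to decide what to try next. It offers no guarantee of finding the best answer, only a satisfactory one.

    import random

    def hill_climb(score, start, step=1.0, attempts=200):
        # Trial-and-error search: propose a nearby candidate and keep it
        # only if the feedback (its score) improves. Fallible by design;
        # it may settle for a merely satisfactory answer.
        best, best_score = start, score(start)
        for _ in range(attempts):
            candidate = best + random.uniform(-step, step)
            if score(candidate) > best_score:  # learn from the feedback
                best, best_score = candidate, score(candidate)
        return best, best_score

    # Hypothetical objective: how "interesting" an input value is.
    print(hill_climb(lambda x: -(x - 3) ** 2, start=0.0))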

I find it highly unlikely that we could ever know all the factors at play when testing, so I dare claim that uncertainty is built into the very essence of all testing. An attempt to remove it seems to me like an attempt to lull oneself into a false sense of security and a dangerous, unfounded confidence that should not be there.

As Michael Bolton said during the first Rapid Software Testing course in Helsinki: "We testers are in the (false) confidence-demolition business". Hear, hear.
