Pro-testing Against Bad Software Quality

27 Dec 2013

A Little Help with Jolla

Table of Contents:

  1. Introduction
  2. Enable Developer Mode
  3. Extend battery life by disabling the buggy NFC detection
  4. Share your Jolla Internet connection
  5. Fix the broken "From:" field in outgoing emails
  6. Add your own shortcuts to FingerTerm
  7. Force FingerTerm to source .bashrc and .bash_profile when launched
  8. References

Introduction

Note: I will be updating this post whenever I come across new useful tips, so do check back from time to time.
Last updated: Feb 4th, 17:12 (updated the paragraph about NFC detection since the issue has now been fixed).

Unlike earlier posts, this post is not about testing - at least not directly. This post is about the new smartphone Jolla, made by a band of Finnish ex-Nokia employees who felt that MeeGo, a mobile operating system dumped by Nokia, was worth saving.

Obviously, I can't help doing some testing while using a new device, and I've already discovered a bunch of issues with it, but I will not cover them in detail in this post. Instead, I am going to concentrate on the various tips and tricks that you can use to improve your Jolla device (and that serve as reminders for myself, should I ever need to perform all these tweaks and fixes again).

This post assumes a basic level of Linux expertise, as all of these things will be done on the command line, using the phone's terminal (or a remote SSH connection), after you enable Developer mode.


Enable Developer Mode

- Go to: Settings -> System Settings -> Developer mode (under "Security")
- Enable Developer mode and accept Developer Terms
- Enable Remote connection (only necessary if you plan to SSH into the phone, which I personally find useful since it's much faster to type on a physical keyboard)
- Set a password (this will be your devel-su password later on)

After this step, when you browse the phone's applications, you will find a new app called "Terminal" on the list. You will need this to make the changes listed below - unless you decide to SSH into the phone through USB instead.
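For example, with the default settings you should be able to connect from a computer attached over USB with the line below (192.168.2.15 is the default IP of Jolla's USB interface and, if I remember correctly, nemo is the default user account):

ssh nemo@192.168.2.15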

Extend battery life by disabling the buggy NFC detection

UPDATE Feb 4th: The 1.0.3.8 update "Naamankajärvi", released on Jan 31st, addresses this issue and it is no longer necessary to perform the steps listed below. I will leave the advice as it is, though, just in case it should become relevant again - for whatever reason.

There have been complaints about Jolla's poor battery life and the problem seems to boil down to a buggy piece of code that causes the device to "ping" the NFC chip on The Other Half (the back cover) non-stop[1]. While you could get rid of this issue by removing the back cover entirely, removing the NFC chip from the inside of the back cover or, apparently, even covering it with tin foil, there is a better solution: disable the service responsible for the NFC detection. Here's how to do that:

Launch terminal (or SSH into your Jolla), run "devel-su" to become root and type in the following:

systemctl mask tohd.service
systemctl stop tohd.service

This small tweak stops the buggy service and prevents it from restarting when you reboot your device. As a result, this should extend your Jolla's battery life (while idle) from less than 24 hours to several days.

Note: Doing this will also prevent your phone from switching to the ambiences etc. provided by The Other Half, but the changes are easy to undo once the issue has been fixed (or whenever you want the phone to react to The Other Half again): just replace "mask" with "unmask" and "stop" with "start".
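In other words, to undo the tweak later:

systemctl unmask tohd.service
systemctl start tohd.service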

Share your Jolla Internet connection

When Jolla was launched, there was no application for tethering, i.e. sharing the phone's Internet connection. At the time of writing, a (WLAN/BT) tethering app is available through the Jolla Store, and USB tethering support will be added in a future update[2].
Here's how you can share your Jolla's Internet connection through USB[3] while waiting for the system update that makes this a built-in feature of the device:

Launch terminal (or SSH into your Jolla), run "devel-su" to become root and type in the following:

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o rmnet0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
iptables -A FORWARD -i rmnet0 -o rndis0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i rndis0 -o rmnet0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o rndis0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i rndis0 -o wlan0 -j ACCEPT

Note: Whenever Jolla is rebooted, for whatever reason, these rules will be cleared, so I would recommend storing all of the above in a shell script that you can execute (as root) whenever you want to re-add the rules. I'm sure this could be done automatically whenever the device starts up but I haven't done that yet (to be honest, I don't know exactly where the script should be stored. Anyone?)
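For reference, here is a minimal sketch of such a script; the file name and location are just my suggestions (save it as e.g. ~/tethering.sh, make it executable with "chmod +x tethering.sh" and run it as root after each reboot):

#!/bin/sh
# Re-enable Internet connection sharing after a reboot.
# rmnet0 = GSM, wlan0 = WLAN, rndis0 = USB (see the note below).

# Let the kernel forward packets between interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward

# Masquerade traffic going out through either uplink
iptables -t nat -A POSTROUTING -o rmnet0 -j MASQUERADE
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE

# Forward traffic between the USB interface and the uplinks
iptables -A FORWARD -i rmnet0 -o rndis0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i rndis0 -o rmnet0 -j ACCEPT
iptables -A FORWARD -i wlan0 -o rndis0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i rndis0 -o wlan0 -j ACCEPT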

Note: rmnet0 is the GSM interface, wlan0 is the WLAN interface, and rndis0 is the USB interface. I added the WLAN part since it was not mentioned in the referenced post and I felt it would be stupid not to share both connections, especially since Jolla tends to hop between the WLAN and GSM connections quite a bit when both are available.

After that, when you want to connect to the Internet through Jolla, you will need to give the connecting device (laptop, desktop, whatever) a private IP in the same 192.168.2.x subnet as the phone (I use 192.168.2.14), set 192.168.2.15 as the default gateway (that is the default IP of Jolla's USB interface) with netmask 255.255.255.0 and use any DNS server you like (I use the Google DNS 8.8.8.8).
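For example, on a Linux machine, assuming the phone shows up as network interface usb0 (the interface name will vary by system), the setup could look something like this, run as root (note that the last line overwrites any existing DNS configuration):

ip addr add 192.168.2.14/24 dev usb0
ip route add default via 192.168.2.15
echo "nameserver 8.8.8.8" > /etc/resolv.conf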

Fix the broken "From:" field in outgoing emails

At the time of writing, there is an issue with outgoing emails in Jolla where, in some cases, the "From:" field shows only an account name, without the domain part - e.g. "From: typo" in my case. Apparently, the reason is that Jolla takes the "From" field value from the login name used when you authenticate to your mail server. In many cases, e.g. Gmail, this is not a problem since your login is your Gmail address. However, if your login name is not a valid email address - let alone the one you want shown in your outgoing emails - you have a problem. Fortunately, there is a fix[4] for this issue:

Launch terminal (or SSH into your Jolla) and type in the following (do not "devel-su" in this case - you don't want to modify the root account):

pkcon install libaccounts-glib-tools
ag-tool list-accounts
ag-tool update-service N email string:emailaddress=account@domain.tld
ag-tool update-service N email string:fullName='Firstname Lastname'

Where "N" is the ID number of the account you wish to modify i.e. "Email" in this case, "account@domain.tld" is the email address you wish to show in your outgoing emails and, "Firstname Lastname" is your name, obviously.

Add your own shortcuts to FingerTerm

Jolla uses FingerTerm as its terminal application and one of the nice things about it is that it allows you to create your own command shortcuts that you can then execute with just two taps (one to open up the menu and one to run the command). Personally, I added a shortcut to quickly SSH to a server I use for my IRC screen. Here's how to create your own shortcuts:

Launch terminal (or SSH into your Jolla) and type in the following (again, without doing "devel-su"):

cd .config/FingerTerm
vi menu.xml (or, if you prefer, you can use nano as the text editor by doing "pkcon install nano")
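To give you an idea of what to add: each shortcut entry in menu.xml pairs a label with the command to run. The snippet below is only an illustration - mimic the existing entries in your copy of the file, as the exact element and attribute names may differ:

<!-- illustrative only; follow the format of the existing entries -->
<shortcut title="IRC screen" command="ssh user@example.com" />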

After this, you can run your shortcuts from the FingerTerm menu by tapping the three horizontal lines on the top-right corner of your screen.

Force FingerTerm to source .bashrc and .bash_profile when launched

By default, launching Terminal on the phone doesn't source the .bashrc and .bash_profile files which, among other things, means your own aliases are not loaded when the Terminal starts. While it may not be a very elegant solution, and should probably be treated as a temporary workaround only, there is a way to fix this, as explained in the related thread[5] on the excellent together.jolla.com site:

Launch terminal (or SSH into your Jolla), run "devel-su" to become root and type in the following:

cd /usr/share/applications
vi fingerterm.desktop (or, if you prefer, you can use nano as the text editor by doing "pkcon install nano")

Then replace the line:

Exec=fingerterm

With:

Exec=sh -c "cd ~; unset POSIXLY_CORRECT; exec fingerterm"

And that's it; after this, your shell will always source the two files when the Terminal is launched.


References

1: http://reviewjolla.blogspot.fi/2013/12/jolla-battery-life-power-consumption.html
2: https://together.jolla.com/question/3798/usb-tethering/
3: http://talk.maemo.org/showpost.php?p=1391586&postcount=804
4: http://murobbs.plaza.fi/1712267254-post4.html (only in Finnish)
5: https://together.jolla.com/question/5561/make-fingertermbash-startup-execute-standard-profilebashrc-scripts/

6 Apr 2013

An Object Is What the Object Does – Or Is It?

It's been a while since my last blog post, and earlier today I came across a post that started a thought process I just had to write down. This post is based on the blog entry titled "Testing vs Checking: 'Tomato' vs 'Tomato'" by Mario Gonzalez of Ortask.

As a quick recap, the gist of Mario's blog post is to criticize the differentiation that James Bach, Michael Bolton and, generally, people aligned with the context-driven approach to testing make between the terms "testing" and "checking". He basically goes on to claim that the whole differentiation is a moot point, or even potentially harmful to the testing community and its future as a whole (in terms of adding confusion to a field already swamped with definitions). Simplified, but that is my interpretation of the text.

[UPDATE April 8th]: As pointed out by @JariLaakso on Twitter, not all context-driven testers necessarily make the distinction between testing and checking, so I can't really say "generally, the people aligned with..." as I am not aware of how many people actually do. I may simply have communicated mostly with people who do make the distinction, so reformat that one sentence in your mind to your liking. [END UPDATE]

However, I am not going to go any deeper into that or any of the other problems I perceived while reading through the post, at least not for now - I'm sure other people will. Instead, I will concentrate on the one thing that bothered me the most at the time of reading it: Mario's definition of a test. So I decided to dissect it. Here is the definition he offered:

a test attempts to prove something about a system

First of all, that is not even a definition of a test - it is a definition of the purpose of a test. Let me clarify with an analogy. Consider this for a definition of a car:

A car is used to transport people and goods from one place to another

Now, based on that "definition" alone, answer me the following questions:

  1. What does a car look like?
  2. What principles and mechanisms does a car operate on?
  3. Under what conditions and circumstances can a car be used to transport people and goods from one place to another?

You can't, can you? That's because I haven't given you a definition of a car - only what it's typically used for. In other words:

Defining what an object does does not define what the object is.

While still lacking and incomplete, a definition of a car could be something like: "A car is a typically four-wheeled, land-based vehicle that usually operates on the principle of an internal combustion engine turning the energy contained within a liquid fuel into mechanical movement through a series of controlled explosions the pressure of which cause a crankshaft to rotate and apply torque to the vehicle's drive wheels".

While the non sequitur I pointed out above would be reason enough to stop going any further, I want to go through the definition (of the purpose of a test) for the sake of completeness:

  • "A test" - Now what is that? In this context the question is impossible to answer - Mario hasn't told us!
  • "attempts" - How does "a test" attempt anything? It's not a sentient being. It would seem to me it is the tester who is the one to make the attempt through performing a test, making observations, analyzing and interpreting data, behavior and results before, during and after performing a test.
  • "to prove" - What, exactly, constitutes proof? Here are some definitions of the term:
    • evidence or argument establishing a fact or the truth of a statement (Oxford dictionaries)
    • the cogency of evidence that compels acceptance by the mind of a truth or a fact (Merriam-Webster)
    • sufficient evidence or an argument for the truth of a proposition (Wikipedia)

Now, how does one arrive at "proof", based on the above definitions? An obvious problem that immediately comes to mind is that, in many cases, "truths" or "facts" are relative and dependent on a number of factors. Don't believe me? Well, this is the last sentence in this blog entry. True at the time of writing, but false only seconds later when I kept going.

Also, if you want to pursue the scientific angle, I don't think anyone would take the result of a single experiment as any kind of proof of anything. You would need to repeat the experiment multiple times (and, ideally, have an independent peer group do the same) in order for it to gain any kind of credibility but therein lies a problem: the conditions would need to be exactly the same every time and that is virtually impossible to achieve in software testing. The date and time change, the amount of available CPU power, RAM and hard disk space varies, there might be network congestion or packet loss that you can not possibly predict, another application might suddenly misbehave and distort your test results or any number of other, unknown, factors that can affect the outcome of a test could manifest themselves.

It would seem to me that "proof" is much too strong a word to use in this context. Testing may suggest behavior that appears consistent, but it can no more prove that than a turkey, generously fed on a daily basis, can predict that on the 180th day of its life the farmer will come out with an axe instead of seeds and take the turkey's life instead of feeding it.

On with the definition:

  • "something" - Anything? Well, Mario did elaborate on this in the following paragraph so I'll just leave it at that.
  • "about a system" - Now there's another interesting word: "system". Various definitions to follow:
    • 1 a set of things working together as parts of a mechanism or an interconnecting network; a complex whole, or
    • 2 a set of principles or procedures according to which something is done; an organized scheme or method (Oxford dictionaries)
    • a group of devices or artificial objects or an organization forming a network especially for distributing something or serving a common purpose (Merriam-Webster)
    • a set of interacting or interdependent components forming an integrated whole or a set of elements (often called 'components') and relationships which are different from relationships of the set or its elements to other elements or sets (Wikipedia)

Complexity. On multiple levels. Especially when talking about computers and software. What if there is a fault in the hardware such that it exhibits certain behavior at one time but a different behavior at other times? Maybe a capacitor is nearing the end of its life and causes a test to give different results based on the ambient temperature of the environment in which the system resides. How many are prepared to, or even capable of, anticipating and accounting for something like that?

I'm not even trying to be funny here - my stereo amplifier does this and for exactly that reason!

Based on all of the above, I'm afraid I can only arrive at the conclusion that this particular definition of a test is fundamentally flawed (starting from the fact that the claim of what is being defined is unrelated to the actual definition presented) and, in my view, would warrant serious reconsideration and refinement.

8 Jun 2012

Standards and Best Practices in Testing

This is my response to two closely related texts written by James Christie. The first is James' article Do standards keep testers in the kindergarten? and the second is his post titled ISO 29119 the new international software testing standard - what about it? at the Software Testing Club forums. Since the texts are so closely related, I won't be commenting on them in any particular order or referring to the exact source of each quote in detail.

Let's begin with two quotes, one from each text:

"Even if the standard's creators envisage a set of useful guidelines, from which people can select according to their need, the reality is that other people will interpret them as mandatory statements of “best practice”."

and

"I like for words to mean something. If it isn't really best, let's not call it best."

At the risk of quoting a vapid cliché: as Voltaire wrote, "The best is the enemy of the good".

One of the problems here, as so well put by Markus Ahnve (@mahnve on Twitter) at Turku Agile Day 2012 in Finland, is that "'best practices' are only guaranteed mediocrity". In order for something to qualify as a best practice it must, necessarily, forfeit context due to the vast diversity of software projects out there. In my view, that on its own is grounds enough for it to lose relevance, and if it's not (fully) relevant in my context, how can it be a best practice for me? Either I'm missing out on something that matters, or I'm getting extra weight I really don't want or need.

Note that I'm intentionally linking 'best practices' with standards here since I really don't see much of a difference between the two. Both seem to me like one-size-fits-all sets of rules that tell me how I should do things without even asking what I'm actually doing.

As James hinted in the texts quoted above, standards, especially in software testing, would not be so much of a problem if people didn't take them - or expect them to be taken - as gospel. I believe the problem might, at least partially, boil down to the fact that it's easier to simplify things into vague generalizations than to try to account for all contexts (which would probably be impossible anyway).

People tend to be lazy so it should come as no surprise that there are those who would prefer to just resort to a standard rather than spend time thinking about what's best for the context at hand. With luck (from that person's perspective), this approach might even be seen as beneficial. People striving towards standards-compliance are obviously just being responsible and doing their very best to drive the project to optimal results, right?

Another potential problem with standards is, in my opinion, extremely well put in one of my favorite online comics: http://xkcd.com/927/

I don't know how to put it into words any better than that.

"Obviously the key difference is that beginners do need some kind of structural scaffold or support; but I think we often fail to acknowledge that the nature of that early support can seriously constrain the possibilities apparent to a beginner, and restrict their later development."

I completely agree with James here. Maybe it's a stupid analogy, but you don't strap toddlers into braces when they're trying to learn how to walk. You encourage them, you give them a taste of what walking is like by supporting them for a while and then leave them to their own devices to build up the muscles, balance and coordination required. You give them something to reach out for - quite literally - and let them work out a way to get there.

In the same spirit, I wouldn't want to restrain the learning of a beginning tester by dictating rules, requirements and restrictions. I share my own experiences, I point them to various authors, books, forums and blogs and let them work things out from there, giving advice when they ask for it or are obviously lost.

"the real problem is neither the testers who want to introduce standards to our profession, nor the standards themselves. The problem lies with the way that they are applied."

I wouldn't leave the people wanting to introduce standards to software testing out of the picture, since demand drives supply (or vice versa, if your marketing is good enough). The problem here, as I see it, is the way a lot of people just wait to be commanded and, when they receive recommendations, interpret them as absolutes, i.e. commandments, instead. I believe this is strongly cultural and psychological.

Unfortunately, in my experience, this seems to have nothing to do with a person's position in a company or in society in general. These people strive to memorize and follow the rules to the letter instead of trying to understand them and apply them in a way that fits the current context. Ever heard of anyone going "by the book"? Yeah, me too, and I find the number of such people disconcerting.

I'm going to stray from the subject for a bit but, since this is closely related to these interpretations, I think it's relevant and called for, so bear with me. I've personally been involved in a number of projects where the so-called agile project model has actually been waterfall from start to finish, for no other reason than certain key people's strict adherence to the "rules" of agile. When those people are in a position of authority, that by-the-book approach can wreak havoc all across the company while appearing beneficial at a quick glance. Being standards-compliant can never be a bad thing, right? CMMI level 5 or bust and all that.

I'll give you a hint: there will be no prince(ss) at the end of the dungeon but by the time you get there you will have created your own end-of-level boss to fight.

For me, the very first thing I ever learned about agile was: "adjust it to your needs". Incorporate the parts that serve your purpose and context and leave out the ones that are a hindrance. It's called agile for a reason so think for yourself because it's your context. The obvious problem here, of course, is the fact that you should have a solid grasp of the underlying ideas and principles or the advice is likely to backfire.

I do believe standards can be a good thing - even in software testing - if used constructively, as a support, instead of being taken as compulsory mandates that disallow any and all deviation from the norm. The problem here, as James mentioned, is the fact that people easily take the word "standard" as a synonym of "must".

Testing is a creative mental process closer to art than mechanical engineering. It most certainly isn't a repetitive conveyor belt performance as some people would like others to think (mostly out of pure ignorance, I'm guessing). If painting or composing were governed by strict standards the end results would probably be pretty darn boring and bland.

22 May 2012

Physics, Testing and Academia

The Introduction

I just read the Science Daily article: Quantum Physicists Show a Small Amount of Randomness Can Be Amplified Without Limit and, while reading, it occurred to me that quantum physics seems a bit like the context-driven approach to testing. Consider, for example, the following claims:

  • there is an element of uncertainty present at all times - we can not know all affecting variables
  • there is an element of randomness, beyond our control as observers, present in any system
  • it is unlikely we could ever have complete knowledge of the state of a system being observed

Sound familiar? Similarly, I find it relatively easy to (loosely) link classical physics with the more scholarly approaches to testing. Again, consider the following:

  • all effects are predictable and can be accurately represented in numbers and formulas
  • the state of a system being observed can be accurately measured at any point in time
  • standardization is valuable, and necessary

Exact, deterministic and absolute. Unfortunately, that leads me into the rant part.

The Rant

As I recently discussed with a PhD-in-physics friend of mine, the biggest problem I have with the way many people of science approach, well, anything, is that the approach is often so absolute and unswerving. "There is no 'maybe' or 'what if', the formula shows how it is. Who are you to question Newton or Einstein?" To me, this seems hierarchical, authoritative, stagnant, unimaginative, stubborn, cynical, even naïve.

When I was a kid I always thought the purpose of a scientist was to explore new possibilities and alternatives, not to dismiss them as irrelevant or stupid without a moment's thought if they disagreed with the leading views. I used to think that the purpose of a scientist was to constantly question everything in order to learn and to improve, not to meekly accept the Word of Truth(TM) handed down by higher authority figures. Unfortunately, based on approximately 20 years of observing people of academia, that does not seem to be the case very often.

I find it highly frustrating when a casual conversation first turns into an interesting debate and then rapidly degrades into almost religious raving when the opponent flatly refuses to even consider the possibility that there might be some level of uncertainty involved, or that the current facts of science on the subject might not be facts at all. Maybe they're just best guesses, made at our current level of understanding and limited by the sensitivity of the measuring equipment. 3000 years ago, scientists knew the Earth was flat. 100 years ago, scientists knew the speed of sound could not be exceeded. I don't think many people would share those notions now.

The Finale

What I'm trying to say here is that I, personally, would immensely enjoy seeing (more of) the people of the hard sciences slip into critical questioning mode a little more easily and just a tad more often. You don't lose face or professional credibility for stating "we might not have perfected this yet", or "I wonder if it's possible we've gotten this all wrong".

Done.

PS. Do note that I'm in no way belittling or disparaging natural sciences themselves. I wouldn't be here writing this rant without the practical applications and solutions that mathematics, chemistry and physics have brought along. I only criticize the overly fixed mindset of (some of) the people.

4 Mar 2012

Schools of (pro)testing

I have been discussing Cem Kaner's announcement and the separation between the founders of the context-driven school of testing with some people, most of whom proclaim themselves as context-driven (myself included). This post was inspired by Cem's response to the responses (confusing, eh?) he got for the original announcement.

I must say I like Cem's way of thinking here, as it appears to me to be humane and non-exclusive. I like that because it's really close to my own world view and way of thinking. Here are some comments on what Cem wrote:

"An analogy to religion often carries baggage: Divine sources of knowledge; Knowledge of The Truth; Public disagreement with The Truth is Heresy; An attitude that alternative views are irrelevant; An attitude that alternative views are morally wrong."

This is why I align myself rather closely with nontheistic Buddhist views and why the Dalai Lama is my idol. This is also something I wrote about in our discussions with the above-mentioned group of people. Here's what the Dalai Lama has to say about religions:

"I always believe that it is much better to have a variety of religions, a variety of philosophies, rather than one single religion or philosophy. This is necessary because of the different mental dispositions of each human being. Each religion has certain unique ideas or techniques, and learning about them can only enrich one's own faith."
--His Holiness the 14th Dalai Lama

As I wrote in our discussion, I don't see any difficulty applying this more generally even within a single school - be it religion, philosophy or software testing. As long as there are opposing views and we keep our own thinking critical - especially concerning our own views and opinions - they can only help us become better and stronger by, for example, teaching us how many different angles there can be to viewing the same concepts/thoughts/ideas/practices/whatever.

You can't credibly claim your favorite color is blue if you haven't experienced red, green and yellow as well.

This is exactly why I am curious about the people who genuinely, for example, consider ISTQB certification a good idea. I wish to learn about their motives, their way of thinking and their reasoning, rather than just outright disregard their views as stupid, ignorant or irrelevant. That would be arrogant, inhumane and unfair. I don't need to agree with a view to be able to acknowledge the value of enthusiasm and sincerity (even if unfounded or misguided). Note that I'm talking about the people, not the certification, of which I have less than favorable opinions.

As Cem wrote, controversy is a good thing. It's just that there are constructive ways of dealing with controversy and then there are destructive ways of dealing with it and probably any number of ways that are somewhere in between. The Buddhist approach, in my view, is constructive. The purpose is to have your own view and then refine it by learning from others while accepting their right to differing views and different paths to learning. This approach can help you uncover faults in your own thinking you might never have realized had you not been dealing with people whose views disagree with your own.

This, by the way, is perfectly in line with People's Assertive Rights (that I feel everyone should know about and would benefit from embracing) but I won't go deeper into that here.

"In my view, there are legitimate differences in the testing community. I think that each of the major factions in the testing community has some very smart people, of high integrity, who are worth paying attention to. I’ve learned a lot from people who would never associate themselves with context-driven testing."

I pretty much covered this above already but, as an addition, I would offer the analogy of the spectrum of colors:

Think of the spectrum as comprising the entire testing community, everyone in it. Now, think of that spectrum as being arbitrarily split into smaller sections, or "schools" of testing. Undoubtedly, there are people who would like to over-simplify things by assigning a single color to each of these sections. "This is the blue section", "this is the green section" and so on. Considering the generally acknowledged "schools" of testing, that would give us what, 5-7 different colors? I don't know about anyone else, but I think that would make a rather poor representation of the beauty of the spectrum (is that a straw man? I would like to think it's not, since I don't mean to attack anyone but, rather, just clarify my own views on the subject). Kind of like the shadow at the back of a cave as a representation of complex three-dimensional objects in Plato's Allegory of the Cave.

The reality is that even if the spectrum is arbitrarily split into smaller sections, the color slide of the spectrum does not stop within any one of those sections. What this means in practice is that people within a single school will still have different views even if they generally adhere to a similar way of thinking on a broad scale. Fuchsia is still red even if it isn't scarlet (though some people might argue that fuchsia is closer to blue and not completely unjustly so). There are those who would be considered analytical by the people in the context-driven school but context-driven in the analytical school.

My point here is that it would be unrealistic, naive even, to think that every proponent or representative of a specific school of testing would unilaterally agree on everything. In my view, it would be much wiser and better for the community as a whole to embrace the variety (and controversy) than try to force people into a single mold.

"One of the profound ideas in those courses was a rejection of the law of the excluded middle. According to that law, if A is a proposition, then A must be true or Not-A must be true (but not both). Some of the Indian texts rejected that. They demanded that the reader consider {A and Not-A} and {neither A nor Not-A}."

I love this. This, I believe, is exactly the kind of thinking that is referred to when talking about "thinking outside the box". I challenge, question and protest against any artificial boundaries and restrictions. Especially in software testing.

By the way, when I do that, the most typical response I get is: "nobody would ever do that!"

Sound familiar?

25 Dec 2011

Of Algorithms and Heuristics

It occurred to me, while reading James Bach's blog post Exploratory Testing is not "Experience-Based" Testing, that I wasn't 100% sure I understood the difference between an algorithm and a heuristic. I had a general idea but that wasn't enough; I wanted the gory details. So I decided to read up on them, and that sparked the idea to blog about the subject in order to expose my understanding of the two for critique, so that any potential misunderstandings can be corrected.

Do note that this post is not objective in the application of the two (in testing) and I'm not even trying to make it so.

Algorithms

Wikipedia defines an algorithm as:

In mathematics and computer science, an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Algorithms are used for calculation, data processing, and automated reasoning. In simple words an algorithm is a step-by-step procedure for calculations.

Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.

Immediately, in my mind, this doesn't chime very well with testing at all. Sure, algorithms sound like the perfect tool to use when making (automated) checks but for the sapient process of exploration and learning that is testing, the above definition just doesn't cut it. Below, I have listed some reasons why (I'm sure there are plenty more but these are just the ones that immediately occurred to me):

  1. An algorithm depends on a "finite list of well-defined instructions"
  2. An algorithm depends on the execution being deviation-free, as in, all successive states must be known beforehand and each identical run must produce identical results
  3. As a continuation of the previous point, an algorithm expects the ending state (and ending point!) to always be known, regardless of the inputs used, execution paths taken or external effects experienced
  4. Finally, "automated reasoning" does not sound like something I would want my product to depend on!

Somehow, all of the points above seem to match perfectly with the "traditional" test case approach used in so many (software) projects around the world. You know, the one where someone creates a set of test cases with exact steps to take, specific inputs to enter and detailed results to expect. The one where, after that, some unfortunate tester has to run through that set with military discipline while being denied the advantage of his or her testing experience, intuition, domain knowledge, imagination and creativity - among other beneficial qualities that might help produce good results.

Heuristics

I found Wikipedia's definition of a heuristic unsatisfactory; for example, it refers to "common sense", which I dislike since that is as meaningless a term as "best practice". So here's Merriam-Webster's (better, in my opinion) definition instead:

: involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods <heuristic techniques> <a heuristic assumption>; also : of or relating to exploratory problem-solving techniques that utilize self-educating techniques (as the evaluation of feedback) to improve performance <a heuristic computer program>

To me, this sounds like a much better match with testing (by which I still mean the sapient process of exploration and learning). As the definition states:

  1. Heuristics are "an aid"
  2. Heuristics are trial-and-error, i.e. fallible, methods for finding (satisfactory) solutions
  3. Heuristics can be used as an aid for self-education

All three points above seem to me to reinforce and support good testing and the effort to become a better tester. One crucial reason, in my opinion, is that the definition of a heuristic embraces the very uncertainty that the definition of an algorithm does its best to remove.

I find it highly unlikely that we could ever know all the affecting factors at play when testing, so I dare claim that uncertainty is built into the very essence of all testing. An attempt to remove it seems to me like an attempt to lull oneself into a false sense of security and a dangerous, unfounded confidence that should not be there.

As Michael Bolton said during the first Rapid Software Testing course in Helsinki: "We testers are in the (false) confidence-demolition business". Hear hear.

3 Jun 2011

ParkCalc done differently

The third testing dojo was organized on Tuesday, May 31st in Helsinki. It was also the last testing dojo before summer break. We will be back in August!

In the earlier testing dojos the format was really simple: pairwise exploratory testing of a selected application with short debriefings after each 5-minute session, a break in the middle and then more testing before wrap-up and free conversation. Based on the feedback received, some people found this boring, and I can easily see why, since people only got to participate actively for a total of 10 minutes (5 minutes recording and 5 minutes testing) during the entire 3-hour dojo; the rest of the time they were only allowed to observe.

While I do like that format as well, I felt it needed something more to keep the event interesting and engaging. So, this time we did things in a completely different manner.

Changes in the programme

I had a couple of ideas before the dojo and only decided on the approach we were going to take a mere 5 minutes before the session started. Here's what we did in the third testing dojo:

Participants were divided into 3 groups of 3-4 people each. Then the application to be tested was introduced (the infamous Gerald R. Ford International Airport parking calculator aka ParkCalc). The mission was: "You have been told to test this application. Each group will have about 15 minutes for the testing so you know you will not have enough time to cover everything. Come up with ideas on how you feel you could best test the application within the given time".

After 15 minutes of discussion (and after emphasizing that people are free to collaborate between groups as well) the groups were told to select the 3 ideas they saw as the most important ones and write them on a piece of paper. Since I am a mean, mean person, there was a twist at this point: I collected the papers, shuffled them and then gave them back so that no group would be testing their own top-3 ideas.

At this point we had a break, because I wanted to kind of silently encourage people to use all the time available for collaboration. Plus, I had told the participants the same thing Michael Bolton told us during the Rapid Software Testing course I had attended a week before: it's okay to cheat. In this context: if you have limited time for preparations, it's okay to take advantage of any time that you have together - and I'm happy to say people did that, too!

After the break, each group had 5 minutes to ask clarifying questions of the group that had prepared the ideas, in order to gain a common understanding of what was wanted and to be able to test those ideas effectively. Then we started the actual testing. We had a brief discussion on how the groups would like to perform the test execution, and it was decided that one person from each group sits at the computer while the other members feed him/her ideas and suggestions on what to test next. At this point we were a little over halfway through the time available for the dojo, and I was very happy that the time had been spent on continuous collaboration within and between the groups.

Each group had about 15 minutes for test execution, constantly telling other participants what it was they were doing and why. After each session the group that had prepared the ideas got to assess the work done and decide if their requirements had been met or not.

After the test execution rounds and debriefings it was time for sauna and free conversation.

Feedback

At the end of the dojo, I asked people to give feedback about the new format and the dojo in general and, as promised, here are the comments received (exactly as written):

- "A (really) short introduction on the purpose of exploratory testing in the beginning would be nice" (We had 4 new people participating in the dojo and I was so concentrated on refining the last details of the new format in my mind that I completely forgot this. Sorry, my bad!)
- "The group was diverse and brought / came with intresting ideas"
- "I don't know if there should have been more moderation for some of the discussions... I'm knew to this format"
- "3 topics in 15 mins means when you find something - need to move on"
- "Planning was good, many ideas"
- "+ Group planning"
- "+ Testing other group's plan"
- "+ Open discussion"
- "Idea connected to lightning talks: have someone give a presentation on some techniques and then try those"
- "Planning in the groups was nice"
- "Give little different focus for each group so that they can have more "fresh" look at the program"
- "Nice session, format worked quite well. Liked the idea of "management team""
- "A less obvious target would've been more interesting"
- "Sandwiches, even for vegetarians"
- "The welcome was a bit "cold", the person opening the door didn't even know about the dojo" (Again, my bad, I should have been at the door earlier myself. Sorry!)
- "Maybe a little more "intimate" time with the software would be good"
- "Bad design leads to people designing, not testing"
- "More focus on methods, easy to start chasing results"
- "Simple example helps keep focus"
- "Hands need to get more dirty"
- "Planning too long w/o access to spec or the software. Less time on planning & two rounds of testing (fb taken into account) would be nice"
- "Prioritizing bugs for their relevance would have been a fun exercise" (We did this in a previous dojo - need to incorporate it to this new format too!)
- "No focus on debriefing / reporting - should practice that"
- "Fixed ideas for third group gets a bit boring..."
- "More active session than previous times"
- "Deviding people in groups makes sense"
- "More communication & collaboration"
- "Better ideas generation etc..."
- "While a group is "performing" other groups may be bored / not follow (maybe it would make sense to keep them "actively" busy)"
- "Continue being proactive and organize those! Thats awesome!"
- "Discussion format(?) Result of the exercise? What was the conclusion? What did overlap / wasn't done? How do we are more efficient next time?"
- "More examples. Examples sell better than reason and to sell is what exp. testing needs"
- "Introduction?"
- "Less arguing, less battle of egos, more possibilities for creative thinking (is there really right or wrong here?)"
- "More "important" sw to test. Now people were thinking less of their testing, not yielding everything"
- "+ Groups nice change; less chaotic."
- "Too simple app?"
- ""Management" feedback idea needs improvement."
- "Good: Short missions = structure of dojo (ideas, prioritizing, questions etc.)"
- "Good: Discussion (with whole group & small team)"
- "Missing: Not everybody could do testing"
- "Perhaps reserve time for "best practices discussion""

It was great getting this much feedback. It will all be taken into account when planning the next testing dojo!

13 May 2011

Tea-time with Testers

New software testing ezine

The fourth issue of a new software testing ezine called "Tea-time with Testers" was released earlier this week. It is an interesting ezine - especially in that only four issues have been published so far, yet it has just about the strongest possible support from the top of the testing world. Earlier issues have articles from the likes of Jerry Weinberg, James Bach, Markus Gärtner and so many other great testers from around the world that it is just incredible. In my opinion, that is quite an achievement for such a fresh endeavor. Plus, I feel truly honored to have been asked to participate.

My article is about test cases in agile. I chose that topic because of one particularly active LinkedIn discussion that went on for almost a year. I had considered blogging about my thoughts on the discussion but, when Lalitkumar Bhamare (one of the two founders and editors of the ezine) asked me if I would be interested in writing an article for the May issue of Tea-time with Testers, I happily accepted and decided to share my thoughts in the article instead.

Here is a link to the fourth issue of "Tea-time with Testers". My article starts on page 32.

Self-criticism

I am very critical of what I write in situations like this. It is my first publication in a serious testing ezine, after all. Still, almost immediately after submitting the article to Lalitkumar, I noticed a need for improvements, specifically in my description of the way I prefer to write test cases. My (poor) excuse is that I finished the article at around 4am the night before it had to be submitted, so I was no longer thinking straight at that point. Of course, to be fair towards myself, it was the best I could come up with in such a short time.

Anyway, in addition to what I wrote in the article I would like to add the following:

I call the method iterative because I am basically doing the exact same thing the developers do but with test cases. I only write down ideas for those tests that I feel will be needed during that iteration. I only elaborate on the test cases that absolutely need more information and, like I mentioned earlier, I only write just enough to make the tests useful. I do not wish to hinder any tester's intuition or will to explore by setting up unnecessary rules and limitations. Later on, I will refactor the test cases based on current needs if and when requirements change.

This not only saves me time and effort but also makes the test cases easier to maintain. Plus, should a new tester join the project later on, it is easier for me to rapidly convey the essence of the test ideas to that person without bogging down his/her learning of the system with minor details that have no relevance at that point. Add continuous, transparent communication on top of that, which helps prevent misunderstandings, and I am left wondering why I did not realize years earlier that things could be done this way.

By the way, my favorite tea is camomile. I lllove it!

29 Mar 2011

Why it is good to cut some slack

Lazy, lazy

Little over a year ago, I had a slow day at work. There was not much to do and even the few tasks I had were not of high priority. I was a bit bored since it was too early to go home and I didn't feel comfortable playing some game or browsing Facebook during working hours. You know, impressions. So I decided to start browsing LinkedIn discussion groups for some questions I would be able to answer or some new ideas or techniques I could use in my work.

I had never really been very active on LinkedIn before, apart from updating my profile every once in a while, but this time something clicked. I started writing actively in various discussion groups related to testing, test management, agile and so on. Some people agreed with my views, some disagreed, some asked for more information and were curious to know more about the way I see things. So I kept participating.

Turning point

After a couple of months of active writing, I came across a discussion thread about reducing the cost of software testing. It had been going on for a good while and was gathering a lot of attention and participants. There was plenty of very good advice on how companies could cut down on the costs of software testing without compromising the quality of their product. I felt compelled to share some points and views from my own working experience, plus there were some points I disagreed with and wanted to counter-argue.

A few weeks later I was approached by Govind Kulkarni via email, asking if I was interested in participating in a book project meant to bring together a number of testers from around the world to share their views on reducing the costs of software testing. Naturally, I was interested - a perfect opportunity, who wouldn't be? So I said yes please, count me in. This resulted in an avalanche of events.

I got in contact with people like Matthew Heusser and Markus Gärtner, heard about the Miagi-do School of Software Testing, got accepted as a brown-belt student and was invited to STPCon as a speaker about the book. I was also invited to the Rebel Alliance, an informal group of top-notch testers from around the world sharing ideas, helping other people learn about testing and just generally having fun together with like-minded people at conferences and other professional gatherings.

On top of all that, I got in contact with Tiina Kiuru when she organized the first testing dojo in Helsinki and got really excited about it. I had discussed testing dojos with Markus Gärtner, and I was a bit sorry I hadn't been able to participate in any Weekend Testing sessions, which are somewhat similar. I absolutely loved the idea, so when I heard about the dojo I signed up immediately. This resulted in the two of us starting to work together to organize more testing dojos in Helsinki (stay tuned, we have some big plans regarding this for the near future).

...Fast-forward 6 months...

After many way too long days at the office (a big Thank You to Uoma Oy for allowing me to work on my book chapter during office hours) and getting out of bed in the middle of the night to add or edit a sentence I just had to have there, the book chapter went through a couple of review rounds and was finally finished and sent to the publisher.

As icing on the cake, I received an email today stating that I have been selected as a candidate for Finnish Tester of the Year 2011 (page only available in Finnish).

All of this because I had a slow day at the office about a year ago and decided to spend that 1 free hour doing something personally interesting. So the message is simple: Cut yourself some slack - it's not just refreshing, it could be more rewarding than you can imagine!


Compulsory shameless promotion section: The book release date will be September 7th, 2011. More information can be found on the publisher's website: How to Reduce the Cost of Software Testing.

11 Nov 2010

Testing Dojo, Helsinki, Nov 11th

UPDATE Dec 9th

Unfortunately, I had to reschedule the testing dojo due to a work situation. Updated info, with a link to registration, below:

The next testing dojo will be organized on Wednesday, December 15th, at 18:00-21:00 (with time for sauna and free conversation afterwards) in Uoma premises, as mentioned in the original post. For more information and registration, see: Testing Dojo II.

Introductions

Last night, I participated in my first-ever testing dojo, right here in Helsinki, Finland. The event was organized by Tiina Kiuru in the Reaktor premises. We had a total of 12 participants in addition to Tiina, with positions ranging from developer through tester to manager in a multitude of different kinds of companies. I thought this was a very promising start, since the variety of people and positions was guaranteed to offer a wide variety of perspectives and approaches to testing. We even had some people in the dojo saying that they had never really done any kind of testing but were curious to learn more. Hear hear, congratulations to those people for participating!

Testing, testing

The purpose of the dojo was to do exploratory testing in pairs on a specific application (Pixlr Editor, in this case) in a tightly time-boxed manner (5 minutes per tester). The testing was performed with one person doing the actual testing and another writing down notes on the findings, what had been tested and what could be tested further in the future. The person doing the testing was also responsible for walking the audience through what (s)he was testing and why. At first, people in the audience made suggestions on what to test, but it was then agreed that the audience should only ask questions and not interfere otherwise, unless the tester pair got stuck on something or ran out of ideas on what to test next.

We had simple, varying missions to perform like:
- open a specific picture and try to make it look like a printed target picture
- test the filters, see if they behave consistently and if they could be abused somehow
- test the history, see how many actions it will record and if the actions are recorded consistently

After each 5-minute testing session we had a short debriefing on what was tested and what we learned which, in some cases, gave us good ideas on what the next person could start their testing with.

Recap

At the end of the 3-hour dojo we were handed colored post-it notes to write down what was bad, what was good, what we learned and what could be improved in the future. Personally, I was absolutely delighted to see people so enthusiastic about the concept, and it showed in the notes as well; there were a lot of good comments on what people learned and how things could be improved. The open conversation and sauna afterwards were really nice, too!

As this was the first time for a lot of people (myself included) performing this kind of testing exercise, there was a bit of disorder in the beginning, but in my opinion we got things under control pretty nicely (like forbidding the audience from interfering, which many participants had found distracting) and I'm positive we'll do much better in the next testing dojo.

What's next?

Speaking of the next time, Tiina and I agreed that I will organize the next testing dojo with her help, and it will be held in the Uoma premises in Punavuori (Merimiehenkatu 36D, 3rd floor, to be exact). The date for the next dojo is still open, but I would expect it to be sometime mid-December or so. Tiina and I will discuss the exact details next week and I will post an update to the blog once we've made some decisions.