Pro-testing Against Bad Software Quality

6 Apr 2013

An Object Is What the Object Does – Or Is It?

It's been a while since my last blog post and earlier today I came across a post that started a thought process I just had to write down. This post is based on the blog entry titled “Testing vs Checking”: ‘Tomato’ vs ‘Tomato’ by Mario Gonzalez of Ortask.

As a quick recap, the gist of Mario's blog post is to criticize the differentiation that James Bach, Michael Bolton and, generally, the people aligned with the context-driven approach to testing make between the terms "testing" and "checking". He basically goes on to claim that the whole differentiation is a moot point, or even potentially harmful to the testing community as a whole and to its future (in terms of adding confusion to a field already swamped with definitions). Simplified, but that is my interpretation of the text.

[UPDATE April 8th]: As pointed out by @JariLaakso on Twitter, not all context-driven testers necessarily make the distinction between testing and checking, so I can't really say "generally, the people aligned with..." as I am not aware of how many people actually do. I may simply have communicated with people who do make the distinction, so rephrase that one sentence in your mind to your liking. [END UPDATE]

However, I am not going to go any deeper into that or any of the other problems I perceived while reading through the post, at least not for now - I'm sure other people will. Instead, I will concentrate on the one thing that bothered me the most at the time of reading it: Mario's definition of a test. So I decided to dissect it. Here is the definition he offered:

a test attempts to prove something about a system

First of all, that is not even a definition of a test - it is a definition of the purpose of a test. Let me clarify with an analogy. Consider this for a definition of a car:

A car is used to transport people and goods from one place to another

Now, based on that "definition" alone, try to answer the following questions:

  1. What does a car look like?
  2. What principles and mechanisms does a car operate on?
  3. Under what conditions and circumstances can a car be used to transport people and goods from one place to another?

You can't, can you? That's because I haven't given you a definition of a car - only what it's typically used for. In other words:

Defining what an object does does not define what the object is.

While still lacking and incomplete, a definition of a car could be something like: "A car is a typically four-wheeled, land-based vehicle that usually operates on the principle of an internal combustion engine turning the energy contained within a liquid fuel into mechanical movement through a series of controlled explosions, the pressure of which causes a crankshaft to rotate and apply torque to the vehicle's drive wheels".

While the non sequitur I pointed out above would be reason enough to stop going any further, I want to go through the definition (of the purpose of a test) for the sake of completeness:

  • "A test" - Now what is that? In this context the question is impossible to answer - Mario hasn't told us!
  • "attempts" - How does "a test" attempt anything? It's not a sentient being. It would seem to me it is the tester who is the one to make the attempt through performing a test, making observations, analyzing and interpreting data, behavior and results before, during and after performing a test.
  • "to prove" - What, exactly, constitutes proof? Here are some definitions of the term:
    • evidence or argument establishing a fact or the truth of a statement (Oxford dictionaries)
    • the cogency of evidence that compels acceptance by the mind of a truth or a fact (Merriam-Webster)
    • sufficient evidence or an argument for the truth of a proposition (Wikipedia)

Now, how does one arrive at "proof", based on the above definitions? An obvious problem that immediately comes to mind is that, in many cases, "truths" or "facts" are relative and dependent on a number of factors. Don't believe me? Well, this is the last sentence in this blog entry. True at the time of writing, but false only seconds later when I kept going.

Also, if you want to pursue the scientific angle, I don't think anyone would take the result of a single experiment as any kind of proof of anything. You would need to repeat the experiment multiple times (and, ideally, have an independent peer group do the same) in order for it to gain any kind of credibility, but therein lies a problem: the conditions would need to be exactly the same every time, and that is virtually impossible to achieve in software testing. The date and time change; the amount of available CPU power, RAM and hard disk space varies; there might be network congestion or packet loss that you cannot possibly predict; another application might suddenly misbehave and distort your test results; or any number of other, unknown factors that can affect the outcome of a test could manifest themselves.
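To make that concrete, here is a minimal Python sketch (entirely hypothetical - the operation and the threshold are made up for illustration) of a check whose verdict depends as much on the environment as on the code under test:

    import time

    def check_response_time(operation, limit_seconds=0.5):
        """Fail the check if the operation takes longer than the limit.

        The verdict depends on CPU load, caching, background processes
        and other environmental factors, so two "identical" runs of the
        same check against the same code can disagree.
        """
        start = time.perf_counter()
        operation()
        elapsed = time.perf_counter() - start
        return elapsed <= limit_seconds

    def operation_under_test():
        # Stand-in for whatever the system under test actually does.
        return sum(range(1_000_000))

    # Repeating the "experiment" five times can yield mixed verdicts on a
    # loaded machine, even though nothing in the code itself has changed.
    print([check_response_time(operation_under_test) for _ in range(5)])

Nothing in that check is wrong as such; the point is that its repeated runs are never the "same experiment" in any scientifically rigorous sense.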

It would seem to me that "proof" is much too strong a word to use in this context. Testing may suggest behavior that appears consistent, but it can no more prove that consistency than a turkey, generously fed on a daily basis, can take its feeding history as proof that on the 180th day of its life the farmer won't come out with an axe instead of seeds and take the turkey's life instead of feeding it.

On with the definition:

  • "something" - Anything? Well, Mario did elaborate on this in the following paragraph so I'll just leave it at that.
  • "about a system" - Now there's another interesting word: "system". Various definitions to follow:
    • 1 a set of things working together as parts of a mechanism or an interconnecting network; a complex whole, or
    • 2 a set of principles or procedures according to which something is done; an organized scheme or method (Oxford dictionaries)
    • a group of devices or artificial objects or an organization forming a network especially for distributing something or serving a common purpose (Merriam-Webster)
    • a set of interacting or interdependent components forming an integrated whole or a set of elements (often called 'components') and relationships which are different from relationships of the set or its elements to other elements or sets (Wikipedia)

Complexity. On multiple levels. Especially when talking about computers and software. What if there is a fault in the hardware such that it exhibits certain behavior at one time but a different behavior at other times? Maybe a capacitor is nearing the end of its life and causes a test to give different results based on the ambient temperature of the environment in which the system resides. How many are prepared to, or even capable of, anticipating and accounting for something like that?

I'm not even trying to be funny here - my stereo amplifier does this and for exactly that reason!

Based on all of the above, I'm afraid I can only arrive at the conclusion that this particular definition of a test is fundamentally flawed (starting from the fact that what it claims to define is unrelated to the definition actually presented) and, in my view, warrants serious reconsideration and refining.

8 Jun 2012

Standards and Best Practices in Testing

This is my response to two closely related texts written by James Christie. The first one is James' article Do standards keep testers in the kindergarten? and the second one is James' post titled ISO 29119 the new international software testing standard - what about it? at the Software Testing Club forums. Due to the texts being so closely related, I won't be commenting on them in any particular order or referring to the exact source of each quote in detail.

Let's begin with two quotes, one from each text:

"Even if the standard's creators envisage a set of useful guidelines, from which people can select according to their need, the reality is that other people will interpret them as mandatory statements of “best practice”."

and

"I like for words to mean something. If it isn't really best, let's not call it best."

At the risk of repeating a vapid cliché, Voltaire wrote: "The best is the enemy of the good".

One of the problems here, as so well put by Markus Ahnve (@mahnve on Twitter) at Turku Agile Day 2012 in Finland, is that "'best practices' are only guaranteed mediocrity". In order for something to qualify as a best practice it must, necessarily, forfeit context due to the vast diversity of software projects out there. In my view, that, on its own, is grounds enough for it to lose relevance and if it's not (fully) relevant in my context how can it be best practice for me? Either I'm missing out on something that matters, or I'm getting extra weight I really don't want or need.

Note that I'm intentionally linking 'best practices' with standards here, since I really don't see much of a difference between the two. Both seem to me like one-size-fits-all sets of rules that tell me how I should do things without even asking what I'm actually doing.

As James hinted in the texts quoted above, standards, especially in software testing, would not be so much of a problem if people didn't take them - or expect them to be taken - as gospel. I believe the problem might, at least partially, boil down to the fact that it's easier to simplify things into vague generalizations than to try to account for all contexts (which would probably be impossible anyway).

People tend to be lazy so it should come as no surprise that there are those who would prefer to just resort to a standard rather than spend time thinking about what's best for the context at hand. With luck (from that person's perspective), this approach might even be seen as beneficial. People striving towards standards-compliance are obviously just being responsible and doing their very best to drive the project to optimal results, right?

Another potential problem with standards is, in my opinion, extremely well put in one of my favorite online comics: http://xkcd.com/927/

I don't know how to put it better than that.

"Obviously the key difference is that beginners do need some kind of structural scaffold or support; but I think we often fail to acknowledge that the nature of that early support can seriously constrain the possibilities apparent to a beginner, and restrict their later development."

I completely agree with James here. Maybe it's a stupid analogy, but you don't buckle toddlers into braces when they're trying to learn how to walk. You encourage them, you give them a taste of what walking is like by supporting them for a while, and then you leave them to their own devices to build up the muscles, balance and coordination required. You give them something to reach out for - quite literally - and let them work out a way to get there.

In the same spirit, I wouldn't want to restrain the learning of a beginning tester by dictating rules, requirements and restrictions. I share my own experiences, I point them to various authors, books, forums and blogs and let them work things out from there, giving advice when they ask for it or are obviously lost.

"the real problem is neither the testers who want to introduce standards to our profession, nor the standards themselves. The problem lies with the way that they are applied."

I wouldn't leave the people wanting to introduce standards to software testing out of the picture, since demand drives supply (or vice versa, if your marketing is good enough). The problem here, as I see it, is the way a lot of people just wait to be commanded, and when they receive recommendations they interpret them as absolutes, i.e. commandments, instead. I believe this is strongly cultural and psychological.

Unfortunately, in my experience, this seems to have nothing to do with a person's position in a company or in society in general. These people strive to memorize and follow the rules to the letter instead of trying to understand and apply them in a way that fits the current context. Ever heard of anyone going "by the book"? Yeah, me too, and I find the number of such people disconcerting.

I'm going to stray from the subject for a bit but, since this is closely related to the question of interpretation, I think it's relevant and called for, so bear with me. I've personally been involved in a number of projects where the so-called agile project model has actually been waterfall from start to finish, for no other reason than certain key people's strict adherence to the "rules" of agile. When those people are in a position of authority, that by-the-book approach can wreak havoc all across the company while appearing beneficial at a quick glance. Being standards-compliant can never be a bad thing, right? CMMI level 5 or bust and all that.

I'll give you a hint: there will be no prince(ss) at the end of the dungeon but by the time you get there you will have created your own end-of-level boss to fight.

For me, the very first thing I ever learned about agile was: "adjust it to your needs". Incorporate the parts that serve your purpose and context and leave out the ones that are a hindrance. It's called agile for a reason so think for yourself because it's your context. The obvious problem here, of course, is the fact that you should have a solid grasp of the underlying ideas and principles or the advice is likely to backfire.

I do believe standards can be a good thing - even in software testing - if used constructively, as a support, instead of being taken as compulsory mandates that disallow any and all deviation from the norm. The problem here, as James mentioned, is the fact that people easily take the word "standard" as a synonym of "must".

Testing is a creative mental process closer to art than mechanical engineering. It most certainly isn't a repetitive conveyor belt performance as some people would like others to think (mostly out of pure ignorance, I'm guessing). If painting or composing were governed by strict standards the end results would probably be pretty darn boring and bland.

4 Mar 2012

Schools of (pro)testing

I have been discussing Cem Kaner's announcement and the separation between the founders of the context-driven school of testing with some people, most of whom proclaim themselves as context-driven (myself included). This post was inspired by Cem's response to the responses (confusing, eh?) he got for the original announcement.

I must say I like Cem's way of thinking here, as it appears to me to be humane and non-exclusive. I like that because it's really close to my own world view and way of thinking. Here are some comments on what Cem wrote:

"An analogy to religion often carries baggage: Divine sources of knowledge; Knowledge of The Truth; Public disagreement with The Truth is Heresy; An attitude that alternative views are irrelevant; An attitude that alternative views are morally wrong."

This is why I align myself rather closely with nontheistic Buddhist views and why the Dalai Lama is my idol. This is also something I wrote about in our discussions with the above-mentioned group of people. Here's what the Dalai Lama has to say about religions:

"I always believe that it is much better to have a variety of religions, a variety of philosophies, rather than one single religion or philosophy. This is necessary because of the different mental dispositions of each human being. Each religion has certain unique ideas or techniques, and learning about them can only enrich one's own faith."
--His Holiness the 14th Dalai Lama

As I wrote in our discussion, I don't see any difficulty applying this more generally even within a single school - be it religion, philosophy or software testing. As long as there are opposing views and we keep our own thinking critical - especially concerning our own views and opinions - they can only help us become better and stronger by, for example, teaching us how many different angles there can be to viewing the same concepts/thoughts/ideas/practices/whatever.

You can't credibly claim your favorite color is blue if you haven't experienced red, green and yellow as well.

This is exactly why I am curious about the people who genuinely, for example, consider ISTQB certification a good idea. I wish to learn about their motives, their way of thinking and their reasoning, rather than just outright disregard their views as stupid, ignorant or irrelevant. That would be arrogant, inhumane and unfair. I don't need to agree with a view to be able to acknowledge the value of enthusiasm and sincerity (even if unfounded or misguided). Note that I'm talking about the people, not the certification, of which I have less than favorable opinions.

As Cem wrote, controversy is a good thing. It's just that there are constructive ways of dealing with controversy and then there are destructive ways of dealing with it and probably any number of ways that are somewhere in between. The Buddhist approach, in my view, is constructive. The purpose is to have your own view and then refine it by learning from others while accepting their right to differing views and different paths to learning. This approach can help you uncover faults in your own thinking you might never have realized had you not been dealing with people whose views disagree with your own.

This, by the way, is perfectly in line with People's Assertive Rights (that I feel everyone should know about and would benefit from embracing) but I won't go deeper into that here.

"In my view, there are legitimate differences in the testing community. I think that each of the major factions in the testing community has some very smart people, of high integrity, who are worth paying attention to. I’ve learned a lot from people who would never associate themselves with context-driven testing."

I pretty much already covered this above but as an addition I would take the analogy of the spectrum of colors:

Think of the spectrum as comprising the entire testing community, everyone in it. Now, think of that spectrum as being arbitrarily split into smaller sections, or "schools" of testing. Undoubtedly, there are people who would like to over-simplify things by assigning a single color to each of these sections. "This is the blue section", "this is the green section" and so on. Considering the generally acknowledged "schools" of testing that would give us what, 5-7 different colors? I don't know about anyone else but I think that would make a rather poor representation of the beauty of the spectrum (is that a straw man? I would like to think it's not since I don't mean to attack anyone but, rather, just clarify my own views on the subject). Kind of like the shadow at the back of a cave as a representation of complex three-dimensional objects in Plato's Allegory of the Cave.

The reality is that even if the spectrum is arbitrarily split into smaller sections, the color gradient of the spectrum does not stop within any one of those sections. What this means in practice is that people within a single school will still have different views, even if they generally adhere to a similar way of thinking on a broad scale. Fuchsia is still red even if it isn't scarlet (though some people might argue that fuchsia is closer to blue, and not completely unjustly so). There are those who would be considered analytical by the people in the context-driven school but context-driven in the analytical school.

My point here is that it would be unrealistic, naive even, to think that every proponent or representative of a specific school of testing would unilaterally agree on everything. In my view, it would be much wiser and better for the community as a whole to embrace the variety (and controversy) than try to force people into a single mold.

"One of the profound ideas in those courses was a rejection of the law of the excluded middle. According to that law, if A is a proposition, then A must be true or Not-A must be true (but not both). Some of the Indian texts rejected that. They demanded that the reader consider {A and Not-A} and {neither A nor Not-A}."

I love this. This, I believe, is exactly the kind of thinking that is referred to when talking about "thinking outside the box". I challenge, question and protest against any artificial boundaries and restrictions. Especially in software testing.
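To sketch what that rejection looks like on paper (my own LaTeX rendering, not a quote from the texts Cem mentions): classical two-valued logic admits only the first two positions below, while the four-cornered logic of those Indian texts demands that all four be considered:

    \begin{align*}
    &1.\quad A && \text{(A is true)}\\
    &2.\quad \lnot A && \text{(A is false)}\\
    &3.\quad A \land \lnot A && \text{(both A and not-A)}\\
    &4.\quad \lnot A \land \lnot(\lnot A) && \text{(neither A nor not-A)}
    \end{align*}

Positions 3 and 4 are, of course, nonsense within the classical system - which is exactly the box being thought outside of.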

By the way, when I do that, the most typical response I get is: "nobody would ever do that!"

Sound familiar?

25 Dec 2011

Of Algorithms and Heuristics

It occurred to me, while reading James Bach's blog post Exploratory Testing is not “Experienced-Based” Testing, that I wasn't 100% sure I understood the difference between an algorithm and a heuristic. I did have a general idea but that wasn't enough; I wanted the gory details. So I decided to read up on them and that sparked the idea to blog about the subject in order to expose my understanding of the two for critique so that any potential misunderstandings can be corrected.

Do note that this post is not objective in the application of the two (in testing) and I'm not even trying to make it so.

Algorithms

Wikipedia defines an algorithm as:

In mathematics and computer science, an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function. Algorithms are used for calculation, data processing, and automated reasoning. In simple words an algorithm is a step-by-step procedure for calculations.

Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, will proceed through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.

Immediately, in my mind, this doesn't chime very well with testing at all. Sure, algorithms sound like the perfect tool to use when making (automated) checks but for the sapient process of exploration and learning that is testing, the above definition just doesn't cut it. Below, I have listed some reasons why (I'm sure there are plenty more but these are just the ones that immediately occurred to me):

  1. An algorithm depends on a "finite list of well-defined instructions"
  2. An algorithm depends on the execution being deviation-free, as in, all successive states must be known beforehand and each identical run must produce identical results
  3. As a continuation to the previous point, an algorithm expects the ending state (and ending point!) to always be known regardless of inputs used, execution paths taken or external effects experienced
  4. Finally, "automated reasoning" does not sound like something I would want my product to depend on!

Somehow, all of the points above seem to match perfectly with the "traditional" test case approach used in so many (software) projects around the world. You know, the one where someone creates a set of test cases with exact steps to take, specific inputs to enter and detailed results to expect. The one where, after that, some unfortunate tester has to run through that set with military discipline while being denied the advantage of his or her testing experience, intuition, domain knowledge, imagination and creativity - among other beneficial qualities that might help produce good results.
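To illustrate (a hypothetical sketch of my own - the "app" interface and its methods are made up, not anyone's actual framework), such a scripted check really is an algorithm in the sense defined above:

    class FakeApp:
        """Hypothetical stand-in for some UI-automation interface."""
        def __init__(self):
            self.page = "/login"
        def open_page(self, url):
            self.page = url
        def type_text(self, field, value):
            pass  # a real implementation would fill in the field
        def click(self, button):
            self.page = "/home"  # pretend the login always succeeds
        def current_page(self):
            return self.page

    def check_login(app):
        # A finite list of well-defined instructions...
        app.open_page("/login")                # step 1: exact action
        app.type_text("username", "testuser")  # step 2: specific input
        app.type_text("password", "s3cret")    # step 3: specific input
        app.click("submit")                    # step 4: exact action
        # ...terminating at a known final state with a binary output.
        return app.current_page() == "/home"

    print(check_login(FakeApp()))  # True - identically so on every run

Useful as a check, certainly, but note how every step, every state and the ending point must be known in advance; there is no room in it for observation, deviation or learning.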

Heuristics

I found Wikipedia's definition of a heuristic to be unsatisfactory as, for example, it refers to "common sense", which I dislike since it is as meaningless a term as "best practice". So here's Merriam-Webster's (better, in my opinion) definition instead:

: involving or serving as an aid to learning, discovery, or problem-solving by experimental and especially trial-and-error methods <heuristic techniques> <a heuristic assumption>; also : of or relating to exploratory problem-solving techniques that utilize self-educating techniques (as the evaluation of feedback) to improve performance <a heuristic computer program>

To me, this sounds like a much better match with testing (by which I still mean the sapient process of exploration and learning). As the definition states:

  1. Heuristics are "an aid"
  2. Heuristics are trial-and-error, i.e. fallible, methods for finding (satisfactory) solutions
  3. Heuristics can be used as an aid for self-education

All three points above seem to me to reinforce and support good testing and the effort to become a better tester. One crucial reason why, in my opinion, is that the definition of a heuristic embraces the very uncertainty that the definition of an algorithm does its best to take away.
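By contrast with the scripted check above, here is a minimal sketch of a heuristic at work (again hypothetical - a plain random-variation hill climb, just one heuristic among many): it serves as an aid to a search, educates itself through feedback, and is openly fallible in that it may settle on an answer that is merely good enough.

    import random

    def hill_climb(score, start, steps=1000):
        """Trial and error guided by feedback; no guaranteed outcome."""
        best, best_score = start, score(start)
        for _ in range(steps):
            trial = best + random.uniform(-1.0, 1.0)  # try a variation
            trial_score = score(trial)
            if trial_score > best_score:              # learn from feedback
                best, best_score = trial, trial_score
        return best

    # Usually lands near the true maximum at x = 3.0 - but not provably,
    # and two runs may well return (slightly) different answers.
    print(hill_climb(lambda x: -(x - 3.0) ** 2, start=0.0))

The uncertainty isn't a defect of the method; it's the price of being able to search where no finite list of well-defined instructions exists.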

I find it highly unlikely that we could ever know all the factors at play when testing, so I dare claim that uncertainty is built into the very essence of all testing. An attempt to remove it seems to me like an attempt to lull oneself into a false sense of security and a dangerous, unfounded confidence that should not be there.

As Michael Bolton said during the first Rapid Software Testing course in Helsinki: "We testers are in the (false) confidence-demolition business". Hear hear.

11 Nov 2010

Testing Dojo, Helsinki, Nov 11th

UPDATE Dec 9th

Unfortunately, I had to reschedule the testing dojo due to a work situation. Updated info, with a link to registration, below:

The next testing dojo will be organized on Wednesday, December 15th, at 18:00-21:00 (with time for sauna and free conversation afterwards) at the Uoma premises, as mentioned in the original post. For more information and registration, see: Testing Dojo II.

Introductions

Last night, I participated in my first-ever testing dojo, right here in Helsinki, Finland. The event was organized by Tiina Kiuru at the Reaktor premises. We had a total of 12 participants in addition to Tiina, with people's positions ranging from developers through testers to managers in a multitude of different kinds of companies. I thought this was a very promising start, since the variety of people and positions was guaranteed to offer a wide variety of perspectives and approaches to testing. We even had some people in the dojo saying that they had never really done any kind of testing but were curious to learn more. Hear hear, congratulations to those people for participating!

Testing, testing

The purpose of the dojo was to do exploratory testing in pairs on a specific application (Pixlr Editor in this case) in a tightly time-boxed manner (5 minutes per tester). The testing was performed with one person doing the actual testing and another person writing down notes on the findings, what had been tested and what could be tested further in the future. The person performing the testing was also responsible for walking the audience through what (s)he was testing and why. At first, people in the audience made suggestions on what to test, but it was then agreed that the audience should only ask questions and not interfere otherwise unless the tester pair got stuck on something or ran out of ideas on what to test next.

We had simple, varying missions to perform like:
- open a specific picture and try to make it look like a printed target picture
- test the filters, see if they behave consistently and if they could be abused somehow
- test the history, see how many actions it will record and if the actions are recorded consistently

After each 5-minute testing session we had a short debriefing on what was tested and what we learned, which, in some cases, gave us good ideas on what the next person could start their testing with.

Recap

At the end of the 3-hour dojo we were handed colored post-it notes to write down what was bad, what was good, what we learned and what could be improved in the future. Personally, I was absolutely delighted to see people were so enthusiastic about the concept, and it showed in the notes as well; there were a lot of good comments on what people learned and how things could be improved. The open conversation and sauna afterwards were really nice too!

This being the first time for a lot of people (myself included) to take part in this kind of testing exercise, there was a bit of disorder in the beginning, but in my opinion we got things under control pretty nicely (like forbidding the audience from interfering, which many of the participants found distracting), and I'm positive we'll do much better in the next testing dojo.

What's next?

Speaking of the next time: Tiina and I agreed that I will organize the next testing dojo with her help, and that it will be held at the Uoma premises in Punavuori (Merimiehenkatu 36D, 3rd floor, to be exact). The date for the next dojo is still open, but I would expect it to be sometime in mid-December. We'll discuss the exact details next week, and I will post an update to the blog once we've made some decisions.

2 Aug 2010

Because You’re Worth It…

This post should probably have been named "Brothers in Arms, Part 3", as it was inspired by the blog post Two "Scoops" of "Bugs" by James Bach, but I felt like doing things a little differently this time. Just to keep things fresh. If that's even a valid expression, given that most of this post is about something I wrote back in 2001 (I seem to have been rather productive back then).

Commenting on what James Bach wrote: an obvious problem with the loose use of language is the communication gaps it opens up between individuals. Simply put: more room for interpretation, higher chance of misunderstandings (pretty obvious, eh). Then again, as he mentioned in his post, loose language also makes communication smoother, because drilling down to the exact specifics of every little nuance and detail right away - going too deep too fast - could lead to losing the big picture, which would not be likely to be a good thing. Balancing these two (high-level communication versus gory, but necessary, details) is an art form of its own, imo - a tester needs to speak manager in addition to tester, with maybe a little added marketing accent, to get things straight.

Anyway, a little something from the past:

There can be no single logical mapping between words and concepts - not even within a single language - simply because words are nothing more than words: generally-agreed symbols, one after another, creating a generally-agreed "meaningful" combination. In actuality, words are just empty placeholders - or pointers, if you want to put it that way - without any intrinsic or implicit content or value. It is we humans, as unique individuals, who fill those placeholders/pointers with subjective meanings that are unique to each and every one of us and are entirely based on emotions.

Since there are not, and can never be, two individuals exactly alike, there is no way a single word could have the exact same content for any two persons, unless explicitly agreed upon between those individuals. Thus, dictionary words aren't "true" or "right"; they are just meanings intersubjectively agreed on by a group of people. Then again, the purpose of a dictionary is not to enforce meanings onto words but to explain - reflect, if you want - the generally agreed meanings of those words at the time.

Media sells products with advertisements filled with images "giving meaning" to those empty words (the kind of words criticized by Richard Rorty), as if the words in themselves meant something and you had to buy the product in order to become what the images are trying to associate you with. "Buy product X and you will become beautiful. Naturally, because you're worth it." See how it goes? Is it really me who is worth the product? Is it really owning the product that makes me worth something? At least, that is what many advertisements want to imply. What these advertisements also imply (but leave unsaid) is the image of what happens if you don't buy the product advertised; from that viewpoint the advertisement reads: not buying this product makes you less worthy, less beautiful and less desirable.

No wonder people are confused and anxious.

Come to think of it, the Most Questionable Title for a Profession that I can think of would go somewhere along the lines of: Marketing Psychologist. O' fear ye mortal ones.

While, again, this is not directly testing-related, it's very strongly people-related, which does make it relevant. The single biggest reason for failed software projects in my experience is the lack of proper communication. One way or another. Which makes things interesting as one of my goals in testing is to try and help close down those communication gaps by pointing them out to people. Is that quality assistance?

9 Jul 2010

Brothers in Arms, Part 2

Of knowing something... and understanding it

Shorter post this time, but I just felt compelled to share a little something that I came across earlier today. The second part in this series was inspired by Zeger van Hese's post Feynman on Naming. I really like that post, not only because I agree with Feynman's father's methods and the points both van Hese and Feynman make, but also because it strongly reminded me of a write-up named "From books people learn to remember, from mistakes to understand" that I wrote back in 2001. To quote the write-up:

Humans have a vivid imagination, with the capability of visualizing practically anything within the limits of human experience. Yet imagination and visualization are not the same as true understanding; one cannot imagine experience.

What this means is simple: you may know exactly (neurologically and physiologically) what happens when a person sticks his/her hand on a hot oven plate, but you do not understand what it really is until you have done it yourself. If you have, you can recall what it actually felt like and how you perceived the sensation, in addition to knowing everything there is to know about the event. Your understanding has grown deeper than that of anybody who hasn't made the same mistake themselves.

While it may be intelligent not to stick your own hand on a hot oven plate after seeing someone else do it, in a way it is unwise: you will be depriving yourself of an experience. The same applies to just about everything: sex, drugs, skydiving, you name it. This is where "common sense" steps into the ring - some experiences are best left unexperienced. Personally, I hope I will never have to go through, for example, any of the following: the death of a child or wife, sexual abuse, long-term imprisonment, drug overdose, psychosis of any kind, or becoming paralyzed. Some I can prevent with my own actions and decisions; some I have no power over.

To quote "Good Will Hunting" - one of my all time top-10 movies:

"So if I asked you about art, you'd probably give me the skinny on every art book ever written. Michelangelo, you know a lot about him. Life's work, political aspirations, him and the pope, sexual orientations, the whole works, right? But I'll bet you can't tell me what it smells like in the Sistine Chapel. You've never actually stood there and looked up at that beautiful ceiling; seen that."

This is exactly what I mean by the difference between knowing and understanding.

This, in turn, reminded me of another text I wrote, also in 2001:

Furthermore, so what if some scientist, poet or philosopher had already come up with an idea 100, 1000 or 3000 years ago that you came up with right now? Ideas aren't inherited; they don't transfer to children in genes and make their lives better. No, those children will have to gain an understanding of their own, and in that, some long-dead philosopher's profound ideas or incredibly deep understanding of the world, humans or life will have no effect. I'm not saying people couldn't learn from other people's ideas, no. What I'm saying is that reading those ideas does not equal understanding them - that has to come from inside yourself; the book or the long-dead philosopher can't do it for you. Thus, it's critically important to come up with ideas of your own, in order to grow, develop and mature.

While I know this isn't directly testing-related, it's trivially easy to apply there as well. Just like van Hese wrote in his blog. I wish more people would do that - strive to understand, instead of just memorizing!

6 Jul 2010

Brothers in Arms, Part 1

The art of making enemies?

I'm pretty sure it's not exactly the best approach around to start your life in the blogosphere (or anywhere, for that matter) by criticizing people. But hey, this blog is about protesting against poor software quality, and quality always starts with the people involved - their attitudes included (feel free to contradict me on that; I'd love to see someone explain to me why I'm wrong). So if a big part of the problem is in the attitudes and mental approaches of the people rather than in the methods or techniques used, then that's what I will be protesting against today.

Starting the series...

As stated before, I will be posting a lot about my thoughts and views on what my colleagues in the testing world write in their own blogs and webpages or what I've been discussing with them through one medium or another.

The "honor" for the first post in this series goes to Rob Lambert for his excellent post Don't be a follower, be a tester, which woke a multitude of emotions and thoughts in me. Actually, the post irritated me. Not because I would disagree, but because he's so damn right in what he says. Briefly put: the irritating essence of the post is that the testing world is full of sheep bleating trendy catchphrases that make things sound really nice while actually meaning absolutely nothing (a favorite subject of criticism by Richard Rorty).

What really caught my attention were these three conclusions Rob made, based on the comments to one of his earlier posts:

  • We should not challenge the best practices of testing
  • We should not challenge the experts in testing
  • We should not talk about testing publicly

...unless we are an expert or we know the experts.

My immediate first thought, after reading this was: What, exactly, constitutes an expert in testing? How does one become an expert in testing? Added bonus question: How, exactly, would knowing an expert in testing lend more credibility to what I am saying? I'm not the expert in question, am I?

So, how does this becoming-an-expert-in-testing happen? By not challenging the current authority figures? By not trying to come up with something new and/or improved? By not tackling the problems you see and just serenely accepting them (*baah*, said the sheep and continued munching on grass) as a necessary - and unchangeable - part of the world you're living in?

Let me answer that for you: It happens by stirring the hornet's nest, by challenging the consensus. It happens by telling people that they're doing things in an ineffective way, when they are, and then proposing an alternative way. It happens by not just accepting everything that's being force-fed to you without questioning. It happens by making mistakes, learning from them and then coming back stronger and better equipped than before. It happens by acknowledging the fact that your thoughts and ideas could be shot down and torn to shreds in public, and still having the courage to present them despite the risk.

In brief: one does not become an expert (in testing) by being an idle spectator. You do it by taking a stand and making it your very own, personal responsibility to change the way things are done. And that sure as hell won't happen if you don't throw your ideas out there for people to see and have a taste of. Of course, this is just my personal view - your mileage may vary.

Dramatic, I know, but sometimes being an opinionated tester is like the IT-world equivalent of being John Rambo...

As a sidenote: seems I just discovered a defect in WordPress while writing this post. 😛
Sidenote #2: seems I just discovered another defect in WordPress when trying to add more tags to this post (sorry Rob, WordPress refuses to store your name as a tag with Capital First Letters, I'm not trying to be disrespectful here...)