New Articles — and an announcement

I just got back from Interop, the conference for emerging technologies that connect business – from the data center to the cloud to switches, servers, and software.

It was an incredibly busy week.  You might not have been there, but the “dessert first” part of the story is all of the articles that came out of the trip, with more to come.  In no particular order, here they are:

* Considering Conferences II, my trip report

* Beyond Booth Babes (Please read it before yelling at me on the intarwebs, ok?)

* Seven Ways to Find Defects Before They Hit Production, on InformIT.com

* Using Pair Programming in Code Inspections, on CIO.com

* Lumbering Toward Platform as a Service, a new blog post for 21st Century IT

* Serious Acceptance Checking, for TechWell.com; there is also a StickyMinds version

… and now for the heavy lifting

The Serious Stuff

For over a year now I’ve been slowly working on a publication, “The Best Writing in Software Testing.”

You see, I would like to produce a text that provides broad coverage of the test literature: the kind of thing that, once a new tester finishes it, gives them a meaningful introduction to the field, familiarity with its key challenges and controversies, and some proposed solutions along with their pros and cons.

And there’s a problem.

I also belong to the Context-Driven School of Software Testing, which makes the claim that there are no best practices.  Instead, practices are better or worse in a given context.

Now if you believe in “best practices”, then giving advice is easy.  You say “just do it my way”, provide a prescription, and walk away.  It may be a bad prescription, one you offer without examining the patient, but at least it is possible.

In the world of context-driven, we argue that giving a prescription without first examining the patient is not best practice, but malpractice.

Trying to get advice out of us can be very frustrating.  You just want to know how to test this application that’s supposed to go live in a week, and we keep saying “It depends!” or “Tell me about it.  What problem are you trying to solve?”

I get it, I really do.  Your manager is breathing down your neck, you just want to test the #@$# thing, you ask what you think is a straightforward question, and the answer you get back is … more questions.

Excuse me, what?

Please allow me to be critical of our school for a moment.

I have to admit, sometimes I feel bad for speakers from other schools when they face the context-driven juggernaut.  The other person can bring a specific, reasoned, logical method, one based on experience and skill, only to be met by a context-driven tester who replies with something like “That would never work for an avionics system!” or “I wouldn’t let that drive my car!” or whatever odd single counterexample we can think of where the idea doesn’t hold water.

Of course, no one in the room at the time is doing any of those things, but the context-driven tester gets to strut around as the hero who came up with the “Well, Actually” (warning: some mild language), while the guy who actually stuck his neck out with a real idea slinks back to a corner, hat in hand.

Do you think he’s going to propose another idea anytime soon?

To be charitable toward our school, when testers say that, it is usually out of a real concern for the audience. We know all too well that the transference rate for ideas is shockingly low.  People fail to understand the nuance, and information gets lost. I get why they raise the objection, and it makes sense.

But there’s a problem.

The Context-Driven Literature

For lack of a better term, I refer to the books, papers, blogs and articles published in our school as the Context-Driven Literature.  Many of the things I would like to publish come from this literature, or were influenced by it in some way.

When I look for specific, concrete ideas to give to new testers to get started, I am not finding much within our school.  There is a great deal of “it depends”, “use good judgement”, and lists of possible jumping-off points — not much in terms of repeatable exercises.  We do intend to offer an editor’s introduction to each chapter, along with a “what I have learned since” note by the author, so we can take a “practices” article and explain when and why it might not be a fit, but first I need some practices articles to work with.

Right now I would like a chapter on quick attacks and one on where test ideas come from.  So far, I have myself down for both of those – “Ten Quick Attacks for Web-Based Software” and “Seven Ways to Find Defects Before They Hit Production.”

As you can probably guess, I’m not excited about using my own work.  It just looks bad.  Beyond that, I’m looking for articles of all shapes and sizes, from 1,000 words to perhaps 7,000, but especially good articles on exploratory testing, test-driven development (developer testing), dealing with time and schedule pressure, communicating with stakeholders, load and performance testing, something on security testing, usability or interaction design, the impossibility of complete testing, and whatever else you think is important.

But I’d like something that takes a context-driven approach yet is tutorial in nature – where the reader walks away with concrete ideas.

It’s tough.  I see this very problem when I give conference speeches; people looking for a simplistic process or method they can “plug in” are sometimes disappointed by my talks, which lean toward tips, tricks, and shattering illusions.  When I do a talk that is tutorial in nature, I tend to get the inverse comments: the senior folks are disappointed that the ideas were so simplistic.

It’s tough.

Yet we have a large body of literature out there.  Once you count blog posts, books, and online resources, we have thousands of good articles to draw from, not to mention 100x as many bad ones.

Will you help me find some of the best writing in software testing?

If you do a few minutes of research and find something (or something comes to mind immediately), please leave a comment or email me: matt@xndev.com.  If it’s a fit for the book, we’ll thank you in the acknowledgements.  If not, well, hey, the time will be well spent anyway, seeking the good in advice about software testing.

I’d say that’s a win, and I hope you agree.

3 comments on “New Articles — and an announcement”

  1. Matt,

    I hear you about the “it depends” point. Don’t get me wrong, though. As you know, I believe many of the best thinkers, writers, and presenters in the software testing field are in the Context-Driven Testing school.

    Having said that, check out Michael Bolton’s post about stopping heuristics: http://www.developsense.com/blog/2009/09/when-do-we-stop-test/ In my opinion, it is a great example of exactly the kind of Context-Driven writing you’re searching for. The article (not surprisingly) refuses to provide a pat “do this”, one-size-fits-all answer, but rather than falling back on a simple “it depends”, it provides enough actionable guidance for testers in a wide variety of contexts to apply its insights to their own situations.

    – Justin
