Wednesday, November 11, 2009

Commitments and Aspirations

Here I am, setting my commitments for the year two days before my maternity leave. My mind is drawing a blank... what can I promise at a time like this? I can't even think of what I have done in the last couple of months to quantify good work. Not the best of feelings.
At the same time, there are so many aspirations that I have. The problem is that even though they would all help my project, I cannot put them down in my commitments because I will not be there for the next 4 months to deliver on them. I still want to make a list of things I will try to find time for in the next few months. I am calling them aspirations, and not expectations from self, because I seriously cannot predict the madness of the next few months.
So here they are:
1. I'll be attending a workshop by Michael Bolton on Monday (the first day of my leave). I hope to put the things I learn into practice by doing some open-source testing.
2. Want to learn how to test using the object model and the IAccessible interface. I need to learn the theory before I can implement it on something. Maybe my previous project.
3. Want to learn about testability and test patterns and apply them to a sample app. Come up with a good set of recommendations about both.
4. Practice exploratory testing. Participate in the weekend testing group wherever possible.

All this while I take care of the newborn and also give time to my toddler. Wish me luck. I have four months before I return to work, and I know I'll never find time for these things once I am back at work.

Tuesday, October 13, 2009

How do you get better test coverage?

There are two things we use to find out whether our test design has good coverage. The first is ensuring that the specification is fully covered, and the second is code coverage. Even though not all our testing is done using the test design, and our real coverage is much higher than these numbers suggest, these numbers are still important for us because they represent the repeatable tests: documented, handed down to serviceability, and in use for years after the product is released.
Spec coverage is the easy part: as long as you can trace each use case to a test case, you know you are good. But often you realise that spec coverage does not necessarily lead to good code coverage. For code coverage, we often work in a reactive mode: if the coverage does not meet our acceptance criteria, we look at the code, see what we've missed, and come up with tests for it. While this does give us adequate coverage in the end, there still seems to be something missing in our test design.
A better approach would be a hybrid one. We should not look only at the requirement specification to come up with our initial test design, but at the dev design too. Our testing should be phased/scoped. While unit tests remain primarily a dev responsibility, our test design should not be completely agnostic of the dev design or code. The dev design helps us come up with responsibility-based test cases at the class/subsystem/integration level, while the dev code helps us come up with implementation-based test cases, which are necessary for certain types of bugs, exceptions for example. We cannot really provide coverage for the errors our code will produce unless we know how it is implemented. It's important for us as testers to understand how the various components of the product integrate and what the responsibility of each of them is. If we include this kind of testing early on in our test cycle, we would have a better grip on coverage, and not only that, it would guarantee that we find some bugs earlier and closer to their introduction, which reduces the cost of fixing them too. While application-level testing is important to ensure that our users are getting what has been promised, component-level testing means fewer surprises in the end.
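A toy sketch of what I mean by implementation-based tests for exceptions (the function and its rules here are entirely made up for illustration, not from any real product):

```python
# Hypothetical example: a spec might only say "parse the timeout setting",
# while the implementation raises on malformed input. Only by reading the
# code do we learn these error branches exist and need covering.

def parse_timeout(value):
    """Parse a timeout setting; raises ValueError on bad input."""
    timeout = int(value)  # raises ValueError for non-numeric text
    if timeout < 0:
        raise ValueError("timeout must be non-negative")
    return timeout

def rejects(value):
    """Helper: True if parse_timeout raises ValueError for this input."""
    try:
        parse_timeout(value)
    except ValueError:
        return True
    return False

# Spec-based test: the documented happy path.
assert parse_timeout("30") == 30

# Implementation-based tests: cover the error branches the code reveals.
assert rejects("abc")  # non-numeric: int() raises
assert rejects("-5")   # numeric but negative: the code's own check
```

The spec-level test alone would leave both `raise` paths uncovered, which is exactly the kind of gap the reactive code-coverage pass keeps finding.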
One more side effect of this exercise would be more meaningful reviews of the dev design by the test team. Do we as testers care only about the implemented feature, or should we also care about the design behind it? The design does affect the quality of the shipped product, especially when the requirements start changing. A good design ensures we can fit the changing requirements in better.
Next time we start out to design tests for a new product, let's keep these things in mind. As testers, one of our jobs is to collect information about the quality of the product; let's own this responsibility more completely by getting involved in all aspects of the project better than before.

Tuesday, September 29, 2009

Specification based testing - Second of my approaches to testing

                “Bug prevention is testing’s first goal.” – B. Beizer

One of the best practices in our team is that both the devs and the testers review the functional specification before it is accepted. As a tester, this is the time to make the most of. One of the practices I have learnt from Robert Binder's Testing Object-Oriented Systems is analysing the requirements while you read them. He recommends a pattern called the extended use case. If you rewrite the requirements in this way, trying to fill in the different scenarios/inputs, you can detect ambiguities in the functional specification far better than if you just read through it.
The functional specification is the basis for the system tests. Because you can start work on these as soon as the functional specification is out, you have your system test strategy ready even before the developers start churning out features. While you run the system tests last, you write them first. As features become available, you can start running some of them.
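To give a flavour of the idea, here is a rough sketch of a requirement rewritten as a variant table, using a made-up "withdraw cash" rule set (the values and the toy implementation are my own illustration, not Binder's exact notation):

```python
# Each row is one variant of a "withdraw cash" use case:
# (balance, requested amount, expected outcome).
# Filling in the table forces questions the prose spec may leave open,
# e.g. what happens when the amount equals the balance, or is zero.
variants = [
    (100, 50,  "dispensed"),  # normal path
    (100, 100, "dispensed"),  # boundary: amount == balance
    (100, 150, "rejected"),   # insufficient funds
    (100, 0,   "rejected"),   # is a zero withdrawal valid? spec unclear
    (100, -10, "rejected"),   # negative amount: spec probably silent
]

def withdraw(balance, amount):
    """Toy implementation of the imagined rules, for the table to run against."""
    if amount <= 0 or amount > balance:
        return "rejected"
    return "dispensed"

for balance, amount, expected in variants:
    assert withdraw(balance, amount) == expected
```

The point is not the code but the table: writing the variants down exposes the ambiguous rows long before any feature exists to test.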

Sunday, September 27, 2009

Different approaches I take to testing a product - Part 1 Ad-hoc

As part of testing a product, I take several different approaches to the same task. The approach depends on the stage of the product, the type of product, and the time at hand, amongst other factors. I thought I'd pen down these approaches along with the reason I use each one and its advantages/disadvantages.


The first kind of testing I ever did was ad-hoc testing. I joined my company with no prior experience or knowledge of testing. My manager asked me to just play with the product and try to find out as much as I could about it by trying out different features. He asked me to simply explore the product and write down any questions I might have, and also anything that in my opinion was a bug. As a novice, my approach was purely ad-hoc. The only thing that helped me with this approach was my curiosity about the product. I tried the product with a notebook and pencil at hand. I would write down the features I could not figure out and things which seemed intuitively wrong.


Over the years, this ad-hoc testing has become a part of my routine and something I try to do on a regular basis with the products I am working on, or other products that my team owns. With time I have realised that if you focus on a particular area, like UI or internationalization, when you are testing in this manner, it increases your chances of finding bugs. The idea is that while you focus on a particular area, you don't start out with a plan. You just try to do different things with the product; some of them do turn out to be far-fetched and a figment of your imagination, but that just adds to the fun. The more twisted the thing you try to achieve with the product, the more fun you have. That's the basic idea of my ad-hoc testing: fun. It acts as a break from whatever else I was doing or trying to do. And one hour of just playing with the product gives more than enough insight and issues to justify the time I spend on it.


I have found more bugs with this ad-hoc testing approach than with the other, planned approaches I have used. It has helped me learn the health of a new feature faster than going through the test cases, and also find test holes during the later part of the product cycle.


I guess this approach is similar to the exploratory way of testing, though I don't know enough about exploratory testing to confirm it. Try it out if you don't already do it. It's fun and productive.

Monday, September 21, 2009

Difficult to be a customer - My first lesson from Bangalore weekend testing

Last Sunday, I had a chance to interact with a group which picks up an app and tests it for a couple of hours. This group is called the Bangalore weekend testing group. For me this was a brand new experience and something I had never done before. In one hour, I had to install, learn, use and find bugs in an app I had never heard of before. Yes, it had functionality I could actually use in my life, so I could have been a customer for that product. I realised how difficult it is to be a customer and use a product for the very first time.
As testers we have access to so many sources: we have the functional specs, we have analyses of similar products, etc. These form our basic expectations of the product, and we test based on them. Sure, we question the spec where it goes against some basic notion of ours, or we suggest a different feature where we 'think' the customer might benefit. But the fact remains that at each point when we are doing that, we already know our product pretty well. A customer, on the other hand, gets the product knowing nothing about it. Either the functionality should be easily discoverable through the UI, or she would have to read the entire Help. I also realised that when you are itching to use a product, you don't want to read the Help. When I was testing as a customer, I expected the basic functionality to be apparent through the UI. Sure, if I want to do something extra, I will go read the Help. But I should not have to do that as step 1.
As testers, we should learn to be a customer, a novice: forget everything we know about the product and use it as if we are seeing it for the first time. This would help us give our customers a better product.

Friday, September 18, 2009

What Testing means to me

Testing to me means finding out what the product should be like better than the designer, knowing what the product does better than the implementers, and questioning the designer, the implementer and the product with the curiosity of a 5-year-old.


Being a tester means being able to wear different kinds of hats. Wear the customer's hat to understand what the product should do, the developer's hat to see what the vulnerabilities in the code could be, the writer's hat to write the perfect bug report, and the negotiator's hat to convince others why your bug should be fixed.