This was my first guest post, written for the KnowledgeTester blog and published on July 4, 2013. I am re-posting it here to keep a history of my online presence. Here is a copy of the full post from Knowledge Tester's site.
A recent blog post by Majd, in which he described the time spent on non-testing activities as time wasted, caused some stir among the blog's followers. This led him to write another post on the same topic, but the later post looked at non-testing activities as “supporting activities”.
After going through his posts and the responses he received from knowledgeable readers, I found this whole discussion very much contextual. I am still wondering whether there exists an absolute list of activities along which we can draw a line between valuable and wasted testing effort. In different teams and different test cultures, testing activities are perceived very differently. It depends heavily on the organizational culture, the test culture and, most importantly, on the value a test team adds to the overall product development life cycle.
Having first-hand experience of working with a very fast-paced test team (in an Agile development environment), and later becoming part of a test team following a traditional software development model that depended heavily on thorough test documentation, I strongly feel that the notion of time wasted during testing really depends on the environment in which the testing is done.
In the former case, the main focus of the test team essentially revolves around making sure that an application is tested quickly and smartly; activities like creating virtual machines, generating test data, writing and maintaining test plans, or even reporting and verifying defects (as Majd has also mentioned) seem to take a heavy toll on the perceived value of testing. In an environment like this, fast and correct delivery of information and the shortest possible feedback loop play an important role in a team’s success. Test teams take great pride in sharing numbers like “how quickly their smoke test plan was executed and how briskly a build’s health status report was shared with the stakeholders”. In such a team, “the sooner the better” holds real value. Keeping that in view, the “non-testing” activities Majd has pointed out could be considered time wasted in performing the overall testing job, because they apparently have no perceived value in meeting the “need for speed”.
On the other hand, a traditional, documentation-based development environment focuses more on the accuracy, completeness and coverage of the test specification to ensure that the requirements are correctly understood and met, so in this context the discussion about non-testing activities takes on a completely different meaning. Writing better documents (sometimes perfect documents) can take a significant share of a test team’s time and effort. Such a test team is more likely to spend time on activities like writing and maintaining documents, scheduling document reviews and waiting for moderation and approval. A test build’s arrival can take months; hence there is no sense of urgency about updating the stakeholders on a build’s health status until a complete test round is finished (this also depends on how many testing layers a team has in place to ensure a short feedback loop). The completeness and accuracy of testing depend heavily on perfect test documents, so in such a scenario it is impossible not to spend time writing good ones. Whenever documentation is bypassed for lack of time, the test team cannot proceed without heavy churn, and it eventually suffers from a sense of guilt for not having written the perfect test specifications that could ensure full and accurate test coverage. So, for such a team, documentation might come first and testing later.

Having said that, this reminds me of an experiment conducted at my previous company, where a university research student was running an academic study on the efficiency of exploratory versus traditional testing approaches. Our test teams were asked to take part in his experiment, and the whole test group was divided into two teams. The first team was asked to run tests after writing test specifications using the traditional testing approach, whereas the second team was expected to perform exploratory testing by writing quick, brief test sessions.
I was randomly put into the team expected to follow the traditional testing approach, i.e. writing detailed test cases before actually testing the application. Since I have always counted myself a true Agile tester at heart, becoming part of a traditional test team was disappointing for me: I felt my time would be wasted writing tests and I would not be able to find bugs as quickly as the other team, who did not have to write detailed test documentation. As I expected, I spent most of my time struggling with writing test specifications, which in my eyes was an effort of lesser value and little significance. But now, as I work in a team where the definition of valuable testing activities starts with good test documents, I feel less bad about writing test specifications. Hence, in my view this discussion is contextual: it really depends on the environment you are working in and the perceived value of the activities a tester is performing. As a reader, you may disagree with that! Just a reminder to myself: I still am a true Agile tester at heart. When I cannot find a way around writing documents, I try to find ways of doing it quickly, smartly and efficiently. For me, the value of speed and accuracy still matters! – :)
(* Huma is one of the few people who pushed me to start a blog on software testing. I had the pleasure of working with her in a team where we had lengthy discussions on the role of testing, being Agile, how to get the whole team to think about quality, and so on. Thanks to Huma for sharing her thoughts on this topic.)