Does Testing Add Value

11 Jul Does Testing Add Value

My simple answer: “No, it does not.”

Ok, before you get your knickers in a twist, I am not saying you should get rid of the Q/A team or ship a product that's hanging together by dental floss. On the contrary, we should always ship a product of the highest quality. But despite that, I maintain that testing, in and of itself, isn't value-added.

Ok, so now you're thinking that the old guy's lost it. To understand what I mean, start with the definition of value: a product or service that the customer is willing to pay for. In the end, customers want a high-quality product. But they don't necessarily want to pay for large amounts of testing, reporting, replicating, root-cause analysis, fixing, verification, and regression, plus the bureaucratic process that goes into EACH bug.

So how do you develop software that’s of the highest quality with the most value to the customer? Let’s look at two different ways to develop a product:

  1. Big bang approach: Spend a lot of time coding. Ignore defects along the way. Address them later in the ‘Testing’ phase. Add major risk to the release.
  2. Small batch approach: Deliver small product slices and test as you go, as close to daily as possible.

In the first approach, defects pile up and get more complex. Worse, they get buried deeper each day. Developers spend most of their time finding, triaging and fixing them, adding significant time and cost to the product plan. I'm not even going to mention the inefficiencies of bouncing a defect back and forth between the Q/A and development groups until the fix is verified and regressed. Yeah, let's just say I didn't mention that. It's ultimately the customers who pay for this unnecessary rework. Why unnecessary? Because a good amount of the time and cost to find and fix these defects could have been avoided.

Alternatively, savvy organizations adopt practices to build integrity into the product. In fact, these organizations develop test cases first and use those as their requirements. Moreover, those test cases are objectively measurable, minimizing any ambiguity in the requirements. In essence, the developers allow the testing to drive development, not the other way around. This, of course, is the philosophy of test-driven development (TDD), but on a much larger scale than simply unit testing. The idea here is to build small, high-quality slices of the product and integrate them as soon as possible. Because each slice is minuscule, new defects will most likely be the result of the latest addition. The smaller the slice, the faster its defects can be found and resolved.
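
To make the test-first idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical and invented purely for illustration (the apply_discount function and the 10%-off rule are not from any real product); the point is simply that the test is written first, reads as an objectively measurable requirement, and only then is just enough code written to make it pass.

    # Step 1: the requirement, written first as an executable,
    # objectively measurable test case (hypothetical rule: orders of
    # $100 or more get 10% off; smaller orders are unchanged).
    import unittest

    class DiscountRequirement(unittest.TestCase):
        def test_orders_of_100_or_more_get_10_percent_off(self):
            self.assertEqual(apply_discount(100.00), 90.00)
            self.assertEqual(apply_discount(250.00), 225.00)

        def test_smaller_orders_are_unchanged(self):
            self.assertEqual(apply_discount(99.99), 99.99)

    # Step 2: the product slice, written second, and only as much of it
    # as is needed to turn the tests above green.
    def apply_discount(order_total):
        if order_total >= 100:
            return round(order_total * 0.90, 2)
        return order_total

    if __name__ == "__main__":
        unittest.main()

When a slice this small is integrated the same day, a newly failing test points almost certainly at the code added since the last green run, which is exactly the small-batch effect described above.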

The first approach takes weeks or months. The second approach exposes problems in hours, or, at most, in a couple of days, allowing us to spend customers' money on creating more high-quality features, not on finding hidden defects. Of course, even this approach isn't 100% bug-free, but our experience has shown defects dropping by as much as 80% in a single development life-cycle. That level of achievement isn't the normal case, but it can, and does, happen.

With all this said, is testing valuable to the customer? Still no. Is it necessary? Absolutely. But the goal is to minimize the effort it demands. How? Use industry best practices to build integrity in (unit testing, etc.). Build and integrate tiny slices and test as you go. And always employ a WIP (work-in-progress) cap to limit how much is in flight at once.

These practices will drastically reduce your delivery cycle time. More efficiency means more time spent creating high-quality features. And THAT's more value to customers.
