The dilemma of "good enough" in software quality
I recently finished some research on the cost and quality benefits of upfront, automated code and design review (i.e. prior to and during the development and build process) through static analysis. One of my conclusions was that the case for "good enough" is no longer good enough in delivering software. But some might argue that this viewpoint promotes static analysis as a means of perfecting something - "gilding the lily" - rather than as an essential tool for knowing whether something is "good enough".
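As a concrete illustration of what "upfront and during the build process" can look like in practice, here is a minimal sketch of a build-gate script - in Python, with mypy and pylint as representative analysers rather than tools my research prescribes - that fails the build whenever static analysis finds a problem:

    # build_gate.py - minimal sketch: run static analysers and fail the
    # build on any finding. mypy and pylint are representative choices;
    # substitute whatever analysers suit your codebase and risk profile.
    import subprocess
    import sys

    CHECKS = [
        ["mypy", "src/"],    # type errors caught before the code runs
        ["pylint", "src/"],  # bug patterns and style violations
    ]

    def main() -> int:
        failed = False
        for cmd in CHECKS:
            print("Running:", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                failed = True
        return 1 if failed else 0

    if __name__ == "__main__":
        sys.exit(main())

Wired into a continuous integration job, a gate like this makes the "good enough" judgement explicit and automatic rather than a casualty of deadline pressure.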
Not so. In many fields it is widely accepted that you should not apply a rigorous quality or testing schedule without first understanding what is at stake, what the risks might be in the event of failure and how severe those risks are - and from there deducing the level, and appropriateness, of the testing techniques, processes and tools needed to ensure that what is delivered is fit for purpose or "good enough".
In software delivery, I would argue, the desire to deliver something quickly - especially to meet a deadline, however ambitious or unrealistic - too often overrides the key question of fully determining whether what is being delivered is actually "good enough".
The attitude of "good enough" has been hijacked as an excuse for sloppy attitudes and processes and a "let's suck it and see" mentality. Surely such an attitude cannot continue in delivering software that increasingly underpins business-critical applications?
It is, you could say, a problem of managing expectations. A one-star hotel probably offers adequate and "good enough" services for those on a budget, but it would not be sufficient for five-star luxury seekers. The key, though, is that the customers of each know what they are getting for their money and whether it is fit for their purpose. There is a quantifiable means of grading what is delivered and matching it to what is expected.
Advances in software technology allow varying levels of sophistication and capability and enable much more to be achieved. Software is becoming deep-rooted in every aspect of our modern lives. New business models are being founded on applications and systems developed with many of these new technologies and approaches. If organisations restructure their working practices around applications and systems that rely on the new generation of communication and collaboration technologies, then failure due to poor code or application quality becomes even less acceptable.
The inclusion of rich media and visuals, the push for greater collaboration through the Internet and unified communications for richer interactive social or work activities mean that any failure in such services would not only create higher levels of frustration - it would cut productivity more sharply. On top of this, company brands become more easily exposed to damage.
Increasingly, when people buy or use software they expect, if not five-star performance, then performance and quality that is at least "good enough".
How many companies can rigorously look at their testing programmes and say that is what they are delivering?
If you set out to deliver something that is important then applying the sloppy, suck-it-and-see "good enough" mentality just doesn't stand up.
What I find incredible after all this time - given the weight of evidence and eminent studies on the cost savings, and the growing complexity and importance of software in our modern lives - is that the sloppy mentality and attitude still holds such sway in software delivery processes.
Many organisations don't spend nearly enough effort on improving the quality of the software they produce. More often than not they pay lip service to the concept whilst secretly believing it is a waste of resources (time, staff and money). As I stated at the start, that belief couldn't be further from the truth. In a future blog and report I will examine and discuss the evidence in more detail.
Clearly there is more to this debate, especially as we place greater reliance on software and grapple with the growing pressure to release new features and functions more quickly, both to differentiate in the market and to gauge users' reactions and acceptance.
Is it better just to get something out there that is of reasonable quality and worry about deeper-rooted bugs later, since it is well known that software is never flawless?
This is certainly the option taken by many in the business of delivering software, who have met with varying degrees of success - and, if you are on the receiving end, frustration, disappointment and loss.
In the end it boils down to one's interpretation of "good enough" and the answers to two intriguing questions: who and what should dictate what constitutes "good enough" software quality, and how do you go about governing it?
If you are looking to answer these questions you will need to adopt a combination of strategies:
- You will need the support of good processes. Static analysis is certainly one way of improving those processes while staying efficient. It is a tried and tested means of improving software quality and predictability and lowering software costs (see the sketch after this list). However, it is not the only tool for software quality: it is part of an arsenal of tools and processes needed for a wider, more lasting and cost-effective approach to delivering quality software.
- You will need a clear vision, and an understanding of the goals (business and technical) to establish whether something is actually "good enough".
- You will need to put in place a connected framework of tools and communication to underpin and support your governance process.
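To ground the static-analysis point, here is a minimal sketch in Python - the function and data are invented purely for illustration - of the kind of latent defect a type checker such as mypy flags long before any user hits it:

    from typing import Optional

    def find_user(user_id: int, users: dict[int, str]) -> Optional[str]:
        # Returns None when the user is missing - a fact the type records.
        return users.get(user_id)

    def greeting(user_id: int, users: dict[int, str]) -> str:
        name = find_user(user_id, users)
        # A static type checker flags the next line before the code ever
        # runs: name may be None, so calling .upper() can fail at runtime.
        return "Hello, " + name.upper()

Running mypy over this reports an error along the lines of 'Item "None" of "Optional[str]" has no attribute "upper"' - exactly the class of defect that a "suck it and see" delivery leaves for customers to find.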