Why we need ALM: industry's dangerous flirtation with software quality
Yesterday I spent the evening at a dinner in London as a stand-in for my colleague Bola Rotibi, talking about Application Lifecycle Management (ALM) and governing software delivery to a group of around 20 senior IT leaders. Due to my own disorganisation I'd not realised that this was a stand-up talk with no visuals or projection, and I'd created a slide deck for the talk. So, on the train to the event, I was frantically reverse-engineering my slides into a set of brief stand-up notes. Although it was a pain, it ended up being fortuitous because the act of having to reinterpret my slides made me see a couple of points worth making that I hadn't spotted before.
The point I was making is that today's industry interest in ALM reflects a desire to deliver high-quality outcomes from software delivery work - but that's hardly a new thing. However, the need for ALM today is made more pressing by the industry's historic failure to address quality consistently. It's what I called "industry's dangerous flirtation with software quality" - a kind of perpetual "get away, come closer" posture that has by and large failed us. Although there's now 50+ years of collective memory out there in industry about why software quality is important and how to achieve it, the problem is that, fundamentally, the IT industry is a fashion industry - and each new fashion wave brings a new set of devotees, few of whom are particularly interested in taking notice of what the devotees of previous waves learned.
Let's look at a picture.
What I'm trying to show here is how disruptions in technology platforms and architecture patterns typically lead to the baby being thrown out with the bathwater. As any given approach sees mainstream application and matures, the importance of quality becomes more visible. Then, though, a new platform arrives and we start all over again. Think about how, in the client-server era, we started out hacking in PowerBuilder and VB; then "second-generation" client-server tools took more CASE-like approaches and helped organisations deliver more scalable, robust apps more quickly. Then came the web, and it seemed we suffered a mass "memory wipe" before grabbing hold of the nearest Java IDE and hacking away again.
We've been through this cycle at least three times: from mainframe to client-server; from client-server to first-generation web; and from first-generation web (simple consolidated server deployment; simple web-based client deployment) to where we are now (I'm desperately trying to avoid typing the web-two-dot-oh thing, but I'm referring to web-based services with multi-channel front ends, mashups in the mix, back-end web-based integration, etc).
Of course, so far I've really said nothing you probably hadn't thought about already. But what also struck me yesterday was how each "turn" around the cycle has added more complications to the process of software delivery. There are three parts to this.
Firstly, consider that each time we've turned through the cycle, the overall IT environment has become more complicated. This has happened in two ways: each new platform/architecture has brought more distribution and federation (more moving parts) to the equation; and nothing ever dies - mainframes, client-server systems and first-generation web systems still abound, and they remain part of the operational and integration environment.
Secondly, each time we've turned through the cycle, "hard" resources have become less scarce - and with that, we've had less innate incentive to control effort and quality than before. Back in the early days of mainframe development, CPU cycles were expensive and access to them was exclusive; it was absolutely obvious that the cost of the assets employed was so high that you had to get things right first time.
Today, for the price of a sandwich, I can get some tools, rent some server capacity, and build and deploy an application that might end up playing at least a bit part in the way a business works.
The kicker is that although the cost of "doing stuff" is rapidly tending towards zero, the cost of software failure is at least as high as it's ever been. But the industry's tendency to perpetuate the artificial "wall" between software development and IT operations means that we can easily forget about the cost of failure - and the overall risk to software delivery outcomes - until it's too late.
Thirdly, each time we've turned through the cycle, the distinction between "software" and "service" has become more and more blurred, as business services have come to depend increasingly on software automation internally and to be delivered to consumers through software-based interfaces externally.
These three factors all point to the desperate need for organisations to be able to better link activities across the whole of the software delivery lifecycle - from upstream activities like portfolio management, demand management and change management right through development, test and build all the way downstream to IT operations.
We need to turn software delivery into a business-driven service - and that means ensuring that business priorities are reflected in *what* work gets done; ensuring that business priorities are reflected in *how* work gets done; and ensuring that individual projects are carried out in the context of a "big picture" of business service delivery. That's what ALM is all about.
If you'd like to read more about Bola's thoughts on this subject, check out The dilemma of "good enough" in software quality.
Labels: ALM, architecture, development, Software Quality