Why we need ALM: industry's dangerous flirtation with software quality
Yesterday I spent the evening at a dinner in London as a stand-in for my colleague Bola Rotibi, talking about Application Lifecycle Management (ALM) and governing software delivery to a group of around 20 senior IT leaders. Due to my own disorganisation I'd not realised that this was a stand-up talk with no visuals or projection, and I'd created a slide deck for the talk. So, on the train to the event, I was frantically reverse-engineering my slides into a set of brief stand-up notes. Although it was a pain, it ended up being fortuitous because the act of having to reinterpret my slides made me see a couple of points worth making that I hadn't spotted before.
The point I was making is that today's industry interest in ALM represents an interest in delivering high-quality outcomes from software delivery work, but that's hardly a new thing. However, the need for ALM today is made more pressing by industry's historic failure to consistently address quality as a concern. It's what I called "industry's dangerous flirtation with software quality" - kind of a perpetual "get away, come closer" posture that has by and large failed us. Although there's now 50+ years of collective memory somewhere out there in industry about why software quality is important and how to achieve it, the problem is that fundamentally, the IT industry is a fashion industry - and each new fashion wave brings a new set of devotees, few of whom are particularly interested in taking notice of what the devotees of previous waves learned.
Let's look at a picture.

[Figure: the cycle of platform disruption and renewed attention to software quality]
What I'm trying to show here is how disruptions in technology platforms and architecture patterns typically lead to the baby being thrown out with the bathwater. As any given approach starts to see mainstream application and matures, the importance of quality becomes more visible. Then, though, a new platform arrives and we start all over again. Think about how, in the client-server era, we started with hacking in PowerBuilder and VB; then, "second-generation" client-server tools took more CASE-like approaches and helped organisations deliver more scalable, robust apps more quickly. Then came the web, and it seemed that we suffered from a mass "memory wipe" before grabbing hold of the nearest Java IDE and hacking again.
We've been through this cycle at least three times: from mainframe to client-server; from client-server to first-generation web; and from first-generation web (simple consolidated server deployment; simple web-based client deployment) to where we are now (I'm desperately trying to avoid typing the web-two-dot-oh thing, but I'm referring to web-based services with multi-channel front ends, mashups in the mix, back-end web-based integration, etc).
Of course, so far I've really said nothing you probably hadn't thought about already. But what also struck me yesterday was how each "turn" around the cycle has added more complications to the process of software delivery. There are three parts to this.
Firstly, consider that each time we've turned through the cycle, the overall IT environment has become more complicated. This has happened in two ways. Each new platform/architecture has brought more distribution and federation - more moving parts - to the equation; and nothing ever dies - mainframes, client-server systems, and first-generation web systems still abound. They're part of the operational and integration environment.
Secondly, each time we've turned through the cycle, "hard" resources have become less scarce - which has meant that we've naturally had less innate incentive to control effort and quality than before. Back in the early days of mainframe development, CPU cycles were expensive and access to those cycles was exclusive; it was absolutely obvious that the cost of the assets employed was so high that you had to get things right first time.
Today, for the price of a sandwich, I can get some tools, rent some server capacity, and build and deploy an application that might end up playing at least a bit part in the way a business works.
The kicker is that although the cost of "doing stuff" is rapidly tending towards zero, the cost of software failure is at least as high as it's always been - but the tendency in industry to perpetuate the artificial "wall" between software development and IT operations means that we can easily forget about the cost of failure - and the overall risk to software delivery outcomes - until it's too late.
Thirdly, each time we've turned through the cycle, the distinction between "software" and "service" has become more and more blurred, as business services have come to depend increasingly on software automation internally, and be delivered to consumers through software-based interfaces externally.
These three factors all point to the desperate need for organisations to be able to better link activities across the whole of the software delivery lifecycle - from upstream activities like portfolio management, demand management and change management right through development, test and build all the way downstream to IT operations.
We need to turn software delivery into a business-driven service - and that means ensuring that business priorities are reflected in *what* work gets done; ensuring that business priorities are reflected in *how* work gets done; and ensuring that individual projects are carried out in the context of a "big picture" of business service delivery. That's what ALM is all about.
If you'd like to read more about Bola's thoughts on this subject, check out The dilemma of "good enough" in software quality.
Labels: ALM, architecture, development, Software Quality
Software Delivery InFocus podcast - Developing in the cloud
This is the third in our Software Delivery InFocus series of podcast episodes, starring Bola Rotibi - the Principal Analyst of MWD's Software Delivery competency area. In this episode, she discusses the opportunities and challenges associated with using cloud-based software development services. Bola's guest is Debbie Ashton, Product Director for CODA - a provider of both on-premise and SaaS-hosted financial management applications.
Although there is a lot of hype surrounding the concept of "cloud computing", there also appears to be real value to be gained in some usage scenarios. The obvious financial benefit of renting software as a service (removing up-front capital expenditure and instead accounting for software as an operating expense) is coupled with the scalability that's possible (you can pay as you go, and pay as you grow) - and together these make cloud-based offerings look especially attractive in the tougher economic climate that nearly all of us look likely to be experiencing for quite a while. Many organisations today are tempted to think only of the quick advantages of the cloud - partly as a result of the hype coming from the vendor community. However, whilst the potential and advantages are well documented and clear for people to see, the disadvantages and challenges of use are not. In this podcast we look specifically at the challenges of developing applications for delivery from cloud-based software platforms. What practices, if any, should organisations take on board in developing applications and solutions using cloud-based development services? What processes and methods should organisations be putting into practice to get the most out of cloud development services?
CODA developed its SaaS-based offering, CODA2go, on Salesforce.com's Force.com platform - and in this podcast episode we hear what Debbie and her team learned about developing in the cloud along the way. Thanks to Debbie for some great insights. You can download the audio here, or alternatively you can subscribe to the podcast feed to make sure you catch this and all future podcasts!
As with all the episodes in this podcast series, we've also published a companion report which summarises the discussion and "key takeaways". You can find it here, and it's free to download for all MWD's Guest Pass research subscribers (joining is free).
Labels: coda, development, podcast, SaaS, Salesforce
Are you capable of watching your technical debt?
Bob McIlree wrote an interesting blog post at the start of October about watching the amount of 'technical debt'. The term was one I had not previously encountered: Ward Cunningham first defined it back in 1992, and Martin Fowler provided an additional and perhaps slightly clearer perspective in 2003.
"...You have a piece of functionality that you need to add to your system. You see two ways to do it, one is quick to do but is messy - you are sure that it will make further changes harder in the future. The other results in a cleaner design, but will take longer to put in place.
Technical Debt is a wonderful metaphor developed by Ward Cunningham to help us think about this problem. In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.
The metaphor also explains why it may be sensible to do the quick and dirty approach. Just as a business incurs some debt to take advantage of a market opportunity developers may incur technical debt to hit an important deadline. The all too common problem is that development organizations let their debt get out of control and spend most of their future development effort paying crippling interest payments."

Technical debt is something that we should always bear in mind irrespective of the economic climate (although it is obviously particularly important at present!). Given the common trend to apply quick and dirty fixes (making the minimum monthly payment!), I have little confidence that many IT organisations take any notice. The fundamental problem comes down to corporate DNA and the working culture of the IT organisation.
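To make the metaphor concrete, here's a minimal, hypothetical Java sketch (my own illustration, not one from Cunningham or Fowler). The quick and dirty version accretes a new branch - and a new round of re-testing - every time the business adds a discount rule; that recurring effort is the interest. The refactored version pays down the principal by turning the rules into data.

```java
import java.util.Map;

public class PricingExample {

    // The quick and dirty way: each new customer type means another
    // branch to write, read and re-test. That recurring effort is the
    // "interest" on the debt.
    static double quickAndDirtyPrice(double total, String customerType) {
        if ("GOLD".equals(customerType)) {
            return total - (total * 0.10);
        } else if ("SILVER".equals(customerType)) {
            return total - (total * 0.05);
        }
        // ...more branches accumulate here over time...
        return total;
    }

    // Paying down the principal: the same rules refactored into data.
    // Adding a discount is now a one-entry change rather than new logic.
    private static final Map<String, Double> DISCOUNTS =
            Map.of("GOLD", 0.10, "SILVER", 0.05);

    static double refactoredPrice(double total, String customerType) {
        return total * (1.0 - DISCOUNTS.getOrDefault(customerType, 0.0));
    }

    public static void main(String[] args) {
        System.out.println(quickAndDirtyPrice(100.0, "GOLD")); // 90.0
        System.out.println(refactoredPrice(100.0, "GOLD"));    // 90.0
    }
}
```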
Let us explore the financial debt analogy and equate the practice of paying off technical debt with clearing credit card debts at the end of every month. Typically, people who follow this practice tend to be fairly organised and disciplined when it comes to financial management, and it wouldn't be at all surprising to learn that they exhibit similar traits in other aspects of their lives. Similarly, organisations that are capable of managing technical debt well probably exhibit a disciplined and mature approach to the management of other aspects of their business too. Therefore, just as the way an individual manages their credit card is probably a fair indicator of how efficiently they manage their general finances, so an organisation's management of its technical debt is likely to be a good indicator of its approach to IT management as a whole.
Furthermore, just as many of us don't actually manage our credit card debts, I doubt that there are very many organisations that are effective when it comes to managing technical debt either. Given the current (and historical) performance of many IT projects in many organisations, it's highly likely that they have amassed significant technical debt that is unwieldy and poorly managed.
This is more than a hunch: there is evidence to suggest that technical debt is building. For example, Original Software recently released the results of a survey showing that "40% of CIOs admit corporate indifference to their own software quality". This suggests that effective management of any technical debt is unlikely to be a concern for a sizeable percentage of organisations. The stark truth is that most businesses are simply too lax, and lack incentives to monitor and, more importantly, manage their technical indebtedness. There is already ample evidence to demonstrate that all manner of cost savings and productivity benefits can be achieved if organisations just did things right with the support of appropriate tools. But we still see significant IT failures, and IT teams still end up fire-fighting for any number of reasons - with some even attaching a certain amount of macho cachet to their ability to fight fires rather than prevent them in the first place. Clearly, conditions are ripe for building up an unhealthy technical debt.
That said, I definitely agree with Bob McIlree when he states that there are lots of valid reasons for incurring technical debt. But I would add that in doing so, organisations need to apply a healthy dose of risk analysis and be their own 'bank manager', working out the likelihood of the debt being paid off in an acceptable timeframe. This will require organisations to look at the history of IT successes and failures, and to assess the assets they have at their disposal to ensure that the technical debt is paid off. For example:
- take a cold hard look at the effectiveness of IT processes
- ascertain the discipline and strength of character of the IT organisation to communicate sensible and pragmatic solutions to demands made by the business
- determine what tools are in place to prevent the debt spiralling out of control
The other factor to consider is where the technical debt is being incurred. How critical is the solution, and what are the implications if the debt isn't paid off - or worse still, if it increases? Is it likely to cost you your 'business'? The examples of toxic technical debt that Bob highlights, together with the actions outlined above, illustrate the dangers and provide a good guide to the likelihood of an IT organisation being prone to mismanaging its technical debt.
Ultimately, it always comes back to maturity and discipline, in every sense of both words. If your organisation is one that has invested soundly to put in place the right tools and policies to effectively manage your software delivery process (even if it is outsourced), and if it takes calm, measured steps and thinks long term, then it is likely to be able to manage technical debt effectively.
If your organisation isn't like that then the following need to be in place:
- quality management facilities that are integral to all phases of the software delivery process;
- a robust intelligence, monitoring and analytics framework that is tied to key KPIs, SLAs and other relevant metrics;
- effective collaboration amongst the software delivery team and with business units;
- an integrated framework of (most likely heterogeneous) tools;
- an end-to-end software delivery process, supported by appropriate tools, that helps to clearly identify the business requirements that are driving the software development.
It perhaps comes as no surprise, given the above, that Application Lifecycle Management (ALM) should help to alleviate technical debt (or at least smooth out the payments).
I recently spoke with Clive Howard of Howard Baines who, like me, was unaware of "technical debt" as a term. He wishes he had been aware of it, as he's had the debt management conversation with clients on many occasions.
"None of the stats surprise me. I think the situation gets worse as development becomes "easier" and faster due to mainly better tools (anyone can build an application with development tools like Visual Studio 2008, it just wouldn't necessarily be a good application).
The nuances of development can be completely lost on many clients. In some cases made worse by clients thinking that they know better and opting for the cheaper path. It is always frustrating the way that well architected and coded applications deteriorate over time due to sloppy decisions later when shortest timescales and lowest cost become the immediate priority with no thought to the additional time and cost later."

If organisations actually consider upfront capital expenditure together with long-term operational expenditure, perhaps they will have a better handle on understanding the true nature of technical debt.
Labels: ALM, development, technical debt
Software Delivery InFocus podcast - ALM challenges and direction in the real world
Following the first Software Delivery InFocus podcast which we published in September, October sees Bola Rotibi's second podcast episode, in which she discusses a series of questions focused on the topic of Application Lifecycle Management (ALM). Her guests are John Leegte (ICT Architect at the Dutch Tax and Customs department, Belastingdienst) and Steve Jones (Head of SOA and SaaS for Capgemini’s global outsourcing business).
Application Lifecycle Management (ALM) is a topical subject that has garnered significant column inches in recent months, as many of the leading players in the market have launched new versions of their ALM solutions and made strategic announcements concerning future directions and customer services and support. Over the last few months we have either heard about or seen previews of products from the likes of Borland, Compuware, HP, IBM, Microsoft, MKS, Polarion and Serena, to name but a few. Software is seen by many organisations as a key enabler of business value - whether that be through improving operational efficiency, competitive differentiation or business/product innovation. With this in mind, an ad hoc approach to software application lifecycle management cannot provide the predictability, visibility and traceability that organisations require of a process that has such a significant impact on the balance sheet. So - how relevant and applicable is ALM now and in the future, particularly in light of today's technology and business environments, when issues such as agility are so much to the fore?
The episode is slightly longer than normal (clocking in at 41'55"). There was so much good material in the conversation, we didn't want to cut anything! Thanks very much to both John and Steve for such a great conversation.
You can download the audio here, or alternatively you can subscribe to the podcast feed to make sure you catch this and all future podcasts!
As with all the episodes in this podcast series, we've also published a companion report which summarises the discussion and "key takeaways". You can find it here, and it's free to download for all MWD's Guest Pass research subscribers (joining is free).
Labels: agile, development, podcast, SOA
A new MWD FM podcast series: Software Delivery InFocus
After an extended hiatus, we're relaunching our podcasting efforts with a planned series of discussions focusing on the challenges and issues associated with software delivery processes and competence in enterprises. We've called this podcast series "Software Delivery InFocus", and it's hosted by Bola Rotibi, MWD's Principal Analyst for Software Delivery. Each podcast in the series will feature Bola and one or two guest commentators.
In this 33'06" podcast episode Bola discusses a series of questions focused on the issue of making the right technology choices. Her guests are Alan Zeichick (Editorial Director at SD Times) and Clive Howard (Founding Partner of Howard/Baines, a web development consultancy).
In an environment where software is everywhere and increasingly business critical, but where new technologies and approaches appear on the horizon at an alarming rate - when organisations look to carry out projects, are the right technology choices being made, and if not, why not? And who's to blame? What can organisations do to help them make better technology choices?
You can download the audio here, or alternatively you can subscribe to the podcast feed to make sure you catch this and all future podcasts!
As with all the episodes in this podcast series, we've also published a companion report which summarises the discussion and "key takeaways". You can find it here, and it's free to download for all MWD's Guest Pass research subscribers (joining is free).
Labels: alignment, development, podcast
The dilemma of "good enough" in software quality
I recently finished some research on the cost and quality benefits of upfront, automated code and design review (i.e. prior to and during the development and build process) through static analysis. One of my conclusions was that the case for "good enough" is no longer good enough in delivering software. But some might argue that this viewpoint is promoting the notion of static analysis as a means of perfecting something - "gilding the lily" - rather than an essential tool for knowing whether something is "good enough".
Not so. In many fields it is widely accepted that you should not settle on a quality or testing regime without first understanding what is at stake, what the risks might be in the event of failure and the severity of those risks - and from there deducing the level, and appropriateness, of testing techniques, processes and tools needed to ensure that what is delivered is fit for purpose, or "good enough".
In software delivery, I would argue that too often the desire to deliver something quickly - especially to meet a deadline, however ambitious or unrealistic - overrides the key question of whether what is being delivered is actually "good enough".
The attitude of "good enough" has been hijacked as an excuse for sloppy attitudes and processes, and a "let's suck it and see" mentality. Surely such an attitude cannot continue to exist in delivering software that increasingly underpins business-critical applications?
It is, you could say, a problem of managing expectations. A one star hotel probably offers adequate and "good enough" services for those on a budget. But this would not be sufficient for five-star luxury seekers. The key, though, is that customers of each know what they are getting for their money and whether it is fit for their purpose. There is a quantifiable means of grading what is delivered and matching that to what is expected.
Software technology advances allow varying levels of sophistication and capability, and enable much more to be achieved. Software is becoming deep-rooted in every aspect of our modern lives. New business models are being founded on applications and systems developed with many of these new technologies and approaches. If organisations start to restructure their working practices around applications and systems which rely on the new generation of communication and collaboration technologies and approaches, then failure due to poor code or application quality becomes even less acceptable.
The inclusion of rich media and visuals, the push for greater collaboration through the Internet, and unified communications for richer interactive social or work activities all mean that any failure in such services would not only have the potential to create higher levels of frustration - it would reduce productivity more sharply. On top of this, company brands become more easily exposed to damage.
Increasingly when people buy or use software they expect, if not 5-star performance, then performance and quality that is at least "good enough".
How many companies can rigorously look at their testing programs and say that is what they are delivering?
If you set out to deliver something that is important then applying the sloppy, suck-it-and-see "good enough" mentality just doesn't stand up.
What I find incredible after all this time - given the weight of evidence and eminent studies on the cost savings and the growing complexity and importance of software in our modern lives - is that the "sloppy" mentality and attitude still holds such sway in software delivery processes.
Many organisations don't spend nearly enough effort on improving the quality of the software they produce. More often than not they pay lip service to the concept whilst secretly holding the belief that it is a waste of resources (time, staff and money). As I stated at the start, that belief couldn't be further from the truth. In a future blog post and report I will examine and discuss the evidence in more detail.
Clearly there is more to this debate, especially as we place a greater reliance on software and grapple with the growing pressure to release new features and functions more quickly, both to differentiate in the market and to gauge users' reactions and acceptance.
Is it better to just get something out there that is of reasonable quality and worry about more deep-rooted bugs later, since it is a well-known fact that software is never flawless?
This is certainly the option taken by many in the business of delivering software, who have met with varying degrees of success - and, if you are on the receiving end, frustration, disappointment and loss.
In the end it boils down to one's interpretation of "good enough" and the answer to the intriguing questions: who and what should dictate what constitutes "good enough" software quality and how do you go about governing it?
If you are looking to answer these questions you will need to adopt a combination of strategies:
- You will need the support of good processes. Static analysis is certainly one way of improving those processes, and efficiently too: it is a tried and tested means of improving software quality and predictability, and of lowering software costs (there's a small illustrative sketch after this list). However, it is not the only tool for software quality. It is part of an arsenal of tools and processes needed for a wider, more lasting and cost-effective approach to delivering quality software.
- You will need a clear vision, and an understanding of the goals (business and technical) to establish whether something is actually "good enough".
- You will need to put in place a connected tools and communication framework to underpin and support your governance process.
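To illustrate the kind of thing static analysis contributes to that arsenal, here's a small, contrived Java fragment (a hypothetical example of mine, not one taken from the research): both defects below are flagged at build time, before any test case or user ever exercises the code.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReportLoader {

    // As written, a static analyser would typically report two findings:
    // 1. Possible null dereference: readLine() returns null for an empty
    //    file, so line.trim() can throw a NullPointerException.
    // 2. Resource leak: if trim() throws, 'reader' is never closed.
    public String firstLine(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line = reader.readLine();
        return line.trim();
    }

    // The same logic after acting on the findings: try-with-resources
    // guarantees the close, and the null case is handled explicitly.
    public String firstLineFixed(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line = reader.readLine();
            return line == null ? "" : line.trim();
        }
    }
}
```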
Labels: development, MWD, Software Quality, Static Analysis, Testing
Embarcadero rescues CodeGear
In our most recent report, our new analyst Bola Rotibi looks at Embarcadero's recent acquisition of CodeGear (the Borland subsidiary that it's been trying to offload for many months) - and asks: has Embarcadero made a smart move or a stupid one, and what does it mean for organisations looking at investing in development tools?
You can download the report if you have access to our guest pass research: if you're not a member, it's easy - you can register for free.
Labels: borland, codegear, development, embarcadero, MWD
A week of firsts
You might think that, having been a senior analyst for 8 years, I'd have seen most things. Well, this has definitely been a week of firsts for me.
My first ever JavaOne conference; my first week at MWD as a principal analyst covering the application delivery and lifecycle management market (moving from 8 years at Ovum); and finally my first blog entry.
I accepted Sun's invitation to JavaOne this year because rumour has it that interest in the conference and support for Java is waning, and I wanted to see for myself just what was going on.
To be honest, I've never given much credence to such hyperbolic scaremongering, and what I've seen over the last couple of days merely backed that up. There's no doubt that Java's progress has been and continues to be marked with difficulties: controlling interests and agendas, delays, confusion, swerving focus and industry bickering. However, this is to be expected of a technology that has been successful and widely adopted.
Java is a mature technology that has many masters, has spawned a number of lucrative revenue streams, opens many doors and is consumed in many different ways. The competitive alternative - Microsoft's .NET environment - although just as formidable, is beset with similar issues and one or two harder challenges.
That's the good news. The bad news is that Sun's role and involvement from technology, market and management perspectives alike have been opaque at best.
Sun has never been particularly clear about how it actually makes money from Java, or indeed how it maximises the opportunities Java creates. This doesn't really look like changing in the future.
For all that, I have enjoyed these past few days at the conference and gained a good deal of valuable insight: some disturbing, some surprising, some anticipated. Rumours of the conference's lack of importance and influence are, in my view, premature, and I will share my thoughts with you in future postings.
Far from what I was expecting, there has been a general air of optimism at the conference.
In ending this blog post I find myself with two regrets:
Firstly, the sheer number of interesting and enticing presentations made it inevitable that I would miss more than I could attend. Those I did get to that I found particularly compelling - and for which I'd certainly recommend getting hold of the presentation materials or podcasts/webcasts - were:
"Sun Mobility General Session – Java wherever you are" (the information delivered was certainly interesting and a good insight into JavaFX mobile development - and it's clear that Sun is after the same market as
Microsoft and
Adobe in this space); and
"Real World, Real Time, Instant Results: Make Information work for you" presented by
Jeff Henry of IBM (very interesting, insightful and for the most part non-partisan). I was booked on, but missed,
"Service-oriented Architecture and Java Technology: Level-setting standards, Architecture and code" delivered by Steve Jones and Duane Nickull. By all accounts this had some good insight and valuable information from guys with a lot of end user and real world interactions. The other sessions I wanted to attend but they clashed were
"The many moons of Eclipse" and
"Case Studies from the JavaFX technology world".
My second regret is not having attended JavaOne during the past eight years as a senior analyst, if only to have seen it in its heyday when Java was the exciting new kid on the technology block and firms were rushing and fighting to be part of the show.
Given the size of the big hall and the number of organisations exhibiting, I would definitely say that whilst veterans of the show might argue that the volumes are not up with its peak years (the early 2000s), the show still maintains enough influence to entice the great and the good in this market, plus plenty of start-ups and innovators.
JavaOne, in my opinion, is still an incredibly important and very necessary conference. My worry is that it will become increasingly a mouthpiece for Sun rather than a standalone entity.
Over the coming weeks and months, I am going to be writing a lot more about the state of the development market and taking a closer look at the value of some of the underlying technologies and products. I welcome any comments and questions and look forward to readers getting in touch to further the debate.
Labels: development, Java, JavaFX, JavaOne, mobility, Sun
The lore of averages
I was chatting to a friend who's a top-notch Java developer over the weekend: we were shooting the breeze about Groovy, Rails, Spring, Hibernate and various other Things That Get People Excited (let's call them TTGPEs), and discussing how far they were likely to penetrate into your average IT shop. "Why do so many people insist on following the J2EE application model and associated patterns so slavishly," said my friend, "when they're so plainly not the right tool for the job in so many scenarios?"
"The thing that you never get from reading development books," I said - he'd just finished showing me a book on Groovy - "is how difficult it can be for your average IT shop to get on board with a new development technology, when you take commercial considerations into account. You can see from looking at code samples that language A is more compact or give you more productivity than language B. But what you can't see is the bigger picture of costs and risks."
I remembered a post of Steve Jones' I'd seen a couple of months back about development as a discipline for the masses - and also this one from Jeff Schneider about the value of SOA governance.
You see, the problem for your average IT shop in taking on TTGPEs is that even when they have been demonstrated to save time and/or money, there are two real barriers to adoption. Both barriers exist primarily because these shops have no option but to see development resources as a commodity.
First, within an average IT shop - think of one within a small utility provider or a local government - the business can't make a case for paying top whack to hire the very best developers. So, they have to shoot for the "mass market" of developers - hopefully capable and dependable, but not likely to be stellar performers. They also don't have a lot of time or money available for recruiting, so they tend to minimise the complexity of interviewing as far as possible - asking for "industry standard", well-recognised skills. Unless they can find TTGPE skills within that "mass market", they're not going to consider bringing those skills into the organisation. J2EE skills are now mass-market skills. Groovy skills aren't (yet).
Second, within such an IT shop, work tends to follow those same "industry standards", because the risk of doing TTGPEs is that if people leave or get sick, and new people have to be brought in, they have to be able to get new resources up-and-running quickly. If new staff have to spend weeks or months trying to re-engineer glamorous but unknown technology before they can continue a project, that's a huge, ugly cost.
Regardless of whether J2EE is increasingly being revealed to be more like a VW crossed with a tractor than a Ferrari, then, the truth is that most people have little choice, for now, but to stick with it and make the best of things.
Labels: 4GL redux, development, governance, industry
New On The Radar report: Erudine
Those of you with an interest in model-driven development and legacy system renewal might want to take a look at our latest On The Radar report, which focuses on Erudine.
Erudine is a 55-person company headquartered in the UK whose core technology uses case-based reasoning techniques to provide a highly abstract software development environment and associated runtime engine. It should be of interest to you if you are dependent on data- and rules-intensive legacy applications, such as billing and scheduling, where the resources (both technology and personnel) required for ongoing maintenance and development are scarce.
Labels: development, Erudine, legacy migration, MWD
The SOA tool pyramid
I've had a bit of a graphic spurt (as it were) and so here's another blog post based around a diagram.
I was talking to a journalist a couple of weeks back about the kinds of functionality that customers need to look for when evaluating tooling for SOA initiatives, and which vendors provide which groups of functionality. It's not always easy to explain this kind of thing over the phone, so I thought I'd have a go at describing the main areas of functionality as a pyramid. Something like this:

[Figure: the SOA tool pyramid - service enablement; orchestration and composition; lifecycle management; service development]
In our assessments of SOA tool vendors' capabilities (see here for an example) we highlight nine separate areas of functionality, but this is a simpler picture that just focuses on four:
- Service enablement - this is functionality that helps you take existing IT assets (applications, databases, etc) and create service interfaces based on the capabilities they offer. A lot of vendors provide facilities in this area because in truth most of them started out as integration tools vendors.
- Orchestration and composition - this is functionality that helps you aggregate services and create "composite services" or "processes". Most vendors offer some capability along these lines, and most involve the ill-named "BPEL" in some way (but that's another story). The reason is the same as the reason above: many of the SOA tooling vendors had "pre-SOA" offerings which allowed you to aggregate and orchestrate resources from existing applications and systems.
- Lifecycle management - this is all about supporting development, integration and operations teams in linking their efforts to ensure that the consumer service experience is high-quality and consistent under potentially unpredictable circumstances. Typically the foundation of this capability is some kind of registry/repository, but ideally tools go further than this - firstly by helping to automate team workflows for implementing quality controls at design time; and secondly by helping to translate design intentions relating to operational SLAs into runtime policies which are tied into the infrastructure. Some vendors are starting to offer capabilities in this area, through acquisition (HP/Mercury/Systinet, webMethods/Infravio, BEA/Flashline (kind of)); OEM/resale agreements (Oracle/Systinet, BEA/Systinet) or in-house development (IBM, Sun).
- Service development - this is about the ability to design services "from scratch", or to design services where any existing applications/systems offer functionality which only partially fulfils a requirement. Ideally this starts "contract first" - first of all documenting what the service needs to do and the commitments the provider should make to service consumers; and only then refining that spec into a working service implementation and interface.
Most SOA tools vendors suck at this last bit, frankly. I think that TIBCO is starting to provide some interesting supporting facilities for this broad area with ActiveMatrix, and as vendors start to implement SCA/SDO in their tools the situation might get better across the board. In the meantime, if you've heard of a vendor targeting SOA specifically that really provides solid tools to help with this kind of "contract first" development approach, I'd love to know.
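For the avoidance of doubt, here's a minimal sketch of what I mean by "contract first" - a hypothetical example of mine rather than any particular vendor's approach, and in a real SOA toolchain the contract would more likely be a WSDL/XSD document than a Java interface. The point is the ordering: the operations, types and provider commitments are pinned down and agreed before anyone writes an implementation.

```java
// The contract: agreed with consumers before implementation starts.
interface CustomerLookupService {

    /**
     * Returns the record for the given customer ID.
     * Provider commitments captured alongside the contract (and later
     * translated into runtime policy): responds within 500ms at the
     * 95th percentile; never returns null - unknown IDs raise
     * CustomerNotFoundException.
     */
    CustomerRecord findCustomer(String customerId) throws CustomerNotFoundException;
}

// Supporting types referenced by the contract.
final class CustomerRecord {
    final String id;
    final String displayName;

    CustomerRecord(String id, String displayName) {
        this.id = id;
        this.displayName = displayName;
    }
}

class CustomerNotFoundException extends Exception {
    CustomerNotFoundException(String id) {
        super("No customer with id " + id);
    }
}

// Only once the contract is agreed does an implementation follow -
// perhaps wrapping functionality from an existing application.
class CrmBackedLookupService implements CustomerLookupService {
    @Override
    public CustomerRecord findCustomer(String customerId)
            throws CustomerNotFoundException {
        // ...delegate to the existing CRM system here (omitted)...
        throw new CustomerNotFoundException(customerId);
    }
}
```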
Labels: BPEL, contracts, development, SCA, SOA, tools