advising on IT-business alignment
Saturday, November 29, 2008

Links for 2008-11-28 [del.icio.us]


Friday, November 14, 2008

On SOA governance: for SOA, read CPOA?

A couple of weeks ago I was the happy recipient of a review copy of Todd Biske's excellent SOA Governance book. Todd's "Outside the Box" blog is one of those rarities where every post is worth reading twice - so I was very interested to see whether his writing ability might stretch to something the length of a book! Todd has clearly established himself as someone with a lot of insight on the topic of SOA Governance, so I was pretty sure I wouldn't be disappointed.

A number of other bloggers have posted detailed reviews of Todd's book, so I'm not going to do exactly the same here. Take a look at the Amazon comments if you'd like to see what they said.

For me, I'll be brief: SOA Governance is a very good book indeed, in that it does something that so many technology and business management books fail to do: it breaks a complex and hype-laden subject down into very manageable chunks, and walks through the topic clearly and at a steady pace - but it still manages to move quickly enough to prevent the reader getting bored. It's not a perfect(*) book, but then nothing is - and if it had been, I would have been too jealous to write this. We need more technology/business management books like this, and we needed just such a book on SOA Governance. Well done Todd!

I knew this was a good book because it made me revisit some conclusions I'd already had washing around in my own head for a couple of years.

One of the things that I still find as I travel around is that when I get into discussions about SOA, there's way too much focus on the "S" and not enough focus on the "A". It's almost as if we've been blinded by technologies and standards which have "service" somewhere in their names, and aren't able to look at the bigger picture.

What Todd's book reminded me of is that if you want to get real value out of service orientation, then it's the "A"rchitecture that really makes things happen. Todd's narrative keeps coming back to his definition of Governance, which revolves around People, Policies and Processes. It also talks a lot about the concept of "contracts" in the context of analysing how service providers and consumers should work together. Without People, Policies and Processes in place to guide your organisation down the right path, and without the concept of "Contract" to focus on the responsibilities that need to be described and assigned when service consumers and providers interact, such an architecture effort will likely lead nowhere. You'll end up with "just a bunch of services".

So - and this was the thought that occurred to me after reading Todd's book - perhaps we shouldn't really be thinking about "service" oriented "architecture" at all. It seems to me that what architects might find more productive to focus on is policies and contracts, not "services". Maybe "service" is better thought of as a concept that describes the outcome of this kind of architecture approach. And so maybe it's the case that there are two things in play here, and we're getting them mixed up: contract- and policy-oriented architecture (CPOA ;-) and service-oriented IT delivery?
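To make the CPOA idea a little more concrete, here's a minimal sketch - all names and policies invented purely for illustration, and not tied to any particular SOA stack - of treating the contract, with its attached policies, as the first-class architectural artefact. The "service" is then just what you observe when the contract is honoured:

```python
from dataclasses import dataclass
from typing import Callable

# A "policy" is a reusable rule governing interactions; a "contract" records
# the responsibilities agreed between one consumer and one provider.
Policy = Callable[[dict], bool]  # returns True if the request is acceptable

@dataclass
class Contract:
    consumer: str
    provider: str
    operation: str
    policies: list  # the Policy checks both parties agreed to

    def permits(self, request: dict) -> bool:
        """An interaction is valid only if every agreed policy holds."""
        return all(policy(request) for policy in self.policies)

# Two illustrative policies (entirely hypothetical)
def max_payload_1kb(request: dict) -> bool:
    return len(str(request.get("payload", ""))) <= 1024

def caller_is_identified(request: dict) -> bool:
    return "caller_id" in request

contract = Contract(
    consumer="order-entry",
    provider="credit-check",
    operation="check_credit",
    policies=[max_payload_1kb, caller_is_identified],
)

print(contract.permits({"caller_id": "order-entry", "payload": "..."}))  # True
print(contract.permits({"payload": "..."}))  # False: unidentified caller
```

The point of the sketch is only that the architectural decisions live in the contract and its policies - the service is the outcome, not the starting point.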

What do you think?

(*) one thing I found rather strange was that despite a word at the front to reassure readers that they didn't need to know any technology detail in order to read the book, at a number of points you're suddenly confronted, out of nowhere, by XML fragments which (as far as I could tell) didn't really add any value. That's a tiny niggle though.


Are you capable of watching your technical debt?

Bob McIlree wrote an interesting blog post at the start of October about watching the amount of 'technical debt'. The term was one I had not previously encountered. Ward Cunningham first defined it back in 1992, and Martin Fowler provided an additional, and perhaps slightly clearer, perspective in 2003.

"...You have a piece of functionality that you need to add to your system. You see two ways to do it, one is quick to do but is messy - you are sure that it will make further changes harder in the future. The other results in a cleaner design, but will take longer to put in place.

Technical Debt is a wonderful metaphor developed by Ward Cunningham to help us think about this problem. In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.

The metaphor also explains why it may be sensible to do the quick and dirty approach. Just as a business incurs some debt to take advantage of a market opportunity developers may incur technical debt to hit an important deadline. The all too common problem is that development organizations let their debt get out of control and spend most of their future development effort paying crippling interest payments."
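Fowler's interest-and-principal arithmetic can be sketched as a toy model. All the numbers below are invented, purely to show the shape of the trade-off:

```python
from typing import Optional

# Invented figures: the quick-and-dirty route is cheaper today but adds
# "interest" (extra effort) to every later change; refactoring pays down
# the principal once and stops the interest.
QUICK_COST = 2        # days to ship the messy version now
CLEAN_COST = 5        # days to ship the clean version now
INTEREST = 1          # extra days each later change costs on the messy design
REFACTOR_COST = 4     # days to pay down the principal later

def total_cost_quick(changes: int, refactor_after: Optional[int] = None) -> int:
    """Cumulative effort on the quick route, optionally refactoring
    after `refactor_after` changes."""
    if refactor_after is None:
        # never refactor: keep paying interest on every change
        return QUICK_COST + changes * INTEREST
    paid = min(changes, refactor_after)
    return QUICK_COST + paid * INTEREST + REFACTOR_COST

def total_cost_clean(changes: int) -> int:
    return CLEAN_COST  # no interest accrues on the clean design

for n in (1, 3, 6):
    print(n, total_cost_quick(n), total_cost_clean(n))
```

With these figures the quick route is cheaper for the first couple of changes, breaks even at three, and loses thereafter - exactly the "crippling interest payments" dynamic Fowler describes.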


Technical debt is something that we should always bear in mind irrespective of the economic climate (although it is obviously particularly important at present!). Given the common trend to apply quick and dirty fixes (making the minimum monthly payment!), I have little confidence that many IT organisations take a blind bit of notice. The fundamental problem comes down to corporate DNA and the working culture of the IT organisation.

Let us explore the financial debt analogy and equate the practice of paying off technical debt with clearing credit card debts at the end of every month. Typically, people who follow this practice tend to be fairly organised and disciplined when it comes to financial management, and it wouldn't be at all surprising to learn that they exhibit similar traits in other aspects of their lives. Similarly, organisations that are capable of managing technical debt well probably exhibit a disciplined and mature approach to the management of other aspects of their business too. Therefore, just as the way an individual manages their credit card is probably a fair indicator of the efficiency with which they manage their general finances, so an organisation's management of its technical debt is likely also to be a good indicator of its approach to IT management as a whole.

Furthermore, just as many of us don't actually manage our credit card debts, I doubt that there are very many organisations that are effective when it comes to the management of technical debt either. Given the current (and historical) performance of many IT projects in many organisations, it's highly likely that they have amassed a significant technical debt that is unwieldy and poorly managed.

This is more than a hunch: there is evidence to suggest that technical debt is building. For example, Original Software recently released the results of a survey that showed that "40% of CIOs admit corporate indifference to their own software quality". This suggests that effective management of any technical debt is unlikely to be a concern for a sizeable percentage of organisations. The stark truth is that most businesses are simply too lax, and lack incentives, to monitor and, more importantly, manage their technical indebtedness. There is already ample evidence to demonstrate that all manner of cost savings and productivity benefits can be achieved if organisations just did things right with the support of appropriate tools. But we still see significant IT failures, and IT teams still end up fire fighting for any number of reasons, with some even attaching a certain amount of macho cachet to their ability to fight fires rather than prevent them in the first place. Clearly, conditions are ripe for building up an unhealthy technical debt.

That said, I definitely agree with Bob McIlree when he states that there are lots of valid reasons for incurring technical debt. But I would add that in doing so, organisations need to apply a healthy dose of risk analysis and be their own 'bank manager', working out the likelihood of the debt being paid off in an acceptable timeframe. This will require organisations to look at the history of IT successes and failures and to assess the assets they have at their disposal to ensure that the technical debt is paid off. For example:

  • take a cold hard look at the effectiveness of IT processes
  • ascertain the discipline and strength of character of the IT organisation to communicate sensible and pragmatic solutions to demands made by the business
  • determine what tools are in place to prevent the debt spiralling out of control

The other factor to consider is where the technical debt is being incurred. How critical is the solution and what are the implications if the debt isn't paid off, or worse still, if it increases? Is it likely to cost you your 'business'? The examples of toxic technical debt highlighted by Bob together with the actions outlined above highlight the dangers and provide a good guide as to the likelihood of an IT organisation being prone to mismanaging their technical debt.

Ultimately, it always comes back to maturity and discipline in every sense of both words. If your organisation is one that has invested soundly to put in place the right tools and policies to effectively manage your software delivery process (even if it is outsourced), and if it takes calm, measured steps and thinks long term, then it is likely to be able to manage technical debt effectively.

If your organisation isn't like that then the following need to be in place:

  • quality management facilities that are integral to all phases of the software delivery process;
  • a robust intelligence, monitoring and analytics framework that is tied to key KPIs, SLAs and other relevant metrics;
  • effective collaboration amongst the software delivery team and with business units;
  • an integrated framework of (most likely heterogeneous) tools;
  • an end-to-end software delivery process, supported by appropriate tools, that helps to clearly identify the business requirements that are driving the software development.

It perhaps comes as no surprise, given the above, that Application Lifecycle Management (ALM) should help to alleviate technical debt (or at least smooth out the payments).

I recently spoke with Clive Howard of Howard Baines who, like me, was unaware of the term "technical debt". He wishes he had been, as he's had debt management conversations with clients on many occasions.

"None of the stats surprise me. I think the situation gets worse as development becomes "easier" and faster due to mainly better tools (anyone can build an application with development tools like Visual Studio 2008, it just wouldn't necessarily be a good application).

The nuances of development can be completely lost on many clients. In some cases made worse by clients thinking that they know better and opting for the cheaper path. It is always frustrating the way that well architected and coded applications deteriorate over time due to sloppy decisions later when shortest timescales and lowest cost become the immediate priority with no thought to the additional time and cost later."


If organisations actually consider upfront capital expenditure together with long-term operational expenditure, perhaps they will have a better handle on understanding the true nature of technical debt.


Saturday, November 08, 2008

Links for 2008-11-07 [del.icio.us]


Thursday, November 06, 2008

The death of middleware

I've been spending a fair bit of time this week talking to a journalist I've known for years, Danny Bradbury, for a series of features he's writing on the middleware strategies of some of the big enterprise software vendors.

After our first chat, something suddenly struck me (probably very belatedly): when middleware is talked about and sold today, what is being discussed is completely different to the stuff I first learned and wrote about in the mid 1990s. The difference is all to do with the meaning of the word "middle".

In the mid 1990s, "middle" meant "the gaps between applications and software components". Middleware was technology you turned to in order to try to build distributed systems: we were faced with transaction processing middleware, database middleware, object middleware, and so on - all different forms of middleware optimised for supporting different kinds of distributed software development paradigms. Middleware was a technological concept.

But with the birth of the application server concept in the late 1990s, which consolidated a lot of the popular distributed computing patterns of the time, together with the rise of web protocols and open-source alternatives to commercial web infrastructure, the idea of "middleware" changed fundamentally.

Now, when you see most of the talk about "middleware", "middle" means "the gap in a technology stack between an operating system and packaged applications". Middleware is now defined largely by vendors from a software product marketing perspective, rather than by customers from a technology perspective. Consider the portfolios of IBM, TIBCO, SAP, Oracle: they all talk about "middleware stacks", but these things include process management, content management, master data management, and business intelligence tools - and sometimes even DBMSs. [As a side-note, Microsoft is interesting because to an extent, it's gone in the opposite direction - building more and more "traditional middleware" capability and avoiding talking up the big stack.]

Why is the changing nature of middleware conversations important? Because, as I've mentioned before, the people and organisations that influence the language we use to describe things often end up in the best position to control what gets bought, by whom. By redefining "middleware" as being about ever-growing portfolios of infrastructure software, the biggest software vendors we have end up crowding out the propositions of smaller specialists.

So - should we reclaim "middleware", or should we just let it die a natural death?


Notes on PDC: Windows Azure

There were always going to be high expectations for Microsoft's 2008 Professional Developer Conference (PDC). This was the first PDC without Bill Gates at the helm, and let's also not forget that the PDC event scheduled for 2007 was unceremoniously cancelled at the last minute - fuelling speculation that Microsoft's product roadmap was in the process of being torn up. So, in 2008, the key question on everyone's lips was: would the event signal a Microsoft back on track with its developer story?

It turns out that 2008 has been a pretty solid year for Microsoft in terms of developer technology delivery, with the release of Windows Server 2008, Visual Studio 2008, SQL Server 2008, Hyper-V and Silverlight 2. And at PDC 2008, in short, Microsoft did what it had to do. It provided insight into its strategy and an outline solution set for a cloud computing era; made good some of the mess that was the release of Windows Vista; and showed the world that even with Bill gone, there's still a strong management team with leadership vision and product foresight in place.

I'm going to tackle these points over a couple of blog entries. First, here, I'll tackle Microsoft's cloud computing strategy and the newly-announced "Windows Azure" initiative.

After a slightly slow start, with Windows Azure, Microsoft has now placed a strong bet on cloud computing and cloud-based applications. The company now believes that "the systems for cloud computing will be setting the stage for the next 50 years - with new patterns and new models of deployment, and application models for a world of parallel computing."

It was interesting to hear the company praise Amazon's innovation and exploration in this field with its EC2 offering. However, Microsoft is of the opinion that ultimately, it'll be in a better position to offer a more comprehensive end-to-end service portfolio than Amazon - owing to the fact that it can leverage strong pre-existing market positions with development tools, management solutions and server environments. As a side-note, of course, it's worth remembering that Amazon and the other "cloud innovators" counter this position by saying that new computing models don't have to rely on old tools and skills - indeed, disruptions can (and sometimes should) bring new tools and techniques that are most suited to the job in hand. When Microsoft was at the forefront of the shift towards client-server systems and away from mainframes, we don't remember it championing COBOL or CICS on the desktop.

Azure, Microsoft's cloud-based service solution, will be a hosting platform for applications and services that can be built by Microsoft, independent software vendors (ISVs), service providers and customers using a combination of Live, .NET, SQL, SharePoint and CRM services. Azure is designed to deliver services that can be leveraged rapidly and easily, and will be delivered by Microsoft's data centres in the US and across the rest of the world. In line with its positioning vs. Amazon, Microsoft will use its existing development tools and the .NET framework as the developer entry point to the Azure platform. What's also interesting, and encouraging, is that Azure is not just for customers: Microsoft is also aiming to use Azure to host its own internal systems.

Of course, Azure was expected - and widely trailed. In the coming months, we expect that most, if not all, the major software infrastructure vendors and service providers with sizeable data centres will launch some form of "cloud based" strategy for their product and services portfolios. The company's strategy to leverage existing technology and products wherever possible could be a good move - in that it could remove a potential barrier to adoption, and is likely to please many of its existing customers. However, given the side-note above, the fact that Microsoft is sticking to its existing development technology framework for Azure isn't a guarantee of market domination. In the immediate term, though, the challenge for Microsoft, as always these days, will be to ensure that it can articulate its strategy and product direction precisely and clearly. The breadth of Microsoft's portfolio and the number of markets that the company covers means that it's all too easy for the company to confuse its audiences with stories and strategies that aren't "joined up".

There was one important missed opportunity in the Ray Ozzie keynote which sketched out Azure: a chance to explain the technical, regulatory and legislative demands that developers would likely have to meet in building application services for deployment on Azure - and to explain how Microsoft would help developers with the associated challenges. The issue was skated over very lightly, and this was something that a lot of people were expecting to hear about.

Many organisations are likely to struggle with implementing cloud-based services, and not only because of the technical challenges: there's an architecture and planning question to be addressed, too, which at the moment is not receiving as much attention as it might. The question is not so much about how to build services, but *what* services to build, and *why*. This is a question that many organisations already struggle with in the context of SOA - which is one of the reasons why most SOA efforts today are still tightly constrained project-level efforts dealing largely with application integration use cases. For all these reasons, we expect the primary targets for platforms like Windows Azure - at least in the short term - to be ISVs and service providers rather than enterprise development shops.


Wednesday, November 05, 2008

Links for 2008-11-04 [del.icio.us]


Tuesday, November 04, 2008

Links for 2008-11-03 [del.icio.us]


Saturday, November 01, 2008

Links for 2008-10-31 [del.icio.us]

  • A typology of network strategies
    Sparked by some comments from Tim O'Reilly about the role of the "network effect" for online businesses, Nick Carr has defined a variety of approaches, from data mining (a la PageRank) to sharecropping (a la Digg).



This work is licensed under a Creative Commons License.
