Wednesday, September 24, 2008

SOA governance and data governance - separate or one and the same?

Joe McKendrick (once again!) has another post which caught my eye in my blogreader today. This time he is pondering the relationship between SOA and data governance:

If data governance is inadequate — information is outdated, out of sync, duplicated, or plain inaccurate — SOA-enabled services and applications will be delivering garbage. That's a formula for SOA disaster.

He goes on to reference an article by Ed Tittel, which draws the same conclusion:

Amidst all the hype and buzzwords that surround SOA nowadays, it's still far too common for organizations that seek to integrate service-oriented architecture into their IT infrastructure to omit issues related to data integration, management and governance in their designs. As they roll out and learn to live with an SOA, however, they often discover that interoperability with other systems and solutions poses interesting problems. In fact, these problems can make interaction between systems and SOA components both vexing and time consuming.

Absolutely! When we put together our SOA Strategy Planning Tool, we explicitly acknowledged the importance of a common information model that provides:

standard representations of core information types for communication between services

Where I deviate from Joe and Ed, however, is their perspective that data governance and SOA governance are separate disciplines. Without inter-service communication there's no SOA, and so SOA governance must encompass data governance. Furthermore, that governance needs to extend beyond service design throughout the service lifecycle.

In a subsequent post, Joe calls out this post from David Linthicum in which he noodles on the same topic. I am not so sure about David's view that SOA initiatives:

need to start with the data first


It all depends on the scenario where a service-oriented approach is being applied. However, I agree with him that there is a need to understand:

the core purpose of the data, how it relates to other data, how the data is bound into entities, as well as security issues, integrity issues, and the binding to existing functions or transactions.

I would go further and say that it's not just about understanding those things. In the case of security and integrity issues, there is a need to ensure that what is understood is enforced. That means defining service contracts that take account of those requirements and enforcing them through policies.


Which brings me neatly back to the SOA versus data governance discussion. Policies are the lingua franca of SOA governance and policies apply as much to the data flowing in a service network as they do to the services themselves.

If you are embarking on an SOA initiative you need to ensure that those responsible for SOA governance, ideally through an SOA centre of excellence, include individuals with data management expertise. Your governance processes should enforce utilisation of a common information model and encompass a policy-based approach to ensure that data management objectives and constraints are enforced.
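To make the policy idea concrete, here is a minimal sketch of data-governance rules enforced as runtime policy checks at a service boundary. The policy vocabulary, field names and limits are all invented for illustration; a real deployment would express these declaratively in a policy framework rather than in application code.

```python
# A minimal sketch of data-governance rules enforced as runtime policy
# checks on an inbound service message. All field names and limits here
# are hypothetical, for illustration only.

REQUIRED_FIELDS = {"customer_id", "last_updated"}

def enforce_policy(message: dict, max_age_days: int = 30) -> list:
    """Return a list of policy violations for an inbound message
    (an empty list means the message passes)."""
    violations = []
    missing = REQUIRED_FIELDS - message.keys()
    if missing:
        violations.append("missing fields: " + ", ".join(sorted(missing)))
    age = message.get("age_days")
    if age is not None and age > max_age_days:
        violations.append(
            "data is stale: %d days old (limit %d)" % (age, max_age_days)
        )
    return violations
```

The point of the sketch is simply that data-quality constraints become enforceable checks applied to every message in flight, not just design-time guidance.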


Hmm indeed Mr McKendrick - that should be "an over-simplistic definition of 'SOA'"

Joe McKendrick over at ZDNet has been pondering CIO "SOA Advisor" Nicholas Petreley's definition of SOA:

a networked subroutine

No wonder Joe's not sure about it! Nicholas' definition is closer to that of a web service and even that's being generous!

Joe rightly points out that the definition completely ignores the 'A' of SOA and comes up with his own alternative to address that limitation:

an architecture of services

But that suggests that SOA is something that you build - an end-product. It isn't. Architects in the real world don't create architecture: they practise it! That's why they are called architecture practices.

Back in May 2005 in our first report on SOA, which set out to provide a big picture view of SOA, we proposed the following definition:

SOA is the disciplined approach through which an IT organisation manages the lifecycle of IT services, and assures their delivery, in a way that reflects business process priorities

As our SOA definition suggests, we see the ultimate value of SOA as shaping a service-oriented perspective on IT delivery, from top to bottom in the IT organisation. Service orientation can apply just as much to business architecture as it can to solution and technical architecture. It may not be quite as simple as Nicholas' definition, but I think it more accurately reflects how thinking about SOA has moved on since the heady days of "SOA as software development and integration based on web services".






Saturday, September 13, 2008

Ignore the spin: Microsoft's membership of the OMG is good news for all concerned

Tony Baer's one of the analysts who's picked up on Microsoft's recent announcement that it's joining the OMG and backing UML and BPMN. His post is pretty interesting and outlines some of the relevant history - particularly relating to DSLs and the OMG's UML. But I'd like to add to that, and talk a bit about what the announcement means to the IT industry and the wider community of enterprise software developers.

Ignoring the "bringing modelling to the mainstream" spin that Microsoft has put on the announcement, Microsoft joining the OMG is a good thing for all concerned. Modelling is already a mainstream activity to most involved directly with the production of software, whether in the software vendor community or the wider software developer community. What it isn't, for the most part, is consistent, fully-integrated or shared within and across organisations - or seen by enterprises as something with real strategic importance.

Modelling holds great power for shared communication between stakeholders. Through the power of abstraction and collective representation, model-driven development is an efficient and effective way of communicating requirements, goals and outcomes against the backdrop of existing constraints and platforms, and then automating the activities of the delivery process (i.e. development, testing and production) to ensure that what is deployed works how it was intended to (and to a sufficient level of quality).

So Microsoft's commitment to the OMG, simply put, finally gives model-driven software development a truly united voice - and also a united industry body for driving education and promoting the strategic importance of modelling to the wider community. The environment and process framework created and managed by the OMG for collaboration, sharing strategy, and generating best practices that ultimately get incorporated into standards has strong input from end user organisations and commercial vendors alike. A supply community that is united behind a common industry body is an important criterion for helping modelling and model-driven development to be seen as strategically important activities beyond the confines of the software vendor community.

Ultimately this announcement says a lot about how far both Microsoft and the OMG have come (in equal measure), the importance of modelling for the future of software for all concerned – enterprises in particular – and a recognition from Microsoft that it does actually need the OMG. The OMG needs Microsoft too (but perhaps a little less so, in my opinion).

Microsoft joins the OMG with proven technology and a vision that makes it as good a first-class citizen as IBM with its Rational toolset and strong contribution to modelling technology (an aside: the Microsoft-OMG "war" was never really about the OMG per se, but rather about IBM Rational's dominance). Microsoft has realised the power and importance of UML and the wide adoption within the market. As Tony correctly mentioned, this was vital if Microsoft was to progress with Oslo.

The outcome here is that debate (and vital energy) is no longer focused on the political and market agendas of the modelling tool vendors. Don't get me wrong, there are still political and market agendas in play, there always will be. But for the meantime and in general these will play out more "behind the scenes" than before - and that has to be good for everyone.



Friday, September 12, 2008

IBM, Business Event Processing, and CEP: behind the bag of spanners

Earlier this week I attended an IBM press and analyst summit on the topic of "Business Event Processing". To coincide with this, the company made some announcements on its "BEP leadership", with over 3700 "BEP customers". These are fairly early days in IBM's attempts to tell a coherent story about what it's doing in event processing, although it's been helping customers for many years prior to the January 2008 acquisition of AptSoft (which catalysed IBM's use of the BEP term). However, the company isn't doing itself any favours with language like this: it's all too easy for it to come across as disingenuous, or at the very least the result of a significant amount of Vaseline being spread on a camera lens.

Why? Well, it's a question of boundaries and definitions. Specifically, IBM defines BEP in the following way: (1) an event is "something that happens"; (2) a business event is "an event that has relevance to the business"; and (3) Business Event Processing (BEP) is "the correlation of heterogeneous events in order to achieve smarter business outcomes".
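To illustrate what "correlation of heterogeneous events" can mean in practice, here is a minimal sketch in which a rule fires when two related events from different sources arrive within a time window. The event shapes and the rule itself are invented for illustration; BEP/CEP engines express this kind of logic declaratively rather than in hand-written code.

```python
# A minimal sketch of correlating heterogeneous events: pair each
# "payment_failed" event with a prior "order_placed" event for the same
# customer arriving within a time window. Event shapes are hypothetical.

def correlate(events, window_seconds=300):
    """Return the customers for whom a payment failure followed an
    order within `window_seconds`."""
    pending = {}   # customer id -> timestamp of most recent order_placed
    alerts = []
    for e in sorted(events, key=lambda ev: ev["ts"]):
        if e["type"] == "order_placed":
            pending[e["customer"]] = e["ts"]
        elif e["type"] == "payment_failed":
            placed = pending.get(e["customer"])
            if placed is not None and e["ts"] - placed <= window_seconds:
                alerts.append(e["customer"])
    return alerts
```

The business-relevant outcome (the alert) only exists because two otherwise unremarkable events were brought together - which is the essence of IBM's third definition above.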

BEP is a term which has come about primarily through IBM's own marketing since the AptSoft acquisition. But a similar term, Complex Event Processing (CEP), preceded this by some significant time (it was popularised by David Luckham's 2002 book The Power of Events). With a nod to the debate that still simmers about what the CEP term should really signify (is it the processing of compound/aggregated, or "complex", events - or is it the application of "complex processing" to events?), IBM's definition of BEP is very close indeed to what seems to be the pre-eminent definition of CEP. This is no accident: on a call to discuss the AptSoft acquisition, IBM's Sandy Carter explicitly called out her interest in trying to rename the "CEP category".

All's fair in love and software marketing, I suppose, even when it risks leaving everyone confused (most people are still trying to get their heads around CEP). The real problem I have with this, though, is that it's not just a simple renaming that's at work here. When IBM then goes on to talk about the event processing work it's doing, it tells a story that's much broader than the AptSoft technology area, and also broader than what most event-processing folks would think of when they think of CEP. Likewise, when IBM refers to "3700 customers", these aren't AptSoft customers (apart from a tiny minority of them). As well as focusing on WebSphere Business Events (the former AptSoft technology), IBM's view of BEP encompasses event-processing capabilities that exist in Tivoli/NetCool systems and network management products, WebSphere MQ Event Broker and MQ Low Latency Messaging, a new stream-based event processing platform called InfoSphere Streams (formerly "System S"), the SolidDB in-memory database, and many other bits and pieces as well.

In short, it looks like IBM's taking a term that only in January 2008 it used to describe a "business friendly" event processing toolset (the AptSoft technology), and is now instead applying it to any software technology that IBM offers which involves the processing of events. A cynic would say it's got a big plumbers' canvas toolbag marked "BEP", and is shoving as much into it as possible.

Now the intention of this post isn't just to bash IBM for attempting to subvert the CEP concept. The reason I don't want to leave things here is that there's more to what IBM's doing than meets the eye.

As the press and analyst day unfolded, it became clear (not through the presentations from the IBM execs, tellingly, but through Q&A and one-on-one meetings) that behind the scenes, IBM is actually trying to do something pretty interesting. It's got a big kit-bag of event-processing technologies, yes - but behind this, it's working to put together a common event processing technology framework that will allow its individual technology components to be assembled in different combinations to support different types of business and IT requirements. So, rather than having separate event processing stovepipes in the WebSphere, InfoSphere and Tivoli software portfolios, together with assorted miscellaneous other components, it's enabling all of these distinct efforts to cross-fertilize. One of the first outcomes of this work is the snappily-named WebSphere Business Events eXtreme Scale, which marries the former AptSoft technology (never marketed as a platform for high-volume processing) to the WebSphere eXtreme Scale platform.

With this in mind, I think it's a bit of a shame if IBM sticks to referring to its overall event processing architecture effort as BEP. BEP, like CEP and one of my other favourites Composite Application Development, is a "how" name - it says more about a method than its outcome. Or, in other words, although it might appeal to geeks, it says nothing about why anyone would invest in it. What would work much better, in my opinion, would be for IBM to complement (or even replace) its discussions about BEP with discussions more centred around a "what" idea that is more about the business value of event processing - what all these technologies, and the framework being built, actually enable customers to achieve. If IBM hadn't already walked away from the term, I'd suggest something like "on-demand business"... but seeing as that's verboten these days, I'll keep my thinking cap on. Perhaps "event-driven business"?

Putting all the vendor marketing stuff to one side, what does all this mean for enterprises? First, try and look behind the cynical BEP and CEP marketing stuff. IBM might be mis-stepping in its marketing, but you can be sure that it's serious about building a set of technology offerings to help customers with a variety of event-processing related problems and opportunities. Second, although algorithmic trading and other capital markets applications are where most of the technology market activity in this space has been over the past year or so, try and take a broader perspective of event processing and how it might offer value. Event processing is a useful computing approach, even when the event volumes aren't colossal, or the processing requirement isn't near-real-time, or the processing requirement isn't highly complex.

Lastly: watch out for a free Guest Pass report we'll be publishing on event processing before Christmas, where we’ll attempt to unpick the different scenarios and technology requirements in play.



Wednesday, September 10, 2008

ECM vendors collaborate on interoperability standard

Yesterday EMC, IBM, and Microsoft jointly announced Content Management Interoperability Services (CMIS), a new specification designed to enable interoperability between content management repositories. The proposed standard, which was also submitted to the open standards consortium OASIS yesterday, will create a common interface for accessing content stored in compliant repositories. This will simplify the process of integrating business applications with enterprise content management (ECM) systems, particularly in mixed environments with products from multiple ECM vendors - a situation that is common among enterprise organisations.

The core ECM focus areas for version 1.0 are collaborative content creation, and delivery of content through portals and mashups, with support for applications such as workflow/BPM, archiving, compound document management and electronic legal discovery to be built on top of the CMIS interfaces. The specification provides support for both REST- and SOAP-based interfaces.
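As a concrete illustration of the REST binding, the sketch below builds (but does not send) a request for a repository's AtomPub service document, the entry point that lists a repository's collections. The endpoint path here is hypothetical; actual URLs depend on the repository vendor's implementation.

```python
# A hedged sketch of how a client might address a CMIS repository's
# AtomPub service document over the REST binding. The "/cmis/servicedoc"
# path is invented for illustration; real paths are vendor-specific.

from urllib.request import Request

def cmis_service_document_request(base_url: str) -> Request:
    """Build (but do not send) a GET request for the repository's
    AtomPub service document, which lists the available collections."""
    req = Request(base_url.rstrip("/") + "/cmis/servicedoc")
    req.add_header("Accept", "application/atomsvc+xml")
    return req
```

The significant point is that a client written against this style of interface need know nothing about which vendor's repository sits behind the URL - exactly the interoperability CMIS is after.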


The three primary parties have been working on the standard's development since 2006, and have since been joined by fellow competitors in the ECM space: Alfresco, BEA/Oracle, OpenText and SAP.


I have to say that this is a welcome move by the ECM vendors - a standard of this kind is well overdue, and it is encouraging that so many of the leading players are on board. Clearly the implementation of such a (proposed) standard will not happen overnight - and approval of the standard by OASIS is not expected until the second half of 2009. However, we can expect the vendors involved to begin introducing CMIS-compliant code before then, especially since a key goal of the specification was to enable it to be implemented as a layer that can sit on top of existing content repositories, rather than requiring them to be redeveloped from scratch (in contrast to the related JSR 170 standard, for example). In fact, the Alfresco website is already offering up its draft CMIS implementation for preview by the developer community.

A risk to the specification's success is that it falls into the same trap that befell the ANSI SQL standard, which provided a standard way of accessing data repositories but allowed vendors to include their own "tweaks" which locked people in. The CMIS vendors acknowledge that CMIS is not trying to cover everything - for example, security and administration are left to the individual applications - and clearly some products will have differentiating capabilities that are not covered by the standard, increasing the risk of deviation. However, despite this risk, CMIS is a positive step for the ECM market.


It is also worth noting that the standard has much wider implications than just ECM - certainly any organisations looking to implement collaboration technologies should keep an eye on the progress of the standard, and should also challenge their collaboration software providers on their plans, as CMIS should make it much easier to manage collaboratively authored content in the same way as any other organisational content.



Tuesday, September 09, 2008

Software AG goes in an interesting direction for SOA governance

As part of yesterday's release of the latest iteration of its webMethods Insight product, Software AG announced an OEM partnership with Progress Software. This announcement adds the Actional runtime SOA management and monitoring technology (which Progress acquired back in January 2006) to Software AG's existing Centrasite design-time governance capabilities (which were bolstered by the acquisition of Infravio in September 2006) and the runtime policy enforcement provided by its webMethods X-Broker and partner Layer 7's XML Firewall.

The incorporation of runtime SOA management and monitoring functionality into Insight is a necessary evolution of the Software AG-webMethods integration strategy that we commented on just over a year ago. It's long been our position that SOA is more than a standards-based approach to software development and integration. The business value of a service-oriented initiative depends on a recognition that software services are experienced, just like their real-world analogues. The quality of that experience depends on a governance approach that extends throughout the service lifecycle, where the contracts defined when services are designed are subsequently enforced through policies once they are deployed and running - and where runtime metrics are captured to provide insight into the service level quality that is actually experienced. Furthermore, those metrics can be used to inform and support change management processes, so closing the SOA lifecycle loop.

Whilst the announcement doesn't come as any great surprise, the source of the runtime management and monitoring functionality does. When Oracle confirmed its intention to acquire BEA, I said:
It [the acquisition] leaves some of the other bigger specialist players - TIBCO, SoftwareAG (and to a lesser extent Progress and Red Hat) in an interesting position. On the one hand they will be more attractive, particularly for SOA and BPM, to customers looking for an application-independent infrastructure offering.
Software AG has gone to a potential competitor for the mantle of best-of-breed, specialist alternative to the likes of IBM, Microsoft and Oracle. If you had told me on Friday that Software AG was going to strike an OEM deal for SOA management and monitoring I'd have put my money on AmberPoint, which has historically been the OEM of choice for the likes of BEA and TIBCO.

I am not quite sure what to make of this decision. AmberPoint doesn't compete with Software AG directly and has established a healthy and growing customer base, as well as partnerships with some of the leading systems integrators - and a technology partnership with Software AG! Software AG's decision comes not long after Oracle's decision to drop AmberPoint. As we pointed out in our analysis of Oracle's roadmap for the BEA integration, we don't have any hard evidence for Oracle's claims that it had received negative feedback from BEA customers but it's something we will continue to explore. In light of the decision to go with Actional, it will be intriguing to see how the partnership evolves and how things pan out when Software AG and Progress are in a competitive situation.

This partnership should be welcome news to Software AG customers that have invested in the company's SOA offerings, as it will save them the time and effort of plugging the runtime governance gap that existed prior to the partnership. Those embarking on a significant SOA initiative should also give Software AG careful consideration, particularly if they are not wedded to one of the mega-platform providers.




Friday, September 05, 2008

A new MWD FM podcast series: Software Delivery InFocus

After an extended hiatus, we're relaunching our podcasting efforts with a planned series of discussions focusing on the challenges and issues associated with software delivery processes and competence in enterprises. We've called this podcast series "Software Delivery InFocus", and it's hosted by Bola Rotibi, MWD's Principal Analyst for Software Delivery. Each podcast in the series will feature Bola and one or two guest commentators.

In this 33'06" podcast episode Bola discusses a series of questions focused on the issue of making the right technology choices. Her guests are Alan Zeichick (Editorial Director at SD Times) and Clive Howard (Founding Partner of Howard/Baines, a web development consultancy).

In an environment where software is everywhere and increasingly business critical, but where new technologies and approaches appear on the horizon at an alarming rate - when organisations look to carry out projects, are the right technology choices being made, and if not, why not? And who's to blame? What can organisations do to help them make better technology choices?

You can download the audio here or alternatively you can subscribe to the podcast feed to make sure you catch this and all future podcasts!

As with all the episodes in this podcast series, we've also published a companion report which summarises the discussion and "key takeaways". You can find it here, and it's free to download for all MWD's Guest Pass research subscribers (joining is free).



Creative Commons License
This work is licensed under a Creative Commons License.
