Wednesday, October 04, 2006

Why feeds and speeds matter

I confess to being a bit turned off by functional discussions about the latest and greatest storage, server and networking technologies. It doesn’t help that they all look the same. No amount of chrome can disguise the fact that most rack-mounted equipment is about as interesting to look at as a row of shoeboxes: unfortunately, no vendor has yet taken me up on the idea of having the front-panel LEDs perform a Mexican wave, up and down the data centre. Things don’t get much better inside the box: what used to be two ‘whatevers’ became four, then eight… thanks to Mr Moore, no doubt, we can only think in doubles.

While it is important for vendors to keep up appearances and leapfrog the competition, very few applications can take full advantage of such jumps. In some environments, every CPU cycle matters and latency is the enemy of business value: highly transactional applications with little human touch, such as trading and payment processing systems (indeed, anything to do with money), or scientific applications for manufacturing, automated testing and so on. In more mundane areas, such as inventory and line-of-business automation, the need for speed tends to be tempered by user interaction – we talk about negotiating response times in seconds, rather than sub-seconds.

All the same, it would be a mistake to think that hardware performance improvements benefit only the highly transactional areas of the business, or to think solely in terms of making or saving money. As the industry continues to focus on the twin booms of compliance and IT security, attention is turning to the theme underlying both: managing risk. I was recently speaking to the CIO of a payment processing company, who told me that his organisation set as much store by offering a guaranteed service to its customers as by ensuring that the service was cost-effective. Not only did this mean that the company had redundancy built into every layer of its infrastructure; it also wanted to be sure that, when it processed a batch of payments or transmitted them to its banking partner, the process took the minimum possible time. The faster the processing, the less likely it was to fail.
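To put a rough number on that logic (my gloss, not the CIO’s): if disruptive events arrive at a roughly constant rate, the chance of at least one landing inside a processing window shrinks almost in proportion to the window itself. A back-of-the-envelope sketch in Python, with an entirely illustrative failure rate:

    import math

    def failure_probability(rate_per_hour, window_hours):
        # Chance of at least one failure during the window, assuming
        # failures arrive as a Poisson process at a constant rate.
        return 1 - math.exp(-rate_per_hour * window_hours)

    rate = 1 / 1000.0  # made-up figure: one disruptive event per 1,000 hours
    for window in (10, 1, 0.1):  # window length in hours
        print(f"{window:>4} h window -> {failure_probability(rate, window):.4%} chance of failure")

With these invented figures, cutting a ten-hour batch run to one hour cuts the exposure roughly tenfold – which is the CIO’s argument in miniature.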

More recently still, I was chatting to Craig Stouffer, VP of worldwide marketing at Silver Peak, a vendor of wide-area network (WAN) optimisation appliances. In layman’s terms, these boxes sit at either end of a communications link, looking for opportunities to speed up the traffic through a combination of compression and packet-level caching of repeated data streams. One of the most interesting examples he gave, I thought, was a customer that had reduced its backup window from tens of hours to a matter of minutes.
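For the curious, the caching half of that trick is simple to sketch. What follows is emphatically not Silver Peak’s implementation, just the general idea in Python: each appliance keeps fingerprints of the data chunks that have already crossed the link, and repeats are replaced by a short reference.

    import hashlib

    def encode(chunks, seen):
        # Sender-side appliance: chunks the far end already holds are
        # replaced by a fingerprint a few bytes long.
        out = []
        for chunk in chunks:
            digest = hashlib.sha256(chunk).digest()
            if digest in seen:
                out.append(("ref", digest))
            else:
                seen.add(digest)
                out.append(("raw", chunk))  # first sighting crosses the WAN in full
        return out

    def decode(stream, store):
        # Receiver-side appliance: rebuild the original data, adding new
        # raw chunks to the local cache as they arrive.
        data = []
        for kind, payload in stream:
            if kind == "raw":
                store[hashlib.sha256(payload).digest()] = payload
                data.append(payload)
            else:
                data.append(store[payload])
        return b"".join(data)

    # A repetitive backup stream: the repeats cost almost nothing to send.
    chunks = [b"block-A", b"block-B", b"block-A", b"block-B"]
    seen, store = set(), {}
    wire = encode(chunks, seen)
    assert decode(wire, store) == b"".join(chunks)
    print(sum(1 for kind, _ in wire if kind == "ref"), "of", len(wire), "chunks sent as references")

Real appliances work at the packet or byte-stream level, keep the two caches synchronised, and layer compression on top – which is how a largely unchanged nightly backup can collapse from hours to minutes.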

I’m not interested in promoting the Silver Peak technology, particularly as I haven’t tried it (nor am I likely to, unless I set up a WAN test bed in my garage), and it is not the only vendor in this space: it competes with companies such as Juniper (whose throughput may be smaller, concedes Mike Banic, Juniper’s director of product management, but which can offer global support, often important in a WAN situation). However, I do know that many organisations are still struggling with backups, with a resulting increase in business risk – not IT risk – from the potential for data loss. One cause is that the backup window remains an unsolved problem for many. As new technologies such as these come on stream, they offer opportunities to solve such problems and reduce the level of risk faced by the business.

The caveat, of course, is that such technologies cannot be considered in isolation, but as part of the overall IT architecture (for example, how well do Silver Peak or Juniper cope with the recovery window, post data loss?); equally, risk cannot be managed successfully, over any timescale, without best-practice processes for risk management. All the same, if better, faster technologies enable the mitigation of risks that were previously impractical to mitigate, that is reason enough to look at them.

