Why feeds and speeds matter
I confess to being a bit turned off by functional discussions about the latest and greatest storage, server and networking technologies. It doesn’t help that they all look the same. No amount of chrome can disguise the fact that most rack-mounted equipment is about as interesting to look at as a row of shoeboxes: unfortunately, no vendor has yet taken me up on the idea of having the front-panel LEDs perform a Mexican wave, up and down the data centre. Things don’t get much better inside the box: what used to be two ‘whatevers’ became four, then eight… thanks to Mr Moore, no doubt, we can only think in doubles.
While it is important for vendors to keep up appearances and leapfrog the competition, very few applications can take full advantage of such jumps. In some environments, every CPU cycle matters and latency is the enemy of business value: highly transactional applications with little human touch, such as trading and payment processing systems (indeed, anything to do with money), or scientific applications for manufacturing, automated testing and so on. In more mundane areas, such as inventory and line-of-business automation, the need for speed tends to be tempered by user interaction – here we talk about negotiating response times in seconds rather than sub-seconds.
All the same, it would be a mistake to think that hardware performance improvements benefit only the highly transactional areas of the business, or that they matter only in terms of making or saving money. As the industry continues to focus on the twin booms of compliance and IT security, attention is turning to the theme underlying both: managing risk. I was recently speaking to the CIO of a payment processing company, who told me that his organisation set as much store by offering a guaranteed service to its customers as it did by ensuring that the service was cost-effective. Not only did this mean that the company had redundancy built into every layer of its infrastructure; it also meant that when it processed a batch of payments, or transmitted them to its banking partner, it wanted the process to take the minimum time possible. The faster the processing, the less likely it was to fail.
More recently still, I was chatting to Craig Stouffer, VP of worldwide marketing at Silver Peak, a vendor of wide-area network (WAN) optimisation appliances. In layman’s terms, these boxes sit at either end of a communications link, looking for opportunities to speed up the traffic through a combination of compression and packet-level caching of repeated data streams. One of the more interesting examples he gave, I thought, was a customer that had reduced its backup window from tens of hours to a matter of minutes.
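To make the principle concrete, here is a minimal sketch of that idea – fixed-size chunking, a hash-keyed cache of previously seen data, and compression of anything new. It is a toy illustration under my own assumptions (chunk size, hashing, wire format), not how Silver Peak, Juniper or any other vendor actually implements it.

import hashlib
import zlib

CHUNK_SIZE = 1024          # assumed fixed-size chunking, for illustration only
seen_chunks = set()        # hashes of chunks the far-end box is assumed to hold already

def optimise(stream: bytes) -> list:
    """Return wire 'frames': a short reference for repeated chunks,
    or the compressed chunk itself for data not seen before."""
    frames = []
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen_chunks:
            # Repeated data: send only a 32-byte reference, not the payload.
            frames.append(("ref", digest))
        else:
            # New data: compress it and remember it for next time.
            seen_chunks.add(digest)
            frames.append(("data", zlib.compress(chunk)))
    return frames

# A repetitive backup stream: the second pass sends almost nothing.
payload = b"nightly backup block" * 200
first = optimise(payload)
second = optimise(payload)
print(sum(len(f[1]) for f in first), "bytes vs", sum(len(f[1]) for f in second))

The point of the toy is simply that backup traffic is highly repetitive, so the second and subsequent passes shrink dramatically – which is why the backup-window savings quoted above are plausible.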
I’m not interested in promoting the Silver Peak technology, particularly as I haven’t tried it (nor am I likely to, unless I set up a WAN test bed in my garage), and it is not the only vendor in this space: it competes with companies such as Juniper (Juniper’s throughput may be smaller, says Mike Banic, its director of product management, but it can offer global support, often important in a WAN situation). However, I do know that many organisations are still struggling with backups, and with the resulting increase in business risk – not IT risk – caused by the potential for data loss. One cause is that the backup window remains an unsolved problem for many. As new technologies such as these come on stream, they offer opportunities to solve such problems and reduce the level of risk faced by the business.
The caveat, of course, is that such technologies cannot be considered in isolation, but as part of the overall IT architecture (for example, how well do Silver Peak or Juniper cope with the recovery window, post data loss?); equally, risk cannot be managed successfully, in the short or long term, without best-practice processes for risk management. All the same, if better, faster technologies enable the mitigation of risks that would previously have been impractical to mitigate, that is reason in itself to look at them.