Historically, one of IT's key roles in most businesses has been to improve productivity and to innovate services that advance strategic goals in areas like efficiency and competitive edge.
This has often seen the IT department cast as a workhorse for helping streamline areas like procurement, accounting, and sales; for driving down communication costs; and for providing the framework and the reporting tools to help make major strategic decisions.
To some extent this still, of course, remains the case. But the last decade has seen a shift in businesses' expectations to the point where, today, much of IT's traditional value-add is taken for granted and where IT is asked – in fact expected – to deliver a great deal more.
With even middleweight organisations now routinely relying on hundreds of different applications and communications tools just to keep running, business leaders want more processing power, more applications, more data – in fact, more of everything. And they want it now.
As a result, the average datacentre (or backroom with the air conditioning turned way up!) is bursting at the seams with servers, appliances, networking, and all the other paraphernalia needed simply to get application services from point A to B.
This in turn is leading to the increasingly familiar, widespread, and frustrating datacentre condition known as "infrastructure sprawl": for years, organisations have kept pace with ramping demand and data volumes by adding server upon server and storage device upon storage device, and have ended up with vital resources entirely bound up in countless disparate technology silos.
Help is at hand for firms looking to get a handle on the problem. Two of the most notable and visible trends to have emerged in recent years are consolidation and virtualisation: technologies that let you run multiple independent virtual machines on a single physical box, and share resources like CPU power, RAM, and storage on demand while each workload retains its own vital individual attributes.
As powerful as they can be, however, such technologies are of limited value if used in isolation. Indeed, left to their own (virtual) devices, they can create as much sprawl as they eradicate – sometimes more – a species of sprawl that is often even harder to rein in precisely because of the absence of physical hardware.
Unchecked, virtualisation can also place even more strain on storage and networking infrastructures that may already be struggling under the pressure of spiking data capacity and performance demand.
Quite clearly then, a modern, slimmed-down datacentre demands modern, slimmed-down infrastructure. And that means blade servers.
For the uninitiated, a "blade" is quite simply a self-contained, slimline server which – packed in neatly and densely alongside other similar blades – sits in an enclosure that powers, cools, and provides connectivity and management to the whole blade array. Each blade essentially contains only the core processing elements, making individual blades hot-swappable and the whole system easier to upgrade and manage.
Also, because a blade occupies a significantly smaller physical footprint than a traditional rack-mount server – typically about the size of a pizza box – blade-driven datacentres pack a whole lot more processing punch per square inch.
There's more to blade computing than size, looks, and muscle, however. Blades consume less power too. Much less.
Be warned though. While the benefits of the blade-server model are clear, it is by no means a panacea; a magic wand to spirit away your sprawling IT infrastructure at a single stroke. All the cabling, space, power, and waste reduction in the world won't cure an IT department mired in inefficiency and set on cutting corners.
Migrating to a blade architecture is the perfect excuse to stop, look, and take stock of your IT infrastructure as a whole; to introduce some improved deployment and management policies; and to generally sharpen up all round.