Do you really have to slash and burn to upgrade your storage?
Software defining the future
Whether it’s new storage architectures, software-defined networking (SDN) or cloud computing, the assumption is that you start with bucket-loads of cash and either take a slash-and-burn approach to your existing set-up or develop a green-field site into which you can install the latest all-singing, all-dancing technology.
But what if this is actually a misconception? What if it is possible to implement SDN or flash storage gradually, perhaps as an overlay on an existing infrastructure, and even in smaller organisations?
According to Tony Lock, programme director with market research company Freeform Dynamics, it has to be. “Very few organisations are going to throw out all their old equipment and replace it – I don't think you're going to see much of that,” he says. Yet that embeds inefficiency, both in resource and time usage, especially on the storage side where new applications have been added in silos.
“Managing lots of silos is very time-consuming, plus you have a lot of unused storage,” he says. “The problem is that IT generalists are rushed off their feet, so they don't have time to learn better ways of doing things. They need the vendors and more usually the channel partners to advise that they don't have to rip and replace, and that they can move forward with what they’ve got. And then show them how.
“Compared with putting stuff into the cloud, SMBs are much more interested in making use of what's already in-house. What most organisations need is sensible advice – here's where you are, here are your options, here's how to move forward. But do the resellers know the options? I suspect not – the biggest problem is the channel. It's a huge educational job.”
For storage, one of the key technologies to look at is storage virtualisation. This could use software such as DataCore's SANsymphony or a hardware gateway such as IBM's SVC, for example. The concepts are the same: the available physical storage is aggregated into pools of blocks and the controller then draws on these pools to construct new logical or virtual storage units that can be assigned to servers and so on.
As well as constructing virtual volumes of any size, or indeed of variable size (for thin provisioning), the controller can also perform tasks such as mirroring, replicating or tiering your storage volumes below the surface, transparently to the application. In addition, because it aggregates and pools existing storage systems, it also pools their free space. This means there is no need to leave empty, wasted space on every physical system just in case an application needs it for growth.
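The mechanics described above can be illustrated with a minimal sketch. This is not any vendor's API (SANsymphony and SVC work quite differently under the hood); it is a hypothetical model showing the core idea that virtual volumes advertise a logical size while physical blocks are drawn from one shared pool only when data is actually written:

```python
# Hypothetical sketch of pooled, thin-provisioned storage.
# All names (StoragePool, ThinVolume, write) are illustrative, not a real API.

class StoragePool:
    """Aggregates several physical devices into one pool of capacity."""
    def __init__(self, device_sizes_gb):
        self.capacity = sum(device_sizes_gb)  # total pooled capacity in GB
        self.allocated = 0                    # capacity actually consumed by writes

    def write(self, volume, gb):
        # Physical capacity is drawn from the shared pool only on write,
        # so unused space is never stranded on one array.
        if self.allocated + gb > self.capacity:
            raise RuntimeError("pool exhausted: add physical storage")
        self.allocated += gb
        volume.used += gb


class ThinVolume:
    """A virtual volume whose advertised size is independent of the pool."""
    def __init__(self, pool, logical_size_gb):
        self.pool = pool
        self.logical_size = logical_size_gb   # what the server is told it has
        self.used = 0                         # what has really been written


# Three arrays pooled into 1,250 GB of shared capacity.
pool = StoragePool([500, 500, 250])

# The volume is deliberately over-provisioned beyond physical capacity.
vol = ThinVolume(pool, logical_size_gb=2000)
pool.write(vol, 100)

print(pool.capacity, pool.allocated, vol.logical_size, vol.used)
# 1250 100 2000 100
```

The point of the sketch is the last line: the server sees a 2 TB volume, yet only the 100 GB actually written consumes pooled capacity, which is what lets the controller hand out "just in case" headroom without physically reserving it.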
This kind of technology has been around for many years but when it emerged the market wasn't really ready for it. It was a solution in search of a problem. In effect it is software-defined storage. You can see analogies with SDN, both in their development and in the way that what's finally making them practicable for the mass market is the increasingly complex demands put upon IT and the development of advanced automation and orchestration technologies that need these highly flexible underpinnings.
Those automation and orchestration techniques can also make it easier to add new technology alongside an existing infrastructure without going the whole hog towards infrastructure virtualisation, says Jay Prassl, VP of marketing at flash storage developer SolidFire. He argues that by enabling users to self-serve, systems such as SolidFire's can be used to deploy new storage services without incrementally increasing the admin workload.
Hybridising the network
As with storage virtualisation, not everyone needs or can use SDN. In addition, the immaturity of SDN technology means we do not really know what the benefits will be. Indeed, the message from early adopters and vendors alike is that the cost savings are overhyped and that the real wins are more likely to be operational.
"SDN is the most mature of the SD movements, but it is still pretty immature, and there’s not a lot of people out there doing it," says Ovum analyst Roy Illsley. "SDN has got great benefits, but it's causing people to ask how on earth to do it. That's why a green-field is easier. The technology looks good, but it's not mature enough and the smallest deployment would be your smallest site.”
He adds: "SDN has all these positives, but we won't find out what it's really capable of – or what the drawbacks are – until we are actively using it. We are at a tipping point now though, with all sorts of systems going from physical to virtual.”
Not everyone will need SDN though, says David Noguer Bau, head of service provider marketing at Juniper Networks EMEA. He thinks it’s good for people who make a lot of changes or need agility in their networks, or who need network segmentation, or anyone building a multi-tenanted architecture. "Small companies tend not to play with their networks, which leaves large enterprises. Although it could also be a 10-person service provider, say, or a software developer that needs test environments set up quickly,” he says. The advantage is automation. It brings network operations to the same level as virtualised computing.
"Then it's about the talent they have – you need people with both IT and IP knowledge to help with the selection of tools and so on, and those people can be hard to find. With SDN, the network and IT managers need to agree on what the network needs and what the applications need. They have to sit together in this world, with a deeper level of agreement and understanding. In a way, the [SDN] controller becomes a gateway between the IT and networking departments."
This could also give SMBs an advantage over their larger and slower brethren, particularly if their size and relative agility makes it easier for them to absorb the network-as-a-service concepts and methodologies inherent to SDN. The same goes for SMB networking and IT staff – they will often be closely integrated and will sometimes even be the same people, making them ideally placed to bring these two otherwise warring worlds together.
It could also be an opportunity for system integrators to put together a datacentre-in-a-rack, with SDN and storage virtualisation providing the automation and orchestration frameworks alongside the rather better known hypervisor-level tools for virtual machine migration and suchlike. “The type of organisations we deal with prefer a more integrated approach. They won't contact vendors directly. It could be an opportunity for system integrators to package it all up,” says Noguer Bau.
“There are several approaches. We have a controller model with our Contrail, available licensed or as open source [OpenContrail]. Normally you'd package it with the OpenStack orchestrator. The advantage is that the customer might already have switches with VLANs and so on, so now they gradually install OpenStack, integrate it with Contrail, and it will create an overlay.”