Regarding the unification of block and file storage for the hybrid cloud

New data architectures for new workloads

Sponsored It wasn’t that long ago that much of the thinking around cloud was that enterprise instances would naturally gravitate towards the public cloud. In some extreme cases, commentators argued all workloads would go public, with on-premises computing vanishing entirely.

Of course, nothing like that has happened. The value of the corporate data centre has held strong, yet the appetite for hybrid cloud has not disappeared either. According to Flexera, 45 per cent of enterprises favour some form of hybrid set-up - a sign of just how prevalent the model has become.

This has been fuelled by companies repatriating data from the public cloud to on-premises systems. IDC reports that 80 per cent of organisations repatriated workloads in 2019, and the trend is set to continue - enterprises expect to move at least 50 per cent of their public cloud instances in-house within the next couple of years.

There are several reasons for repatriating data, and chief among them is the need for better security. In the past few years there have been several high-profile data leaks from cloud providers, and many organisations believe their data is better protected within the corporate boundary. Other reasons include better availability and performance - in particular, lower latency; improved cost management, as cloud charges can sometimes spiral out of control; and greater control. The latter is reassuring for companies that are held accountable under national and sector-specific rules.

However, not everything is going to move on-premises; there are still huge advantages to running some workloads in the public cloud. And there is a growing trend for organisations to use a variety of providers on the public side. According to IDC, 88 per cent of organisations use at least two such providers, and many use more - the average is 16 - often in complex deployments. IDC reckons 69 per cent have a definite multi-cloud strategy - a far cry from the days when firms were dipping their toes in the water with a single cloud supplier.

Hybrid reality

Hybrid is clearly here and growing, and those in charge of their organisations’ data architectures must ensure data is managed consistently, whether it sits in the cloud or on-premises. That is no small feat, given the inherent differences between the platforms involved and the distances data must traverse. Data chiefs therefore need tools that let them manage both environments with a minimum of fuss while ensuring performance and availability.

Lenovo has released a storage solution, the ThinkSystem DM Series, that allows customers to migrate to and manage a hybrid cloud with minimal effort. How? One of the key requirements in managing a hybrid environment is that those responsible for the data shouldn’t be tied to a single way of working dictated by one of their suppliers. They need the freedom to choose how they operate based on what works best.

This is important when you want to move data from on-premises storage into the cloud, or bring it back in-house. Hybrid cloud integration demands the flexibility to choose an approach that best suits the customer’s environment, setting different layers of policy according to factors such as security, backup and performance requirements.

Change is a constant

There are two modes of hybrid cloud integration on the DM Series: S3 object tiering and data replication to the cloud.

S3 object tiering is designed for the backup and archiving of data. It allows customers to move cold data from high-performing SSDs to lower-cost object storage. With this feature, customers have the flexibility to choose any of the major cloud vendors (AWS, Azure, Google, IBM and Alibaba Cloud) and are not locked into a contract with escalating object storage costs. Customers also save money because the data reduction features on board the DM Series still apply, trimming both object storage costs and data transfer costs to and from the cloud.
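To make the tiering idea concrete, here is a minimal sketch in Python of age-based cold-data movement to a cheaper S3 storage class. It is an illustration only: the bucket name, mount point and 90-day threshold are assumptions, and on the DM Series this tiering is handled by the array itself according to policy rather than by application code.

```python
import os
import time

import boto3  # AWS SDK for Python

COLD_AFTER_DAYS = 90                      # illustrative threshold for "cold" data
BUCKET = "example-archive-bucket"         # placeholder bucket name

s3 = boto3.client("s3")

def tier_cold_files(root_dir: str) -> None:
    """Upload files untouched for COLD_AFTER_DAYS to lower-cost object storage."""
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    for dirpath, _, filenames in os.walk(root_dir):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:          # not modified recently: treat as cold
                key = os.path.relpath(path, root_dir)
                s3.upload_file(
                    path, BUCKET, key,
                    ExtraArgs={"StorageClass": "STANDARD_IA"},  # cheaper infrequent-access tier
                )

if __name__ == "__main__":
    tier_cold_files("/mnt/dm_volume")  # hypothetical mount point for the hot tier
```

Any S3-compatible endpoint could sit behind such a policy, which is why the feature avoids lock-in to a single object store.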

The other mode is even more compelling. Called Cloud Volumes ONTAP, it lets you extend your storage arrays virtually into a cloud provider. In practice, this means companies can maintain the same protocols and data structures as if the data were on-premises, so applications work with data transparently, no matter where it is stored.
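The practical consequence of that transparency is that application code does not change when the data behind it moves. The sketch below, with hypothetical paths and file names, shows the idea: the application reads from a configured mount point, and whether that export is served by an on-premises DM array or by a cloud-hosted volume is decided entirely at the mount layer.

```python
import os

# Hypothetical: the same NFS export could be served by an on-premises array
# or by a cloud-hosted volume; only the mount configuration differs.
DATA_ROOT = os.environ.get("DATA_ROOT", "/mnt/orders")  # set by ops, not by the application

def read_report(name: str) -> bytes:
    """Application logic is identical wherever the volume physically lives."""
    with open(os.path.join(DATA_ROOT, name), "rb") as f:
        return f.read()

if __name__ == "__main__":
    # The caller neither knows nor cares whether /mnt/orders is on-prem or in the cloud.
    print(len(read_report("q3_summary.csv")))  # placeholder file name
```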

This is particularly attractive for smaller businesses, which may lack the dedicated storage or cloud specialists found in larger enterprises. Deployment and maintenance are simple, and the approach lets them reduce their physical storage needs: with the ability to replicate data and applications to AWS, Azure or Google Cloud, customers no longer need to deploy a second physical storage site for data protection.

With either mode, customers retain the core enterprise features on board the DM Series, such as data reduction and encryption. Data stays compressed and encrypted on-premises, over the wire and in the cloud, which shrinks both physical and cloud storage footprints and cuts transfer costs to and from the cloud. Because data remains encrypted throughout, it is never exposed in the clear at any point in the process, and there is no need for a third-party security solution. Overall, this allows customers to move to a blended OPEX and CAPEX model while reducing the cost of each.
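That end-to-end protection comes down to a compress-then-encrypt pipeline. The following sketch, with a hypothetical key and bucket, illustrates the flow in Python; on the DM Series the same work is done inside the array rather than in application code, and key management is deliberately left out.

```python
import zlib

import boto3
from cryptography.fernet import Fernet  # symmetric encryption; key handling omitted here

# Hypothetical key and client for illustration only.
key = Fernet.generate_key()
fernet = Fernet(key)
s3 = boto3.client("s3")

def push_to_cloud(payload: bytes, bucket: str, obj_key: str) -> None:
    """Compress, then encrypt, so the data is smaller and never leaves in the clear."""
    compressed = zlib.compress(payload)       # data reduction shrinks storage and transfer volume
    ciphertext = fernet.encrypt(compressed)   # stays encrypted over the wire and at rest in the cloud
    s3.put_object(Bucket=bucket, Key=obj_key, Body=ciphertext)

def pull_from_cloud(bucket: str, obj_key: str) -> bytes:
    """Reverse the pipeline when data is repatriated."""
    ciphertext = s3.get_object(Bucket=bucket, Key=obj_key)["Body"].read()
    return zlib.decompress(fernet.decrypt(ciphertext))
```

Because compression happens before the data leaves, the saving applies to the transfer as well as to the stored copy, which is the point the vendor is making about cost.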

Back to blocks and files

Block and file storage is a business staple, and the ThinkSystem DM Series introduces a momentous change in this field: managing both through a single, unified approach. This fits with the way organisations’ storage architectures are changing. Where organisations have to manage separate sets of arrays, maintaining different structures for block and file can mean a major capital outlay on separate management platforms.

Lenovo’s approach to unifying management has been driven partly by emerging technology. Organisations have built their data strategies on decades of technology from well-established providers that may long pre-date the cloud. But business is changing: the Cloud Industry Forum reckons 75 per cent of companies are exploring some form of business transformation. Two of the major shifts are the growing use of flash memory and the emergence of NVMe, which can deliver the performance that the new generation of workloads demands.

With the DM Series, enterprises are in a position to embrace this change: they can manage block and file arrays together, alongside newer storage and networking technologies such as flash and NVMe.

Organisations must reassess their storage. They must consider whether cloud or on-premises is the right approach for each workload. And they must look more closely at tiering in the cloud and how it will transform the way their cloud consumption is architected and managed. Above all, they will have to find new ways to manage this hybrid infrastructure effectively - breaking down the barriers between cloud and on-premises, and between block and file.

The challenge is to find a modern way of handling this storage. Flexibility is the name of the game - and any vendor that can’t offer it risks leaving you, the user, behind.

Sponsored by Lenovo
