
Big Data, OpenStack and object storage: Size matters, people

Consider your needs before rushing out and investing in new storage tech

COMMENT I’m talking about Big Data, OpenStack and object storage. In the last two days I’ve come across a couple of articles (here and here) discussing the adoption of these technologies.

Both articles draw on surveys pointing to scarce adoption of these technologies in the enterprise space. Whenever they come up, two questions arise time after time. Is it too soon? And is real interest lacking in traditional enterprises? The short answer is 'yes' to both, but I think some elaboration is necessary.

Do you have the problem?

These technologies are all designed to solve problems on a big scale:

  • Object storage = storing huge amounts of unstructured data
  • Big Data (analytics) = analysing huge amounts of data
  • OpenStack (and cloud management platforms in general) = managing huge pools of compute, networking and storage resources

The common word here is HUGE. If your problem isn’t huge, deploying these technologies is like shooting sparrows with a bazooka.

Yes, you could be interested. And yes, you could have them in a lab to better understand what they do and how you could leverage them. But at the end of the day, you’ll stick with your traditional infrastructure: your NAS for unstructured data, SQL DBs for analytics, and VMware or Microsoft virtualisation stacks with some fancy automation and provisioning tools.

That’s all you need (today), and this is why most OpenStack and Big Data analytics infrastructures are still proofs of concept (PoCs) in the lab.

It’s just too soon

In some cases, data (and infrastructure) growth is heading quickly towards the HUGE territory mentioned above. It’s just a matter of time, and if you don’t want the infrastructure to outgrow your IT team, you’ll be looking at these technologies in the not-too-distant future.

In fact, if you want each person to manage petabytes instead of terabytes, or thousands of VMs instead of hundreds, you will need something other than what you are used to.
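To give an idea of what that "something other" looks like, here is a minimal sketch of driving OpenStack programmatically with the openstacksdk Python library rather than clicking through a console. The cloud name, image, flavour and network are placeholders of my own, and it assumes credentials already live in a clouds.yaml file.

```python
# Minimal sketch: booting a batch of VMs through the OpenStack API with
# openstacksdk. The cloud entry, image, flavour and network names below are
# hypothetical placeholders, not taken from any real environment.
import openstack

conn = openstack.connect(cloud="mycloud")  # assumes a clouds.yaml entry

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Launch 50 identical servers in a loop; doing the same by hand in a
# traditional virtualisation console is exactly the toil that stops scaling.
for i in range(50):
    conn.compute.create_server(
        name=f"worker-{i:03d}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
```

The point isn’t the specific tool; it’s that at thousands-of-VMs scale everything has to be scripted or automated, which is precisely what cloud management platforms are built for.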

At the same time, if your organisation is not experiencing exponential growth in data and compute needs, but the trend is more linear, each new hardware generation will probably be enough to avoid structural changes.

Furthermore, incremental updates to legacy technologies (such as adding in-memory capabilities to a traditional RDBMS) can provide some extra juice, and will still be cheaper than starting from scratch with next-generation technology plus the investment needed to retrain people in your organisation.

But you're probably already using them

On the other hand, most of us (both consumers and enterprises) are already using the technologies mentioned in this article. In fact, many of the modern solutions we are adopting in our organisations are built on top of them.

Take object storage as an example. Somewhere in your organisation there is a sync-and-share solution, a cloud storage gateway of some sort, or backups being sent to the cloud. In all these cases, even when the front-end is installed locally, you are already leveraging object storage at the back-end.
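To make that concrete, here is a rough sketch of what such a back-end interaction looks like, assuming an S3-compatible object store and the Python boto3 client. The endpoint, bucket name and credentials are placeholders of my own; your gateway or backup tool is doing something equivalent under the covers.

```python
# Minimal sketch: what a backup tool or cloud gateway does behind the scenes
# when it "sends data to the cloud": a PUT against an S3-compatible object
# store. Endpoint, bucket and credentials below are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Upload tonight's backup as a single object; the store handles durability,
# replication and capacity, so there is no filesystem or LUN to manage.
s3.upload_file("backup.tar.gz", "nightly-backups", "backup.tar.gz")

# Listing objects is a flat key/value lookup, not a directory walk.
for obj in s3.list_objects_v2(Bucket="nightly-backups").get("Contents", []):
    print(obj["Key"], obj["Size"])
```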

It’s likely that, even if you add up all of the data managed by these applications, it would still be cheaper to buy a service than to build a new on-premises infrastructure. But have you ever actually run the numbers?

In the adoption charts of the surveys mentioned at the beginning of this article, all those (numerous) companies accessing the same object storage platform from a single service provider are counted as one – even if that SP runs a multi-petabyte installation serving thousands of tenants.
