Red Hat to mash up KVM hypervisor and Gluster file system

Shadowman, Chipzilla kick in dough to 10gen for MongoDB stake

Red Hat bought its way into server virtualization by acquiring Qumranet and gave the world the KVM hypervisor, commercialized as Red Hat Enterprise Virtualization. Several years later, it bought its way into clustered file systems by eating Gluster and commercializing its eponymous file system as Red Hat Storage Server. And now the company is mashing them up so they can run side-by-side on the same clusters, uniting compute and storage on commodity boxes.

Red Hat hosted a webcast on Wednesday to talk about the momentum behind the open source Gluster project and Red Hat Storage Server 2.0, the latter of which launched in June of this year and started shipping in July. On the webcast, Ranga Rangachari, general manager of Red Hat's storage business unit, said that Gluster has racked up over 160,000 downloads and that the community of developers and users of the software (measured by people, not installs) has grown 160 per cent in the year since Shadowman shelled out $136m to acquire Gluster.

Rangachari added that the company now has over 100 proofs of concept up and running, and that it is working on getting around 30 channel partners up to speed on peddling the RHSS product as an alternative to other clustered file systems and disk arrays.

That, however, would seem to be a tall order, and will be much more difficult than getting the key IT vendors to support its Enterprise Linux operating system or Enterprise Virtualization hypervisor. The reason is simple enough: the key server makers who push these two Red Hat products have their own storage businesses to protect.

Getting whitebox king Super Micro to sell the commercialized Gluster clustered file system is easy enough, and so is getting Synnex, which among other things makes the Open Compute servers used by Facebook in its data centers.

Sirius Computer Solutions and Mainline Information Systems, however, are two big IBM server resellers, so it is a bit surprising to see them peddling RHSS. And that HP is on the list of 30 partners getting ready to peddle it is surprising only until you realize how desperate HP's software business is to boost sales and profits.

The other companies cited by Red Hat are smaller and probably not known to most of us: CityTech, Groupware Technology, Carasoft, International Integrated Solutions, HighVail, DLT Solutions, GC Micro, Software By Design, ShadowSoft, Sigma Solutions, Unilogik Systems, and Abtech Systems were on the short list.

That's not a slam on them or Gluster. It is just that with around two-thirds of Red Hat's revenues driven by channel partners, it's hard to imagine IBM, Dell, HP, Fujitsu, and the other key server players that have a strong desire to sell storage hardware and software enthusiastically embracing RHSS, except in those cases where customers demand it.

That could well happen, of course, despite some of the performance issues that customers have groused about with the Gluster file system, which runs on x86 servers – with or without RAID controllers – and on top of ext3, ext4, XFS, and other file systems on each server node.

Simply put, GlusterFS takes all of those individual file systems running on server nodes in a cluster and exposes them as a single global namespace that you can mount over either NFS or CIFS. With the 2.0 release, Red Hat wove in the "Swift" object storage APIs from the OpenStack cloud control freak, so it can now speak in objects as well as in files.
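For the curious, stitching those per-node file systems into one namespace is done from the Gluster command line. Here is a rough sketch of a two-node setup – the hostnames, brick paths, and replica count are invented for illustration:

```shell
# Illustrative only: server names and brick paths are hypothetical.
# Each brick is just a directory on a node's local ext4/XFS file system.
gluster peer probe server2                     # run on server1: add server2 to the trusted pool

# Aggregate one brick per node into a single replicated volume
gluster volume create gv0 replica 2 \
    server1:/bricks/gv0 server2:/bricks/gv0
gluster volume start gv0

# Clients then see one global namespace, mountable with the native client...
mount -t glusterfs server1:/gv0 /mnt/gluster
# ...or over plain NFS via Gluster's built-in NFS server
mount -t nfs -o vers=3 server1:/gv0 /mnt/gluster
```

The native FUSE client talks to all of the storage nodes directly, while an NFS mount funnels traffic through whichever server you name in the mount command.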

RHSS is equipped with the same oVirt virtualization management console that is used for RHEV, which is in tech preview in the 2.0 release, as is the ability to pipe RHSS into the Hadoop Distributed File System (HDFS) or to replace HDFS with RHSS entirely.
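The HDFS-replacement trick works through a Hadoop file system shim. As a sketch of the idea – the class and property names below come from the glusterfs-hadoop plugin and may differ between versions, and "server1" and "gv0" are made up – you point Hadoop's core-site.xml at a Gluster volume instead of HDFS:

```shell
# Sketch only: write out a fragment to be placed inside the <configuration>
# element of core-site.xml, so MapReduce jobs read and write a Gluster
# volume rather than HDFS. Names are illustrative, not gospel.
cat > /tmp/gluster-core-site.fragment <<'EOF'
<property>
  <name>fs.glusterfs.impl</name>
  <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
</property>
<!-- further properties name the Gluster volume, a server to fetch the
     volume layout from, and a local mount point for the shim to use -->
EOF
```

The point is that MapReduce jobs keep speaking the Hadoop FileSystem API while the bytes land in GlusterFS, which is what lets RHSS stand in for HDFS without rewriting the jobs.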

Rangachari said on the webcast that RHSS 2.0 integration with RHEV 3.1 had just entered beta testing; no word on when that will be production grade or precisely what "integration" will mean. But longer term, Rangachari said, Red Hat is working to make it possible for RHSS and RHEV to run on the same clusters at the same time, with virtual machine containers for compute and storage and some of the processing capacity used to drive the RHSS file system. Right now, you have to run virtualized compute on one cluster and the file system on an entirely different cluster, with a network linking them.

If Hadoop has taught us anything, it is that getting compute and storage on the same physical devices can substantially boost performance.

RHSS is available out on Amazon's EC2 compute cloud as well, and the interesting thing about that is that you can use GlusterFS (what's wrong with that name?) as an overlay atop Amazon's Elastic Block Storage to provide some scalability and resilience across those EBS volumes.
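In practical terms, that overlay is just a Gluster replicated volume whose bricks live on EBS volumes attached to separate EC2 instances. A sketch, with invented instance names and device paths:

```shell
# Hypothetical: two EC2 instances, each with an EBS volume attached as /dev/xvdf.
# On each instance, format the EBS volume and use it as a Gluster brick.
mkfs.xfs /dev/xvdf
mkdir -p /bricks/ebs0
mount /dev/xvdf /bricks/ebs0

# From one instance, replicate across both so losing an EBS volume
# (or a whole instance) does not lose the data.
gluster peer probe ec2-node2
gluster volume create ebsvol replica 2 \
    ec2-node1:/bricks/ebs0 ec2-node2:/bricks/ebs0
gluster volume start ebsvol
```

Every write then lands on two EBS volumes, which is where the resilience comes from – and presumably also where any performance penalty comes from.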

No word on what performance penalty this brings to EBS, or what it costs to do this compared to running RHSS on internal clusters. The important thing as far as Shadowman is concerned is that if you run RHSS out there on Amazon's EBS and do the same thing in your own data center, you can move data back and forth between the two.

In a separate – but possibly in the long run related – announcement, Red Hat today invested an undisclosed sum into 10gen, the creator of the MongoDB NoSQL data store. Mongo closed its Series E funding round back in May for $42m, bringing its haul over the past three years to $73.4m. The company got some money in September from In-Q-Tel, the investment arm of the CIA, and now Red Hat and Intel are giving it some more money – although they would not say how much.

It is rare for a company to have received so many rounds of funding before either going public or being eaten, but 10gen is at a unique place in the ramp of Hadoop and other big data tools, and companies are eager to invest.

Intel almost certainly does not want to buy 10gen, but Red Hat might – and it could be that an investment was the only way to get a look at the 10gen books.

Moreover, Red Hat is supporting MongoDB on its OpenShift platform cloud, so it needs to keep strong ties with its partners. With the kind of multiples that a big-data player could command out there, 10gen would come at a pretty hefty price – easily several hundred million dollars, perhaps more, depending on the insanity of the market and the revenue and profit numbers 10gen can show.

It is not hard to imagine Larry Ellison using the Oracle checkbook to be a spoiler here. ®
