Will tech titans SWALLOW upstart where Apple guru Steve Wozniak works?

Storage blogger gives his take on Primary Data

Woz with Apple II, photo: Gavin Clarke

Comment I spent just an hour with data centre virtualisation upstart Primary Data (set up by Fusion-io's founders) during an IT press tour, but I came away both intrigued and puzzled.

It has the best vision and ideas I've seen in years, yet the two-year-old firm's vision seems incredibly hard to achieve. Everything is so immature that only faith, boatloads of money, and the perfect strategy (all together) will lead this company to success.

Thinking about Primary Data


Primary Data concept

The company already has $60m in funding, an A+ team of storage personalities, and plenty of marketing muscle (with all due respect, I can't think otherwise when people like Apple brain Steve Wozniak are involved), plus three R&D sites. It came out of stealth mode just one year after its inception, with a bunch of PoCs and a (probably still immature) product due out very soon.

There are only two possibilities here: either these guys know exactly what they’re doing, or they're totally out of their minds.

Thinking about Nicira

First things first, speaking purely about marketing, Primary Data is adamant that "this is not a software-defined product". From my point of view it is, but it is so advanced (and better than anything we are used to) that talking about SDS would only diminish the value of the proposition.

The best way I know to describe Primary Data is to compare it with software-defined networking company Nicira. Nicira was acquired by VMware for $1.2bn a while ago; it pioneered SDN and its technology is now the basis of VMware NSX.

I'm going to borrow some of the basic concepts behind NSX to explain what Primary Data does, even if in most cases this is an over-simplification.

As with Nicira/NSX, there are three major components: the protocol, the virtual switch, and the controller. Translated into Primary Data terms, these are pNFS (comparable to OpenFlow), the data hypervisor (comparable to Open vSwitch), and the Data Director (comparable to the NSX controller).
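The three-way analogy above can be jotted down as a simple lookup. This is purely illustrative shorthand for the comparison in this article, not any real product API:

```python
# Illustrative mapping of the NSX -> Primary Data analogy described above.
# All names are taken from the article's comparison, not from product code.
NSX_TO_PRIMARY_DATA = {
    "OpenFlow": "pNFS",                 # the protocol
    "Open vSwitch": "data hypervisor",  # the per-server virtual switch
    "NSX controller": "Data Director",  # the central control plane
}

def primary_data_equivalent(nsx_component: str) -> str:
    """Return the Primary Data counterpart of an NSX/Nicira component."""
    return NSX_TO_PRIMARY_DATA[nsx_component]
```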

The functionality is also very similar:


All the storage virtualisation happens in the data hypervisor, which is installed on every server. This is a very complex piece of software indeed: it is a kernel driver, and it also handles all the protocol conversion (file/file, file/block, file/object). pNFS "encapsulates" the traffic and is used for transport between data hypervisors. (There is much more complexity here, but you would first need to understand how pNFS works to go deeper, and that is not the aim of this post.)
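To make the protocol-conversion role concrete, here is a minimal sketch of a per-server shim that routes front-end I/O to different kinds of back end. Every name and interface here is invented for illustration; the real component is a kernel driver, not Python:

```python
# Hypothetical sketch of the "data hypervisor" role described above:
# one front-end protocol on the server side, translated to whatever
# the backing store speaks (file, block, or object).
from dataclasses import dataclass

@dataclass
class Backend:
    kind: str  # "file", "block", or "object" -- illustrative only

class DataHypervisor:
    """Per-server shim: picks the conversion path for a front/back pair."""

    CONVERTERS = {
        ("file", "file"): "pass-through",
        ("file", "block"): "file-to-block translation",
        ("file", "object"): "file-to-object translation",
    }

    def route(self, frontend: str, backend: Backend) -> str:
        try:
            return self.CONVERTERS[(frontend, backend.kind)]
        except KeyError:
            raise ValueError(f"unsupported pair: {frontend}/{backend.kind}")
```

The point of the sketch is only that the conversion decision lives on each server, while the actual data travels over pNFS between data hypervisors.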

The Data Director is a policy-based metadata controller in charge of all data positioning, movement, and so on. As with Nicira, all the magic, the potential, and the money are here.
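A policy-based metadata controller of this kind boils down to matching data against objectives and picking where it should live. The sketch below invents tier names, numbers, and the policy shape purely for illustration; nothing here comes from Primary Data's actual product:

```python
# Hypothetical sketch of policy-based data placement: given objectives
# (minimum IOPS, maximum cost) and a catalogue of storage tiers, choose
# the cheapest tier that satisfies the policy. All values are made up.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    iops: int
    cost_per_gb: float

@dataclass
class Policy:
    min_iops: int
    max_cost_per_gb: float

def place(policy: Policy, tiers: list[Tier]) -> Tier:
    """Return the cheapest tier meeting the policy's objectives."""
    candidates = [
        t for t in tiers
        if t.iops >= policy.min_iops and t.cost_per_gb <= policy.max_cost_per_gb
    ]
    if not candidates:
        raise ValueError("no tier satisfies the policy")
    return min(candidates, key=lambda t: t.cost_per_gb)
```

The controller itself never touches the data path; it only decides placement and leaves the I/O to the data hypervisors, which is exactly the control-plane/data-plane split the Nicira analogy rests on.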

As with Nicira and OpenFlow, if a storage vendor supports pNFS natively (and seriously), I'm sure there could be some magic here ... but at the moment that's only speculation.

There are other important pieces in Primary Data's architecture but, for now, I think this is enough to give the basic idea.

