
Block storage is dead, says ex-HP and Supermicro data bigwig

Recent consolidations show ‘distinct lack of imagination’

Interview Block storage is dead, object storage can be faster than file storage, and storage-class memory will be the only local storage on a server. So said Robert Novak ... but who is he?

Novak was until recently a Distinguished Technologist in HP's Hyperscale Business Unit for servers, and has had an interesting employment history. He was Director of Systems Architecture at Nexenta Systems from April 2012 to December 2014, where he introduced NexentaEdge at VMworld 2014: a scale-out storage architecture that provides a global name space across a cluster of clusters while offering global inline deduplication, dynamic load balancing, capacity balancing and more.

Before that he was Director of Enterprise Servers at Supermicro from July 2007 to April 2012. His CV also includes an eight-year stint at Sun, two years at MIPS and six years at Pyramid. Novak also wrote Software Defined Data Centers for Dummies, which was published last year.

He is now seeking funding for a startup he has been working on since July, and has filed two patents covering new ways of managing object storage.

We asked Robert a series of questions to explore his thinking, and hope you'll find the answers as intriguing as we did.


El Reg Robert, why is block storage dead and what has that to do with Hollerith cards?

Robert Novak I have been working in the storage industry for a very long time. I used to teach second-year computer science students about the Unix File System and how it used inodes (now called metadata) to track where a file was placed on the blocks of a disk drive.

For a recent piece of work looking at new file systems, I started my research by collecting every book on storage and file systems I could find in print.

Each book begins with a description of the "Unit Record Device". Very few of your readers are old enough to remember using one, but in the heyday of the IBM mainframe it was known as the 80-column punched card. The card was actually a revamping of an older technology, the Hollerith card, which had its roots in punched railway tickets.

Hollerith punch card

The "unit record" was too small to keep as separate records on storage devices (even for tape) so that the unit records were collected into groups of records called "blocks". So why is this relevant? Well it has to do with the first large application of the Hollerith card. In 1890 the US Census bureau coded all of the census data onto Hollerith cards and then used sorting machines to tabulate and sort the data.

That is why I contend that the "block" storage that we use for computers is 125 years old.

El Reg Is object storage based on underlying file storage, and how did that come about?

Robert Novak Most object stores started life by storing objects as a collection of files. Some object stores manage objects directly on top of blocks in their own file system, but most are built on top of file storage and use separate spaces within it to keep the metadata (name of object, date of creation, owner, etc) apart from the data (picture, video, document). This layering is illustrated below:

Object store layering
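To make that layering concrete, here is a minimal sketch, in Python, of how an object's data and its metadata can live in separate spaces on an underlying file system. The paths and helper names are hypothetical, not code from any actual object store:

```python
# A minimal sketch (not any particular product's code) of an object store
# layered on an ordinary file system: each object's data and its metadata
# are kept in separate files under separate directory trees.
import json
import os
import time

STORE_ROOT = "/var/objstore"          # hypothetical root on the underlying file system
DATA_DIR = os.path.join(STORE_ROOT, "data")
META_DIR = os.path.join(STORE_ROOT, "meta")

def put_object(name: str, payload: bytes, owner: str) -> None:
    """Write the payload to one file and the metadata (name, date, owner) to another."""
    os.makedirs(DATA_DIR, exist_ok=True)
    os.makedirs(META_DIR, exist_ok=True)
    with open(os.path.join(DATA_DIR, name), "wb") as f:
        f.write(payload)
    metadata = {"name": name, "created": time.time(), "owner": owner, "size": len(payload)}
    with open(os.path.join(META_DIR, name + ".json"), "w") as f:
        json.dump(metadata, f)

def get_object(name: str) -> tuple[bytes, dict]:
    """Read the data and its metadata back from their separate files."""
    with open(os.path.join(DATA_DIR, name), "rb") as f:
        payload = f.read()
    with open(os.path.join(META_DIR, name + ".json")) as f:
        metadata = json.load(f)
    return payload, metadata
```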

El Reg How will key/value storage and direct disk addressing improve that?

Robert Novak Let's talk about key/value storage first. In 2013, Seagate announced plans to build key/value storage devices, the "Kinetic" drive, and actually started shipping those drives a year later, in 2014.

With these drives, you don’t need to know anything about the size of the drive, the size of the blocks of storage on the drive, or where on the drive the data is actually stored.

All you need to know is the "key" (up to 4096 bits in the Kinetic model). This is politically incorrect and no disrespect is intended, but I sometimes refer to it as the Chinese Laundry model of storage. You take your clothing to the Chinese Laundry store and drop it off for cleaning. The proprietor gives you a ticket with a number on the ticket.

A few days later you return to the laundry to reclaim your clothing (value), but you forgot your laundry ticket (key). The proprietor says, "No tickee, no laundry".

Key/value drives work in a similar fashion, except that instead of the proprietor giving you a ticket (key), you create your own globally unique key for the data.

The difference this makes is that the host server knows nothing about WHERE on the device the data is stored. It does not build dependencies on the data's location the way other file systems do. That type of dependency is what led to the Block Pointer Rewrite problem that impedes the adoption of shingled magnetic recording for so many file systems.

There is no "address" of the data in a Key/Value drive. The "address" of the drive is the one (or more) IP addresses that are assigned to the drive. However, using the right broadcast or multicast techniques, you don't even need to know the address of the drive. We will return to that later. Another way to put this is that Key/Value represents a form of delayed binding.
