The God Box: Searching for the holy grail array

Latency-killing super spinner

It's so near we can almost smell it: the Holy Grail storage array combining server data location, solid state hardware speed, memory access speed virtualisation, and the capacity, shareability and protection capabilities of networked arrays. It's NAND, DAS, SAN and NAS combined; the God storage box – conceivable but not yet built.

We can put the Lego-like blocks together in our minds. The God box building blocks are virtualised servers; PCIe flash; flash-enhanced and capacity-centric SAN and NAS arrays and their controller software; atomic writes; flash memory arrays; and data placement software. The key missing pieces are high-speed (PCIe-class) server-array interconnects and atomic writes: direct memory-to-NAND I/O.

The evil every storage hardware vendor is fighting is latency. Applications want to read and write data instantly. The next CPU cycle is here and the app wants to use it, not wait for I/O. Servers are becoming super-charged CPU cycle factories, and data access I/O latency is like sets of traffic lights on an interstate highway: they just should not be there.

Killing latency

I/O latency comes from three places, broadly speaking: disk seek times, network transit time, and operating system (O/S) I/O subsystem overhead. The disk seek time problem has been cracked; we are transitioning to NAND flash instead of spinning disk for primary data: the hot, active data. Disk remains the most effective medium for large-scale data, particularly if it is deduplicated; flash cannot touch it there.
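
To put rough numbers on those three latency sources, here is a back-of-the-envelope sketch in Python. The microsecond figures are assumptions chosen for illustration, not measurements of any particular kit; the point is that once the seek time goes, the network hop and the O/S I/O stack are what's left on the bill.

    # Illustrative I/O latency budget (all figures in microseconds, assumed).
    LATENCY_US = {
        "hdd_seek":    7000,   # average seek plus rotational delay (assumed)
        "flash_read":   100,   # NAND read through a decent controller (assumed)
        "network_hop":  300,   # SAN/NAS round trip (assumed)
        "os_io_stack":   50,   # kernel block/SCSI stack overhead per I/O (assumed)
    }

    def total(components):
        return sum(LATENCY_US[c] for c in components)

    hdd_array   = total(["hdd_seek", "network_hop", "os_io_stack"])    # 7350 us
    flash_array = total(["flash_read", "network_hop", "os_io_stack"])  #  450 us

    print(f"Networked HDD array read  : {hdd_array} us")
    print(f"Networked flash array read: {flash_array} us")
    outside_media = 100 * (flash_array - LATENCY_US["flash_read"]) / flash_array
    print(f"Flash array time spent outside the media: {outside_media:.0f}%")  # ~78%

Swap disk for flash for the primary data and the media stops being the problem; what remains is the network hop and the O/S stack.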

There have been four ways of doing this:

  • We are seeing SSDs slotted into hard disk drive (HDD) slots, with data placement software, like FAST VP, automatically moving data between HDD and SSD as its 'access temperature' rises and falls (a sketch of this sort of tiering policy follows the list).
  • We are also seeing flash used as an array controller cache: think NetApp's Flash Cache and EMC's FAST Cache.
  • We are seeing newly architected flash-and-HDD arrays which do a better job, they say, of using flash storage and HDD capacity together. Think NexGen Storage, Nimble Storage and Tintri.
  • We are seeing all-flash arrays which abandon disks altogether and rely on deduplication, MLC flash and flash-focused, rather than HDD-focused, controller software to bring per-GB cost close to that of disk drive arrays. Think Nimbus, WhipTail, Violin Memory, and startups like Pure Storage, ExtremIO and SolidFire.
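
As flagged in the first bullet, the data placement logic amounts to counting accesses and shuffling blocks between tiers. Here is a minimal Python sketch of such a policy; the thresholds, the decay step and the Block/Tierer names are assumptions for illustration, not how FAST VP or any other vendor's product actually works.

    from dataclasses import dataclass

    PROMOTE_HITS = 10   # hits per window before a block earns an SSD slot (assumed)
    DEMOTE_HITS  = 2    # at or below this it falls back to HDD (assumed)

    @dataclass
    class Block:
        tier: str = "hdd"
        hits: int = 0

    class Tierer:
        def __init__(self):
            self.blocks = {}              # LBA -> Block

        def record_io(self, lba):
            self.blocks.setdefault(lba, Block()).hits += 1

        def rebalance(self):
            """Run periodically: re-tier by access temperature, then decay the counters."""
            for blk in self.blocks.values():
                if blk.hits >= PROMOTE_HITS:
                    blk.tier = "ssd"
                elif blk.hits <= DEMOTE_HITS:
                    blk.tier = "hdd"
                blk.hits //= 2            # decay so stale heat fades (assumed policy)

    t = Tierer()
    for _ in range(12):
        t.record_io(42)                   # hot block, gets promoted
    t.record_io(7)                        # touched once, stays on HDD
    t.rebalance()
    print(t.blocks[42].tier, t.blocks[7].tier)   # -> ssd hdd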

The big "but" with these four approaches is that network latency still exists – as does the I/O latency from the O/S running the apps. These four approaches only go part of the way on the journey to the God Box.

Storage and servers – come together

Network latency is vanquished by putting the storage in the server or the server in the storage. Putting HDD storage in the server, the direct-attached storage (DAS) route, gets rid of network latency, but disk latency is still present. We'll reject that. Disks are just ... so yesterday; it has to be solid state storage.

There are two approaches to server flash right now: use the flash as a cache or as storage. PCIe flash caches are two a penny: think EMC VFCache (the latest), Micron, OCZ, TMS, Virident and others. You need software to link the cache to the app and you need a networked array to feed the cache with data. This is only a halfway house again because cache misses are expensive in latency terms.

If it's a read cache then it's a "quarterway" house, as writes are not cached. If it doesn't work with server clusters, high availability, vMotion and/or server failover then it's an "eighthway" house. Most of these issues can be fixed, but there is no way a cache can guarantee cache misses won't happen; it's the nature of caching. No matter that caches connected to back-end arrays can offer enterprise-class data protection; the name of the game is latency-killing and caching doesn't permanently slay the many-headed latency hydra. So the flash has to be storage.
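
The arithmetic behind that judgment is simple: expected read latency is hit rate times hit cost plus miss rate times miss cost, and the miss term dominates even at very high hit rates. A minimal sketch, with assumed round-number figures:

    HIT_US  = 100    # PCIe flash cache hit (assumed)
    MISS_US = 5000   # miss serviced from the back-end networked HDD array (assumed)

    def effective_read_latency(hit_rate):
        """Expected latency = hit_rate * hit cost + (1 - hit_rate) * miss cost."""
        return hit_rate * HIT_US + (1 - hit_rate) * MISS_US

    for h in (0.90, 0.95, 0.99):
        print(f"hit rate {h:.0%}: effective read latency {effective_read_latency(h):.0f} us")

Even at a 99 per cent hit rate the average sits at about 149 microseconds, half as much again as raw flash.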

Fusion-io is the leading exponent of putting flash as storage into servers. What about putting servers in storage? DataDirect says it does that already, with filesystem applications hosted in its arrays. Okay, we'll grant the principle but not the actuality, as no-one is running serious business applications in DDN arrays yet.

EMC is saying that virtualised server apps will be vMotioned to server engines in its VMAX, VNX and Isilon arrays. Okay. That gets rid of network latency and, if the arrays are flash-based with flash-aware controllers rather than bodged disk-controller software, of drive array latency too.

EMC is serious and vocal about this approach so we must pay it heed. And we must note that the flash storage tier can be backed up with massive HDD array capacity and protection features. This is a very attractive potential mix of features, although only for servers in the array (I'm hinting at server supply lock-in here), and only if it becomes mainstream and if it can get rid of the server O/S I/O subsystem latency.
