Cry havoc and let slip the SSD dogs of war
Meet a disruptive technology - for storage vendors
SNW Fusion-io has a storage technology that, if it succeeds, could wreak havoc amongst the business models of mid-range drive array storage vendors.
The thinking goes like this: Fusion-io makes a solid state drive (SSD) that connects directly to a server's PCIe bus rather than using traditional storage interfaces such as Fibre Channel (FC), SAS or SATA. The aim is to close the yawning gap that has opened up between ever-increasing processor speeds and static drive array access times: instead of sitting behind a slow storage network interface, the SSD is hooked straight into the server's PCIe bus structure.
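As a rough back-of-the-envelope sketch of that gap (the latency figures below are illustrative orders of magnitude, not vendor numbers), a disk seek costs a processor millions of wasted cycles while flash on the PCIe bus costs thousands:

```python
# Illustrative (order-of-magnitude) access latencies - assumed, not measured.
# The point: a disk access costs a CPU millions of cycles, flash thousands.
CPU_CLOCK_HZ = 3e9  # assume a ~3 GHz core

latencies_s = {
    "DRAM": 100e-9,           # ~100 ns
    "PCIe flash SSD": 50e-6,  # ~50 us, no storage-network hops
    "FC/SATA SSD": 100e-6,    # ~100 us, extra protocol bridging
    "15K rpm disk": 5e-3,     # ~5 ms seek plus rotational delay
}

for medium, t in latencies_s.items():
    cycles = t * CPU_CLOCK_HZ  # CPU cycles spent waiting per access
    print(f"{medium:>15}: {t * 1e6:>8.1f} us = {cycles:>13,.0f} cycles")
```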
Apparently no other supplier has PCIe-connecting SSD controller technology. STEC has Fibre Channel (FC) interfacing SSD controllers and other suppliers, such as Intel or Samsung use SATA.
Fusion-io's chief technology officer, David Flynn, says three methods have been used to bridge the gap. Firstly, suppliers threw extra RAM into the servers, but there is never enough. Secondly, they threw lots of drive spindles at the problem. Thirdly, they created complex and clever software to make those multiple spindles perform as fast as possible while still utilising the disks as fully as possible.
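Some rough arithmetic gives a sense of the second method (the IOPS figures are assumed, for illustration only): matching the random-I/O rate of a single flash device takes hundreds of fast spindles.

```python
# Assumed, illustrative figures - not vendor specifications.
DISK_IOPS = 180       # a 15K rpm drive doing random I/O
SSD_IOPS = 100_000    # a PCIe flash card, order of magnitude

# Spindles an array would need to match one card on random I/O
spindles_needed = SSD_IOPS // DISK_IOPS
print(f"{spindles_needed} spindles to match one flash card")
```

That spindle count, and the software needed to keep it busy, is exactly the infrastructure a mid-range array is built around.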
This storage software infrastructure constitutes the bulk of the value in mid-range storage arrays, the ones with two controllers and internal FC networks connecting potentially hundreds of drives to those controllers. It's this software that provides much, if not most, of the added value above and beyond the commodity spinning disk drives, and it is what Compellent, Dell (EqualLogic), EMC (Clariion), HDS, HP (EVA, LeftHand), IBM, Isilon, NetApp, Pillar, Sun, Xiotech and others charge for.
Flynn points out that the core interconnect of most of these suppliers' controllers is PCIe. The array's external FC interface is bridged to PCIe by an HBA, and then bridged back out to FCAL (FC Arbitrated Loop) by another adapter. It would be simpler, he thinks, for them to add PCIe-connected SSDs than FC- or SATA-connected ones. They would then avoid the FCAL bottleneck involved in plugging FC-interfacing SSDs into their arrays.
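A toy model of the two data paths Flynn describes, with hypothetical per-hop latencies (illustrative assumptions only), shows why cutting out the FC and FCAL hops matters:

```python
# Hypothetical per-hop latencies in microseconds - illustrative, not measured.
HOP_US = {
    "fc_hba_bridge": 20.0,  # external FC bridged to PCIe by an HBA
    "fcal_adapter": 20.0,   # PCIe bridged back out to the FCAL loop
    "fcal_loop": 30.0,      # arbitrated-loop turnaround to the drive bay
    "pcie_dma": 5.0,        # a direct PCIe DMA transfer
    "flash_read": 50.0,     # the flash access itself
}

def path_latency(hops):
    """Total one-way latency in microseconds for a sequence of hops."""
    return sum(HOP_US[h] for h in hops)

# FC-attached SSD: traverses every bridge; PCIe-attached SSD: straight DMA.
fc_ssd = path_latency(["fc_hba_bridge", "fcal_adapter", "fcal_loop", "flash_read"])
pcie_ssd = path_latency(["pcie_dma", "flash_read"])
print(f"FC-attached SSD:   {fc_ssd:.0f} us")
print(f"PCIe-attached SSD: {pcie_ssd:.0f} us")
```

Under these assumed figures most of the FC-attached latency is bridging overhead, not the flash itself.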
So why don't they do it? Two reasons, and here I'm not repeating what he said, as he can't speak for them. This is my interpretation of what he said, plus various implications I picked up on. So...
A PCIe-connected SSD does not pretend to be a disk drive, whereas an FC- or SATA-connected one does. If a storage array manufacturer plugs an FC-connected SSD into an FC HDD bay then, in one sense, nothing changes. The SSD is just a faster disk drive, and all the array supplier's added-value infrastructure software still works and is still justified.
If an array manufacturer plugs in PCIe-connected SSDs, then they are not spoof HDDs and the infrastructure needed to turn slow HDDs into a fast array is not needed. Follow the logic: if it's not needed, it can't be charged for. If the array price thereby falls, then the storage supplier's gross margin falls and less profit is made. So where is the incentive to do it?
Flynn says this is the classic innovator's dilemma. The server manufacturers, on the other hand, are not conflicted about SSDs. They simply make their servers go faster, and the end users are delighted: their applications run faster. It doesn't destroy the server manufacturers' business model at all. They can charge more for such servers.
So Fusion-io is working closely with Dell (one of its investors), HP and IBM, and we can reasonably expect that these three manufacturers will introduce products fusing commodity servers and Fusion-io's SSDs next year. They will be used to store transaction data, up to 1.28TB of it per Fusion-io SSD card, and not just the boot images and configuration data that are to be found on the Samsung SATA interface SSDs used in HP's blade servers.
These PCIe SSD-adopting servers will be able to run, Flynn says, 98 per cent of the world's databases in memory and so do for commodity servers what expensive TMS RamSans do for high-end servers.
These servers will have direct-attached SSD storage and not use networked storage. In Flynn's view, drive arrays will gradually revert to being bulk data stores, with all active data residing on silicon storage.
The implication is that putting SSDs into drive arrays as quasi-very fast hard drives is a stop-gap and not the way to go. The true SSD way is to add the stuff directly to server/controller PCIe buses. That's the Fusion-io pitch, and if Flynn is right then all the pure storage array manufacturers face a terrible problem. How can they adopt this disruptive technology without shrinking their businesses?
The hints coming out of various places are that storage array manufacturers are talking to Fusion-io now. But it is a small firm with limited partner bandwidth and would naturally find it easier to deal with less conflicted partners than storage vendors looking at adopting a technology that could destroy much of their existing added value.
Which of these vendors will think Fusion-io is right about SSD use and decide that if their lunch is going to be eaten they will eat it themselves? Will EMC, for example, conceive of OEM'ming a server from Dell, populating it with Fusion-io ioDrives and selling that as its database accelerator? If Flynn is right then that's the sort of thing EMC will have to consider as commodity flashed servers from Dell, HP and IBM start eating Symmetrix' lunch. ®