Fusion-io ups SSD ante
640GB throw down
Solid state storage maker Fusion-io has upped the ante in the SSD game, launching the ioDrive Duo, a PCI-Express peripheral card with 640GB of capacity.
The ioDrive Duo is a kicker to the company's first generation of SSDs and doubles up the capacity by cramming two modules onto the same board. The ioDrive Duo is rated at 186,000 I/O operations per second (IOPS) on reads and a surprising 167,000 IOPS on writes, both using 4KB packets. Fusion-io says the device has a latency of under 50 microseconds.
The SSD has multi-bit error detection and correction electronics and a feature called Flashback that provides n+1 chip-level redundancy on the board, so if one flash chip craps out the board can heal around it. The board also supports RAID 1 mirroring if customers are willing to sacrifice capacity for higher levels of data protection. The drive comes in two flavors: one that fits in a PCI-Express 1.0 x8 slot and another that plugs into a PCI-Express 2.0 x4 slot.
The SSD is made using Samsung flash memory and is being offered in 160GB, 320GB, and 640GB capacities starting in April. It will be available in a 1.28TB capacity sometime in the second half of 2009.
Specific pricing was not available for the ioDrive Duo, according to a Fusion-io spokesperson, because the company is redoing its price list tomorrow. What the company could say is that prices for existing SSDs are coming down and that the ioDrive Duo will sell for "well under $30 per usable gigabyte."
This week, Texas Memory Systems announced its own hefty SSD, a PCI-Express x4 card that holds 450 GB of capacity and lists for $18,000. Looks like Texas Memory is going to have to cut its prices by about 25 per cent or more. ®
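For what it's worth, a back-of-the-envelope check of that figure is straightforward. The sketch below uses the $18,000 and 450GB quoted above and Fusion-io's "well under $30 per usable gigabyte" target; the rest is just arithmetic.

    # Illustrative price-per-gigabyte comparison using the figures quoted in the article.
    tms_price_usd = 18_000      # Texas Memory Systems PCI-Express card, list price
    tms_capacity_gb = 450       # capacity of that card

    tms_per_gb = tms_price_usd / tms_capacity_gb        # $40 per GB
    fusion_target_per_gb = 30                           # "well under $30 per usable gigabyte"

    cut_needed = 1 - fusion_target_per_gb / tms_per_gb  # 0.25, i.e. about 25 per cent
    print(f"TMS today: ${tms_per_gb:.0f}/GB; cut needed to match $30/GB: {cut_needed:.0%}")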
I can see plenty of use for these things in transactionally intensive systems. It's not primarily the I/O rate - at the expense of enough spindles you can get the IOPS or the data rate. What really, really matters is the latency. 50 microseconds is roughly 100 times better than physical disks can manage on uncached operations, and perhaps 20 times better than a cached operation over a typical FC SAN array.
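For illustration, here is that ratio worked through. The 50 microseconds is Fusion-io's quoted figure; the 5ms uncached disk read and 1ms cached FC SAN round trip are my own rough assumptions, not numbers from the article.

    # Rough latency ratios (disk and SAN figures are assumptions; the ioDrive figure is as quoted).
    iodrive_latency_us = 50      # Fusion-io's quoted latency, in microseconds
    disk_uncached_us = 5_000     # ~5ms for an uncached random read on a spindle (assumption)
    san_cached_us = 1_000        # ~1ms for a cached read over a typical FC SAN (assumption)

    print(f"vs uncached disk: {disk_uncached_us / iodrive_latency_us:.0f}x")  # ~100x
    print(f"vs cached FC SAN: {san_cached_us / iodrive_latency_us:.0f}x")     # ~20x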
In many cases we improve performance by throwing RAM at the database - even SSDs can't match a logical read. However, cache hit rates never reach 100% unless you can fit the whole DB in memory - you run into the law of diminishing returns. There is a further, and much more difficult, issue: startup time. You might have a nice 200GB DB cache sitting there, but populating it at startup with 8KB random reads, one block at a time, can take tens of minutes, during which time your application servers are choking on the backlog of users trying to get back on. With a few of these things sitting on PCI-Express buses that cache will fill much, much faster, and during that startup period users won't be seeing response times extended by a factor of 10 whilst the cache is warmed up.
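To put rough numbers on that warm-up claim, here is a minimal sketch. The 200GB cache and 8KB block size come from the paragraph above; the array IOPS figure is an assumption, and the ioDrive figure is the quoted 4KB read rate used loosely as a proxy for 8KB reads.

    # How long to warm a 200GB buffer cache with 8KB random reads (illustrative figures).
    cache_gb = 200
    block_kb = 8
    blocks = cache_gb * 1024 * 1024 // block_kb   # ~26 million reads

    array_iops = 20_000      # assumed aggregate random-read IOPS of a spindle-based array
    iodrive_iops = 186_000   # quoted 4KB read IOPS for one ioDrive Duo, used as a rough proxy

    print(f"spindle array:   ~{blocks / array_iops / 60:.0f} minutes")   # ~22 minutes
    print(f"one ioDrive Duo: ~{blocks / iodrive_iops / 60:.1f} minutes") # ~2.3 minutes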
This sort of problem happens if you have an uncontrolled failover in an HA cluster; it also happens if you have to start a DR instance, and even Oracle RAC can suffer severe periods of "brown-outs". Also, if putting a few TB of this stuff into a big-iron server enables you to halve the amount of incredibly expensive (by PC standards) RAM you are using, it might even cost in.
However, there is an enormous problem - as these sit inside a server, they are fundamentally unsuited to shared-storage clusters. That's a big, big problem for big enterprise systems. Putting SSDs into fibre SANs introduces a major bottleneck: current arrays don't come anywhere near coping with this number of IOPS for a given amount of storage. Also, push this through a normal I/O stack in the server, FC cards, SAN switches and arrays and you are looking at latencies approaching 1ms. So current shared storage architectures introduce latency of perhaps 20x what this can do, and I rather suspect a similar proportion of the potential IOPS. Put this in an array and you might get a 10x improvement in (uncached) latency whilst the technology could do 100x, and probably something similar on IOPS.
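As an illustration of where that roughly-1ms figure might come from, here is a rough per-hop latency budget. Every component figure below is an assumption made for the sake of the argument, not a measurement.

    # Rough latency budget for a read through a conventional FC SAN stack (all per-hop figures assumed).
    stack_us = {
        "host I/O stack + FC HBA": 150,
        "SAN switch hops": 50,
        "array front end and cache lookup": 300,
        "array back end / media": 500,
    }
    san_total_us = sum(stack_us.values())   # ~1000us, i.e. approaching 1ms
    iodrive_us = 50                         # quoted ioDrive Duo latency, direct on PCI-Express

    print(f"through the SAN: ~{san_total_us}us; direct on PCIe: ~{iodrive_us}us")
    print(f"ratio: ~{san_total_us / iodrive_us:.0f}x")   # ~20x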
In the absence of a very low-latency shared-storage version of this architecture, maybe the answer is synchronous replication of databases across two machines. Do it across InfiniBand and you might see 0.5ms added per synchronous replication to a second instance of the DB. It could work, except for very write-intensive DBs where that cost bites, and it wouldn't be cheap.
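A quick sketch of what that costs per commit: the 0.5ms InfiniBand round trip is the figure suggested above, and the local and remote write latencies are assumed to be the card's quoted 50 microseconds.

    # Synchronous replication: a commit waits for the local write plus the remote acknowledgement.
    local_write_us = 50       # assumed local ioDrive write latency (the quoted device figure)
    ib_round_trip_us = 500    # ~0.5ms over InfiniBand, as suggested above
    remote_write_us = 50      # assumed remote write latency, same as local

    commit_us = local_write_us + ib_round_trip_us + remote_write_us
    print(f"replicated commit: ~{commit_us}us vs ~{local_write_us}us unreplicated")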
So how much am I quoted for 20TB of this stuff? I have the ideal app...
Come on, it's early days. I remember back in the late 80s when the PCs at our college had hard discs installed, and hefty 20MB drives at that! Perhaps it won't be all that long before the hard drives of today look as dated as punched paper tape and magnetic drums.
Never mind bigger SSD drives, how about producing more of the smaller drives so that at least one dealer in the UK actually has some? As an aside, those would be the ones that people can actually afford!