This SSD thing is really catching on
Everyone but 3PAR has an SSD adoption strategy
Sun's end-to-end SSD play: Fresh details of Sun's SSD intentions came from Jason Schaffer, its senior director for storage and networking. Sun is going to use read-optimised and write-optimised SSDs for different applications. He sees three potential flash SSD locations in its product range.
One is quasi-disk drives for storage array use. He sees both read-focussed and write-focussed SSDs here, with NAND chips managed by a controller which talks SAS to the outside world. Sun and Pillar are the first vendors to make distinctions between read and write cache uses. Sun's flash drives will come from Intel - think X25-E - Marvell and/or Samsung. Schaffer mentioned the Fishworks (fully-integrated software - it just works) project and ZFS here.
This flash forms, along with bulk SATA drives, a hybrid storage pool for ZFS, with data automatically going to the right place.
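The hybrid pool idea maps onto ZFS's existing cache and log device support: a read-optimised SSD can serve as an L2ARC cache device and a write-optimised SSD as the separate intent log, in front of bulk SATA vdevs. A rough sketch of the admin commands (device names are invented placeholders, and the exact layout is our assumption, not Sun's published design):

```shell
# Bulk capacity tier: a raidz vdev of SATA drives
# (c0t0d0 etc. are hypothetical device names)
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0

# Write-optimised SSD as a separate intent log (slog)
# to absorb synchronous writes
zpool add tank log c1t0d0

# Read-optimised SSD as an L2ARC cache device for hot read data
zpool add tank cache c1t1d0

# Check the resulting layout
zpool status tank
```

ZFS then moves data between tiers automatically: hot reads spill from the ARC into the L2ARC SSD, synchronous writes land on the slog SSD, and cold data rests on the SATA vdevs.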
Secondly, Sun could produce a JBOD chassis with flash drives in it instead of hard drives - JBOF? JBOSSD? Thirdly, Sun could produce an HBA form factor flash card, conceptually like an ioDrive from Fusion-io.
Schaffer said: "We'll see flash in every one of our systems." It's a total flash makeover for Sun. Flashed Thumper? Oh yes.
Xiotech: Chief technology officer Steve Sicola says the next-generation Emprise, the product that uses sealed canisters of drives that don't need servicing for five years, will use SSDs. The ISE (Integrated Storage Element) technology already tiers drives with 15K 3.5-inch FC, 15K 2.5-inch FC, and 7.2K 1TB FC drives. (As far as we know no other vendor uses, or can even get access to, 15K 2.5-inch FC drives and 1TB FC drives.) Again, adding a flash tier is relatively easy, and the presumption is that Xiotech will go the STEC route with Fibre Channel interface SSDs.
Xyratex: LSI said its new 7900 storage array (used as IBM's DS5000) can have flash shelves added to it in future.
The EMC flash way: Chuck Hollis, EMC's blogger extraordinaire and VP Technical Alliances, says there is an Achilles' heel in the server flash argument. If you flash-enable servers to accelerate application I/O, then what happens if a server fails? Obviously you need two servers with some kind of failover, but that then means you need to flash-enable both servers and keep two copies of everything in flash. That might be OK for four to six terabytes, but it becomes impractical if you have 400 to 600TB of data.
The best way to fix that problem is to network the storage and revert to one copy: "That's called, by the way, a storage array... Servers don't do high availability (HA) access to storage. If you want to make it HA then you have to have a storage array."
Things like replication are easier with a storage array. Storage arrays are popular for a reason, and flash in a storage array is just another medium. The fundamentals, like how to get server high availability, still apply. He says there will be a use case for server flash, as a cheaper-than-DRAM cache for example, like Fusion-io: "But it's not shared storage."
Hollis mentioned that there's talk of Iomega bringing out a multi-level cell flash storage box in a year or two.
Storage flash agreement
There is a coherent enterprise storage array vision being presented here. All the vendors agreed that in a few years' time there will be only two kinds of array shelf: active data on flash and bulk data on SATA drives. Fibre Channel and SAS performance-oriented disk drives will wither away, squeezed out. A single array controller complex will talk 6Gbit/s SAS to flash and 6Gbit/s SATA to hard disk drives, with 12Gbit/s SAS and SATA no doubt to follow.
It's curious that as physical Fibre Channel SAN fabrics are having FCoE touted as their eventual future, internal array Fibre Channel use is having its end-of-life rites prepared too. ®