Compellent - the billion-dollar storage company?
Screw the recession
Road map 2009
The whole Compellent product idea is built on having a single architecture that can scale. Soran and his team are very keen to say that they have a storage architecture of sufficient granularity (the tracking of blocks on drives) on which they can layer more and more functionality. They say that, for example, their automated placement of blocks of data on different tiers of storage is unique, based on a block's metadata and activity level. That means the company's Storage Center FC/iSCSI SAN product can adopt solid state drives (SSD) with no real upset at all, and with no need to manually set up volumes based on an SSD tier of storage in the array.
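The idea of placing blocks on tiers according to tracked activity can be sketched in a few lines. This is a hypothetical illustration only — the tier names, thresholds, and `retier` function are invented for the example and are not Compellent's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical tiers, fastest to slowest; thresholds are illustrative.
@dataclass
class Block:
    lba: int                 # logical block address
    access_count: int = 0    # per-block activity metadata
    tier: str = "sata"

def retier(blocks, hot=100, warm=10):
    """Assign each block a tier based on its recorded activity level."""
    for b in blocks:
        if b.access_count >= hot:
            b.tier = "ssd"
        elif b.access_count >= warm:
            b.tier = "fc_15k"
        else:
            b.tier = "sata"
    return blocks

blocks = retier([Block(0, 150), Block(1, 50), Block(2, 2)])
print([b.tier for b in blocks])  # ['ssd', 'fc_15k', 'sata']
```

Because placement is decided per block rather than per volume, adding an SSD tier is just another label the policy can assign — which is the point Soran's team is making.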
Customers are already testing Compellent arrays with STEC SSDs plugged into drive slots in the Xyratex-sourced array enclosures. Compellent marketing head Bruce Kornfeldt said: "You will not need a lot of SSDs to get really good performance with Compellent. Everyone else will have to have 8 or 10 SSDs. We'll only have to have four (and) existing customers can slot an SSD into a drive slot and, with the latest firmware, their controller will recognise it."
Coming later this year is a hoped-for qualification from Cisco for Compellent storage hooked up to Cisco's UCS virtualised and unified blade server and networking system. In probable connection with this, Fibre Channel over Ethernet (FCoE) support is coming.
It seems that both Emulex and QLogic are possible sources of this technology, with QLogic iSCSI and FC cards being shipped by Compellent currently. Emulex points out that its Gen 2 converged network adapter (CNA) card will be half the size of the Gen 1 card and include iSCSI, TCP/IP offload, FCoE, 10gigE, RDMA, and iWARP - think host clustering - in a single ASIC. The company expects that the equivalent QLogic product will only have FCoE and 10gigE. Emulex says it can offer better use of server and controller ports.
SAS drive technology will be introduced into Storage Center in the third quarter of this year. There will still be the option of a Fibre Channel link to host servers, and that will rise to 8Gbit/s speed. A card swap will be all that is needed to accomplish that, but the direction is for SAS use inside the arrays, with 6Gbit/s SAS 2 helping out. Initially, the maximum number of SAS drives in a Compellent array will be limited, probably to around a hundred.
Virtual controller ports will be introduced, abolishing the current need for reserve ports. Together with the SAS drive option, this will provide for a lower-cost entry-level product. It will also enable port failover within a controller.
There will be a new version of the snapshot facility, known as Replay Manager 5.0, and a parallel release of the Enterprise Manager software, also version 5.0. Consistency groups will be added, and it will be possible to map hundreds of servers, a thousand or more perhaps, in a single operation.
With Replay Manager 5.0, it should be possible to achieve a Recovery Point Objective (RPO) of zero. Marty Sanders, VP for technology services, said: "Users will be able to move volumes between Storage Centers with no downtime - they'll have live transparent volume movement between storage centres (like vMotion) with no disruption and no changes to running apps."
Compellent will offer a Portable Volume facility, a means of jump-starting replication by avoiding a lengthy initial online session to transfer the data. Instead, you copy and encrypt it to one or more 1.5TB USB-connected external drives, which are physically carried over to the remote site. Then only the changes, the deltas, are replicated online.
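The seed-then-delta flow can be sketched as follows. The function names (`seed`, `deltas`, `apply_deltas`) and the dict-of-blocks model are invented for illustration; this is not Compellent's Portable Volume code, just the principle of shipping the bulk copy offline and sending only changed blocks over the wire afterwards.

```python
# Hypothetical sketch: a volume modelled as {block_number: data}.

def seed(source):
    """Full copy written to the portable drive (carried to the remote site)."""
    return dict(source)

def deltas(source, last_synced):
    """Blocks changed since the seed -- the only data sent online."""
    return {k: v for k, v in source.items() if last_synced.get(k) != v}

def apply_deltas(replica, changes):
    replica.update(changes)
    return replica

source = {0: b"a", 1: b"b", 2: b"c"}
remote = seed(source)            # shipped physically on the USB drive
source[1] = b"B"                 # changes made after seeding
source[3] = b"d"
remote = apply_deltas(remote, deltas(source, remote))
assert remote == source          # replica converges using only the deltas
```

The win is in the transfer sizes: the seed can be terabytes, while the online delta traffic is only whatever changed while the drive was in transit.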
Protection is being enhanced with dual-parity RAID (RAID DP) to cope with double drive failures.
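Why two parity values let an array survive two simultaneous drive failures can be shown with a toy model: two independent parity equations give two equations in two unknowns, so both missing blocks can be solved for. Real dual-parity schemes use XOR and Galois-field (or row-and-diagonal) parity rather than integer sums; this sketch only illustrates the principle.

```python
# Toy dual parity over the integers (illustrative only).

def parities(data):
    p = sum(data)                                  # "row" parity
    q = sum(i * d for i, d in enumerate(data, 1))  # weighted parity
    return p, q

def recover_two(surviving, i, j, p, q):
    """Rebuild the two lost blocks at positions i and j from the survivors."""
    known_p = sum(d for k, d in enumerate(surviving) if k not in (i, j))
    known_q = sum((k + 1) * d for k, d in enumerate(surviving) if k not in (i, j))
    # Solve:  di + dj = s   and   (i+1)*di + (j+1)*dj = t
    s = p - known_p
    t = q - known_q
    dj = (t - (i + 1) * s) // ((j + 1) - (i + 1))
    di = s - dj
    return di, dj

data = [5, 9, 2, 7]
p, q = parities(data)
lost = data.copy()
lost[1] = lost[3] = None           # two drives fail at once
print(recover_two(lost, 1, 3, p, q))  # (9, 7) -- both blocks rebuilt
```

With single parity (RAID 5) there is only one equation, so a second failure during a rebuild is unrecoverable — the motivation for the RAID DP enhancement.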
Sanders mentioned that 10,000rpm Fibre Channel drives will probably go away. Compellent uses Seagate drives, and Seagate is going to price 10Ks at the same level as its 15,000rpm drives. There might be low-power 10K drives left in Seagate's lineup, but the direction is towards the elimination of 10K FC drives.