Scale-out SVC on the way from IBM?
Nehalem and FCoE huddle
IBM is exploring a scale-out architecture for its SAN Volume Controller (SVC), possibly involving flash memory and Nehalem processors. FCoE support may come when needed.
The SVC is a SAN-virtualising controller attached to the Fibre Channel fabric linking application servers and block-access storage arrays. It virtualises the SAN storage into a single pool and presents it to applications. With over 15,000 units in use by customers, the SVC is the most popular device for SAN virtualisation; Hitachi Data Systems' USP-V virtualising front-end array controller is in second place in the unit-ship stakes.
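The pooling idea can be sketched in a few lines: backend arrays are carved into fixed-size extents, and a virtual disk is simply an ordered list of extents drawn from anywhere in the pool. This is a minimal illustrative sketch, not IBM's actual implementation or API; the names and the 16MB extent size are our assumptions.

```python
# Minimal sketch of block-storage virtualisation: several backend
# arrays ("managed disks") are split into fixed-size extents and
# pooled; a virtual disk is an ordered list of extents drawn from
# anywhere in the pool. Names and sizes are illustrative only.

EXTENT_MB = 16  # hypothetical extent size

class StoragePool:
    def __init__(self):
        self.free_extents = []  # list of (mdisk_name, extent_index)

    def add_mdisk(self, name, capacity_mb):
        # carve a backend array into extents and add them to the pool
        for i in range(capacity_mb // EXTENT_MB):
            self.free_extents.append((name, i))

    def create_vdisk(self, size_mb):
        # allocate enough extents (ceiling division) for the virtual disk
        needed = -(-size_mb // EXTENT_MB)
        if needed > len(self.free_extents):
            raise ValueError("pool exhausted")
        return [self.free_extents.pop(0) for _ in range(needed)]

pool = StoragePool()
pool.add_mdisk("array_a", 64)
pool.add_mdisk("array_b", 64)
vdisk = pool.create_vdisk(96)   # spans both backend arrays
print(len(vdisk))               # 6 extents of 16 MB
```

The point of the exercise is the one the article makes: the application sees a single 96MB volume, while its extents actually live on two different physical arrays.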
The SVC is based on commodity hardware, with the current version using an xSeries server with a single quad-core Xeon 5400, plus all-IBM software. Will it use Nehalem, the 5500? Barry White, a chief inventor at IBM Hursley, said: "We use commodity hardware and tend to pick up the latest xSeries. Intel delivers the chips, System X puts it in a box for us, and the SVC uses the xSeries machine. The 5500 (Nehalem) would be the next logical progression."
The SVC was famously used in the QuickSilver project to deliver a million IOPS from data stored in Fusion-io ioDrive flash memory. Is the SVC getting SSD (solid state drive) support? Steve Legg, IBM UK's chief technology officer, said: "It's a really good thing to put SSD in the virtualisation layer - but it can still go in a drive array to cut latency there."
White said the QuickSilver project showed that monolithic, scale-up storage architectures, such as the DS8000, would always hit a performance limit: "Scale-out architecture, SVC and SSD are being explored. This is the way for next-generation use of SSD to go, instead of using SSD as a replacement hard drive."
Legg added that SSD was quite expensive and it was generally good "to use SSD conservatively".
White said that flash as a storage medium was not ideal, and perhaps something else would eventually emerge as a storage layer between server DRAM and flash.
A scale-out architecture could imply clustering, with linked SVCs co-operating to work collectively rather than as stand-alone nodes. There could then be a failover arrangement to ensure operations continued if an SVC node failed, and load-balancing arrangements as well.
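The failover-plus-load-balancing idea described above can be illustrated with a toy round-robin dispatcher: I/O requests are shared across the nodes in rotation, and a failed node is dropped from the rotation so work continues on the survivors. Purely a sketch under our own assumptions, not how IBM's clustering actually works.

```python
# Toy cluster: requests are load-balanced round-robin across nodes,
# and a failed node is removed so dispatch continues on the rest.
import itertools

class Cluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def fail(self, node):
        # failover: drop the dead node from the rotation
        self.nodes.remove(node)

    def dispatch(self, requests):
        # assign each request to the next healthy node in turn
        rr = itertools.cycle(self.nodes)
        return [(req, next(rr)) for req in requests]

cluster = Cluster(["svc1", "svc2", "svc3"])
cluster.fail("svc2")                        # simulate a node failure
plan = cluster.dispatch(["io1", "io2", "io3", "io4"])
print(plan)  # traffic now shared between svc1 and svc3 only
```

A real design would also have to redistribute in-flight I/O and cached writes from the failed node, which is where the hard engineering lives.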
Neither IBMer was able to comment on the rumoured IBM DB2 Pure Scale device, the Oracle Exadata 2-like system.
Turning to FCoE (Fibre Channel over Ethernet), the story is that IBM has no publicly stated commitment to adopting FCoE or bringing out FCoE product. However, White said of the SVC: "We could fit FCoE cards in it. As the need for FCoE SANs grows things like that are very easy to do in the FCoE world."
Legg said: "We have the luxury of doing it when the time is right. We don't have to do it now."
There are standardisation efforts with FCoE and data centre Ethernet afoot, and Legg added: "We prefer to build things to standard interfaces. Where they don't exist we'll encourage them to exist." If strong de facto standards emerge then IBM might support them, with Cisco's FCoE activities mentioned in this regard.
Our take is that we are looking at an SVC roadmap involving the incorporation of Nehalem processors, with a consequent performance increase; the use of solid state memory as an integrated cache or storage tier; FCoE interfaces when the time is right; and a scale-out architecture in which additional SVCs could be added to an initial node to scale performance and I/O bandwidth. ®