Scale-out SVC on the way from IBM?

Nehalem and FCoE huddle

IBM is exploring a scale-out architecture for its SAN Volume Controller (SVC), possibly involving flash memory and Nehalem processors. FCoE support may come when needed.

The SVC is a SAN-virtualising controller attached to the Fibre Channel fabric linking application servers and block-access storage arrays. It virtualises the SAN storage into a single pool and presents it to applications. The SVC is the most popular SAN virtualisation device, with over 15,000 units in use by customers; Hitachi Data Systems' USP-V virtualising front-end drive array controller is in second place in the unit-ship stakes.
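The pooling idea can be sketched in a few lines. The toy below flattens extents from several backend arrays into one free list and maps virtual volumes onto them; the class names, extent size, and array names are all hypothetical illustrations, not IBM's implementation.

```python
EXTENT_SIZE_MB = 16  # hypothetical extent granularity

class VirtualisedPool:
    """Pools extents from several backend arrays behind one namespace."""

    def __init__(self, backends):
        # Flatten every backend's extents into a single free list.
        self.free = [(name, ext) for name, count in backends.items()
                     for ext in range(count)]
        self.volumes = {}  # volume name -> list of (backend, extent)

    def create_volume(self, name, size_mb):
        needed = -(-size_mb // EXTENT_SIZE_MB)  # ceiling division
        if needed > len(self.free):
            raise RuntimeError("pool exhausted")
        self.volumes[name] = [self.free.pop() for _ in range(needed)]

    def locate(self, name, offset_mb):
        """Translate a virtual offset to (backend, extent) for I/O routing."""
        return self.volumes[name][offset_mb // EXTENT_SIZE_MB]

pool = VirtualisedPool({"array_a": 4, "array_b": 4})
pool.create_volume("vdisk0", 48)  # may span extents from both arrays
```

The point of the indirection is that applications see only `vdisk0`; which physical array serves a given offset is the virtualisation layer's business.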

The SVC is based on commodity hardware, with the current version using an xSeries server with a single quad-core Xeon 5400, and all-IBM software. Will it use Nehalem, the 5500? Barry White, a chief inventor from IBM Hursley, said: "We use commodity hardware and tend to pick up the latest xSeries. Intel delivers the chips, System X puts it in a box for us, and the SVC uses the xSeries machine. The 5500 (Nehalem) would be the next logical progression."

The SVC was famously used in the QuickSilver project to deliver a million IOPS from data stored in Fusion-io ioDrive flash memory. Is the SVC getting SSD (solid state drive) support? Steve Legg, IBM UK's chief technology officer, said: "It's a really good thing to put SSD in the virtualisation layer - but it can still go in a drive array to cut latency there."

White said the QuickSilver project showed that monolithic, scale-up storage architectures, such as the DS8000, will always hit a performance limit: "Scale-out architecture, SVC and SSD are being explored. This is the way for next-generation use of SSD to go, instead of using SSD as a replacement hard drive."

Legg added that SSD was quite expensive and it was generally good "to use SSD conservatively".

White said that flash as a storage medium was not ideal and perhaps something else would emerge at some time as a storage layer between server DRAM and flash.

A scale-out architecture could imply clustering with linked SVCs co-operating in some way to work collectively rather than as stand-alone nodes. There could then be a failover arrangement to ensure operations continued if an SVC node failed, and load-balancing arrangements as well.
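The clustering pattern described above can be sketched as follows. This is the generic scale-out idea, not IBM's design: I/O is load-balanced round-robin across co-operating nodes, and when a node fails its traffic simply flows to the survivors. All names are invented for illustration.

```python
class ScaleOutCluster:
    """Toy model of co-operating nodes with load balancing and failover."""

    def __init__(self, nodes):
        self.nodes = list(nodes)  # surviving node names
        self._next = 0

    def route(self):
        """Round-robin load balancing across surviving nodes."""
        if not self.nodes:
            raise RuntimeError("no nodes available")
        node = self.nodes[self._next % len(self.nodes)]
        self._next += 1
        return node

    def fail(self, node):
        """Failover: drop a dead node so requests go to the rest."""
        if node in self.nodes:
            self.nodes.remove(node)

cluster = ScaleOutCluster(["svc1", "svc2", "svc3"])
first = cluster.route()  # "svc1"
cluster.fail("svc2")     # remaining nodes absorb the load
```

A real cluster would also have to redistribute cache state and in-flight writes on failover, which is where the engineering effort lies.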

Neither IBMer was able to comment on the rumoured IBM DB2 Pure Scale device, a system likened to Oracle's Exadata 2.

Turning to FCoE (Fibre Channel over Ethernet), the story is that IBM has no publicly stated commitment to adopting FCoE or bringing out FCoE product. However, White said of the SVC: "We could fit FCoE cards in it. As the need for FCoE SANs grows things like that are very easy to do in the FCoE world."

Legg said: "We have the luxury of doing it when the time is right. We don't have to do it now."

Standardisation efforts for FCoE and data centre Ethernet are afoot, and Legg added: "We prefer to build things to standard interfaces. Where they don't exist we'll encourage them to exist." If strong de facto standards emerge then IBM might support them, with Cisco's FCoE activities mentioned in this regard.

Our take: what we are looking at here is an SVC roadmap involving Nehalem processors, with a consequent performance increase; solid-state memory as an integrated cache or storage tier; FCoE interfaces when the time is right; and a scale-out architecture in which additional SVCs can be added to an initial node to scale performance and I/O bandwidth. ®
