Scale-out SVC on the way from IBM?

Nehalem and FCoE huddle

IBM is exploring the idea of a scale-out architecture for its SAN Volume Controller (SVC), possibly involving flash memory and Nehalem processors. FCoE support may come when needed.

The SVC is a SAN-virtualising controller attached to the Fibre Channel fabric that links application servers and block-access storage arrays. It virtualises the SAN storage into a single pool and presents it to applications. With more than 15,000 units in customer use, the SVC is the most popular SAN virtualisation device; Hitachi Data Systems' USP-V virtualising front-end drive array controller is in second place in the unit-ship stakes.
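For readers unfamiliar with what a block-virtualising controller actually does, here is a minimal, hypothetical Python sketch of the idea: backend arrays contribute capacity as managed disks, the controller pools their extents, and the virtual disks carved from that pool are what the application servers see. The class names, extent size and allocation policy are illustrative assumptions, not IBM's implementation or API.

```python
# Hypothetical sketch of block-level SAN virtualisation: pool backend capacity,
# carve virtual disks out of the pool. Not IBM's code or API.

EXTENT_SIZE = 16 * 1024 * 1024  # 16 MiB extents; an arbitrary illustrative choice


class ManagedDisk:
    """A LUN exported by a backend array and absorbed into the pool."""
    def __init__(self, name, capacity_bytes):
        self.name = name
        self.total_extents = capacity_bytes // EXTENT_SIZE
        self.next_free = 0  # index of the next unallocated extent on this mdisk


class StoragePool:
    """Aggregates managed disks and hands out extents for virtual disks."""
    def __init__(self, mdisks):
        self.mdisks = mdisks

    def allocate(self, n_extents):
        mapping = []
        # Walk the mdisks in order; a real controller would stripe and balance.
        for mdisk in self.mdisks:
            take = min(mdisk.total_extents - mdisk.next_free, n_extents - len(mapping))
            mapping.extend((mdisk.name, mdisk.next_free + i) for i in range(take))
            mdisk.next_free += take
            if len(mapping) == n_extents:
                return mapping
        raise RuntimeError("pool exhausted")


class VirtualDisk:
    """What the application server sees: one disk backed by pooled extents."""
    def __init__(self, name, size_bytes, pool):
        self.name = name
        self.extent_map = pool.allocate(size_bytes // EXTENT_SIZE)


pool = StoragePool([ManagedDisk("array-A-lun0", 16 * 2**30),
                    ManagedDisk("array-B-lun3", 16 * 2**30)])
vdisk = VirtualDisk("app-server-boot", 24 * 2**30, pool)
print(len(vdisk.extent_map), "extents spread across", sorted({m for m, _ in vdisk.extent_map}))
```

A real controller also sits in the I/O path, stripes and rebalances extents, and keeps the mapping metadata highly available, but the pooling principle is the same.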

The SVC is based on commodity hardware, with the current version using an xSeries server with a single quad-core Xeon 5400, and all-IBM software. Will it use Nehalem, the 5500? Barry White, a chief inventor from IBM Hursley, said: "We use commodity hardware and tend to pick up the latest xSeries. Intel delivers the chips, System X puts it in a box for us, and the SVC uses the xSeries machine. The 5500 (Nehalem) would be the next logical progression."

The SVC was famously used in the QuickSilver project to deliver a million IOPS from data stored in Fusion-io ioDrive flash memory. Is the SVC getting SSD (solid state drive) support? Steve Legg, IBM UK's chief technology officer, said: "It's a really good thing to put SSD in the virtualisation layer - but it can still go in a drive array to cut latency there."

White said that the QuickSilver project showed that monolithic, scale-up storage architectures such as the DS8000 would always hit a performance limit: "Scale-out architecture, SVC and SSD are being explored. This is the way for next-generation use of SSD to go, instead of using SSD as a replacement hard drive."

Legg added that SSD was quite expensive and it was generally good "to use SSD conservatively".
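As a rough illustration of what "using SSD conservatively" in a virtualisation layer could mean, the hypothetical sketch below promotes only the hottest extents into a limited flash budget and leaves the long tail on spinning disk. The heat counters, extent names and budget are invented for illustration; this is not how IBM's controller decides placement.

```python
# Hypothetical "conservative SSD" policy: spend a small flash budget on the
# busiest extents only. Heat figures and names are made up for illustration.
import heapq


def plan_promotions(extent_heat, ssd_budget_extents):
    """extent_heat maps extent_id -> recent I/O count; return extents worth moving to flash."""
    # nlargest keeps the small flash tier reserved for the hottest extents.
    return heapq.nlargest(ssd_budget_extents, extent_heat, key=extent_heat.get)


heat = {"ext-17": 9200, "ext-4": 8700, "ext-55": 4100, "ext-92": 310, "ext-3": 12}
print(plan_promotions(heat, ssd_budget_extents=2))  # ['ext-17', 'ext-4']
```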

White said that flash as a storage medium was not ideal and perhaps something else would emerge at some time as a storage layer between server DRAM and flash.

A scale-out architecture could imply clustering, with linked SVCs co-operating to work collectively rather than as stand-alone nodes. There would then be a failover arrangement to ensure operations continued if an SVC node failed, with load-balancing across the nodes as well.
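Here is a hypothetical sketch of that clustering idea, under the assumption that virtual disk ownership is simply hashed across the live nodes: load is spread deterministically, and when a node fails its virtual disks re-map onto the survivors. Node names and the hashing scheme are illustrative only, not IBM's design.

```python
# Hypothetical scale-out cluster: hash vdisk ownership across live nodes for
# load balancing; on node failure, ownership re-hashes onto the survivors.
import hashlib


class ScaleOutCluster:
    """Sketch of co-operating virtualisation nodes."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def owner(self, vdisk_name):
        # Load balancing: deterministically spread vdisks over the live nodes.
        digest = hashlib.sha1(vdisk_name.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

    def fail_node(self, node):
        # Failover: drop the dead node; its vdisks re-hash onto the survivors.
        self.nodes.remove(node)


cluster = ScaleOutCluster(["svc-node-1", "svc-node-2", "svc-node-3"])
print(cluster.owner("app-server-boot"))
cluster.fail_node("svc-node-2")
print(cluster.owner("app-server-boot"))  # ownership may move after the failure
```

Simple modulo hashing reshuffles more virtual disks than necessary when membership changes; a production design would more likely use consistent hashing or an explicit ownership map, and would replicate cached writes so that failover preserves in-flight I/O.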

Neither IBMer was able to comment on the rumoured IBM DB2 Pure Scale device, the Oracle Exadata 2-like system.

Turning to FCoE (Fibre Channel over Ethernet), the story is that IBM has no publicly stated commitment to adopting FCoE or bringing out FCoE product. However, White said of the SVC: "We could fit FCoE cards in it. As the need for FCoE SANs grows things like that are very easy to do in the FCoE world."

Legg said: "We have the luxury of doing it when the time is right. We don't have to do it now."

There are standardisation efforts with FCoE and data centre Ethernet afoot, and Legg added: "We prefer to build things to standard interfaces. Where they don't exist we'll encourage them to exist." If strong de facto standards emerge then IBM might support them, with Cisco's FCoE activities mentioned in this regard.

Our take is that what we are looking at here is an SVC roadmap: Nehalem processors bringing a consequent performance increase, solid state memory used as an integrated cache or storage tier, FCoE interfaces when the time is right, and a scale-out architecture in which additional SVCs can be added to an initial node to scale performance and I/O bandwidth. ®
