QLogic drops veil on new flashy adapter technology
HBA flash caching
QLogic is adding flash caching capability to its storage network adapter cards in a project called Mount Rainier.
The company makes host bus adapters (HBAs) to connect servers to Fibre Channel SANs, and converged network adapters (CNAs) to link servers to iSCSI- and FCoE-accessed SANs as well as to Ethernet networks. QLogic also makes Fibre Channel fabric switches and storage routers, and the routers' data-moving technology will be used in Mount Rainier products when they ship.
PCIe flash cards are being used to cache frequently accessed data in servers, avoiding both storage array disk and network latency, so applications run faster and servers can support more virtual machines. QLogic's adapter cards connect to a server's PCIe bus, as do the PCIe flash cache cards from suppliers such as Fusion-io and EMC with its VFCache product. QLogic isn't putting flash storage directly onto its HBA products, though, using a separate card approach instead.
There will be three options:
- HBA with a separate flash card and PCIe link, using an x4 cable and drawing 25W
- HBA with a SAS I/O port daughter card that links to an SSD in the server chassis
- HBA with an integrated SSD daughter card, drawing all of its 50W from the PCIe slot.
An aspect of its appeal to OEMs is that the existing PCIe flash card approach needs a PCIe flash card driver, plus a driver for the adapter connecting the server to a storage array, plus a filter driver for each virtual machine guest operating system if the server is virtualised. With QLogic's approach, the OEMs can use one driver instead, simplifying things.
QLogic's caches will also be shared among servers in a distributed caching scheme, implemented in an intelligent ASIC that uses the firm's storage router technology. The products will be able to run in both initiator and target mode, which means each Mount Rainier card can use the SAN fabric infrastructure and see what the other cards in its SAN zone are doing. Each card will know which LUNs have been cached, with the information shared using Fibre Channel protocols.
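The coordination described here — each card advertising which LUNs it holds so its peers can route reads accordingly — can be sketched roughly as below. This is an illustrative model only; the class and method names are assumptions, not QLogic's API, and the real exchange happens over Fibre Channel in the ASIC, not in host software:

```python
class CacheNode:
    """Toy model of a Mount Rainier-style card sharing its cache map with peers."""

    def __init__(self, name):
        self.name = name
        self.cached_luns = set()   # LUNs this card caches locally
        self.peer_maps = {}        # peer name -> set of LUNs that peer caches

    def cache_lun(self, lun, peers):
        # Cache the LUN locally, then advertise the updated map to every
        # peer card in the same SAN zone (hypothetical broadcast step)
        self.cached_luns.add(lun)
        for peer in peers:
            peer.peer_maps[self.name] = set(self.cached_luns)

    def locate(self, lun):
        # Prefer the local cache, then a peer that holds it, else go to the SAN
        if lun in self.cached_luns:
            return "local"
        for peer_name, luns in self.peer_maps.items():
            if lun in luns:
                return peer_name
        return "SAN"


a, b = CacheNode("hba-a"), CacheNode("hba-b")
a.cache_lun("lun-7", peers=[b])
print(b.locate("lun-7"))   # card B knows card A already caches lun-7
```

The point of the shared map is the last call: a card that misses locally can satisfy the read from a peer's flash rather than crossing the network to the array.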
QLogic's EMEA marketing head Henrik Hansen said that the application effectively sees only the HBA driver.
Unlike Fusion-io's approach, the host server's CPU is not needed to manage and operate the flash card. There is cache mirroring for data protection, and both write-back and write-through policies will be supported. Cached data can be auto-synced with the SAN, and it will be possible to pool the cache between two servers. Hansen said a vMotion event simply means a redirection of cache access, as the application thinks it is talking to a SAN.
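The write-back/write-through distinction is about when a write reaches the array. A minimal sketch of the two policies follows — the function names and the dirty-set flush mechanism are illustrative assumptions, not QLogic's implementation:

```python
def cached_write(block, data, cache, san, dirty, write_back=False):
    """One write through the flash cache under either policy (illustrative)."""
    cache[block] = data        # the flash cache always absorbs the write
    if write_back:
        dirty.add(block)       # write-back: the SAN copy is deferred
    else:
        san[block] = data      # write-through: the array is updated synchronously


def flush(cache, san, dirty):
    """Sync dirty blocks back to the array - the auto-sync behaviour mentioned."""
    while dirty:
        block = dirty.pop()
        san[block] = cache[block]


cache, san, dirty = {}, {}, set()
cached_write(7, b"hot data", cache, san, dirty, write_back=True)
# at this point the write exists only in flash; a later flush syncs the SAN
flush(cache, san, dirty)
```

Write-back gives lower write latency but leaves a window where the array is stale, which is why the cache mirroring mentioned above matters for data protection.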
Hansen said the I/O side of QLogic's flash-equipped HBAs is 8Gbit/s Fibre Channel, but that might change in the future, with Ethernet iSCSI and FCoE options possible. There will be vCentre plug-ins in the medium term, with the v1.0 product having a CLI. OEMs could also use their own management platforms to manage the Mount Rainier products via a QLogic API.
NetApp is supporting the QLogic flash-equipped HBAs with its Flash Accel caching software. This is a great first win for QLogic and bodes well for its approaches to other HBA OEMs.
We can expect the first Mount Rainier products to ship in the first half of 2013 with both clustered and non-clustered products. ®