Adaptec adds DRAM cache to entry-level RAID
It's like performance-enhancing drugs. Kind of
DRAM-caching boosts entry-level Adaptec RAID controller performance past software RAID and cache-less host bus adapters.
Adaptec, the adapter company that vanished down the Steel Partners plughole, exists as a RAID controller operation and brand inside PMC-Sierra. And now it has announced the "Adaptec by PMC family of Series 6E RAID Controllers", which are entry-level controllers given a performance-enhancing drug – DRAM cache.
Adaptec by PMC Series 6E RAID controllers
Jared Peters, the general manager and VP of PMC-Sierra's channel storage division, said: "The Series 6E outperforms software-based HBAs and SATA controllers and is the first true hardware 6Gb/s SATA/SAS RAID controller with on-board DRAM cache for the entry-level market segment with this capability."
Both are built around a 6Gbit/s SAS RoC (RAID-on-Chip); the 6405E has four ports, while the 6805E has eight.
The two cards have an LP/MD2 form factor and PCIe host interface, x1 for the 6405E and x4 for the 6805E – giving it higher throughput. They both have 128MB of DDR2-800 RAM and are compatible with other Adaptec Series 6 RAID controllers. These cards should fit right into customers' existing Adaptec RAID card deployments.
They are available from Adaptec distributors and resellers now, with suggested retail pricing ranging from $200 to $275. ®
OK, I'll qualify that
I'll accept that enterprise-grade hardware RAID may be OK, if you are in that market. This article clearly isn't referring to that market. I wasn't either.
On a modern CPU and motherboard, the overhead of XORing two buffers is negligible. The SATA ports are independent and move data to RAM by DMA, and memory bandwidth is adequate for RAID-5 operation, even during a rebuild. I've benchmarked it: the hardware RAID was slower than the same controller in JBOD mode running software RAID. On flat-out all-write activity (the worst case) I get pretty much the same performance from software RAID-5 as I get from a single disk, and I got *better* performance from a 3Ware controller in JBOD mode with software RAID than from hardware RAID-5 on the same controller.
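The parity maths being dismissed as cheap here really is just byte-wise XOR. A minimal Python sketch (illustration only – not md or Adaptec code, and the function name is mine) shows both the write-path parity calculation and the rebuild of a lost stripe:

```python
# RAID-5 parity illustration: parity is the byte-wise XOR of the data
# stripes, and any single missing stripe is recovered by XORing the
# surviving stripes with the parity.

def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-length blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data stripes of a hypothetical 4-disk RAID-5 set.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(d0, d1, d2)          # written to the parity disk

# Simulate losing disk 1: rebuild its stripe from the survivors + parity.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
```

A real implementation works on whole stripe-sized buffers with SIMD, which is why the per-write CPU cost is lost in the noise next to disk latency.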
But even if there were a major efficiency penalty, which there isn't, I'd still take the software RAID. Ask yourself: in five years' time, when your RAID controller croaks, are you certain you'll be able to get a replacement controller with complete on-disk-format compatibility? Are you absolutely sure that if someone accidentally swaps two disk data cables, the controller won't trash your data? Are you absolutely certain that after the controller has been swapped, the next rebuild operation will work the way it should? Are you sure you'll be OK even if the company that made your RAID controller has gone bust, or has been taken over by a venture capitalist who has sold it on to the highest bidder? And so on. Any answers you get will be supplied by salesmen. Of course it'll be OK. (What was the question? Something technical I don't understand.)
At the very best you are locked in to one controller vendor, with the only alternative being many hours of downtime while you copy several terabytes from an array on the old controller to an array on the new one, quite probably across a network because you can't plug both arrays into the same system.
I know that with Linux RAID I can shuffle the disks and it won't matter. I know I can take the disks out of one system and plug them into a completely different system, and have the same array up again minutes later. I know that I can replace 250GB disks with 1TB disks one at a time, and then resize the array to four times bigger. I know I can reshape a 3-disk array into a 5-disk array. I've done all these things. And I know it'll carry on working, effectively, forever.
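For anyone who hasn't done it, the operations described above map onto mdadm roughly like this – a hedged sketch, assuming a Linux md array at /dev/md0 and illustrative device names; read mdadm(8) and test on scratch disks before trying it on data you care about:

```shell
# Swap in a bigger disk (repeat for each array member in turn,
# letting the rebuild finish between swaps):
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdd1      # the new, larger disk

# Once every member has been replaced, claim the new capacity:
mdadm --grow /dev/md0 --size=max

# Reshape a 3-disk RAID-5 into a 5-disk one:
mdadm /dev/md0 --add /dev/sde1 /dev/sdf1
mdadm --grow /dev/md0 --raid-devices=5

# Bring the same array up on a completely different machine:
mdadm --assemble --scan
```

The on-disk metadata (the md superblock) is documented and stable, which is exactly why the disks survive being shuffled between controllers and systems.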
There is one critically important thing: make sure your RAID system is connected to a UPS, and that the UPS is correctly configured so that it *will* perform a clean system shutdown when its batteries get low. A battery-backed RAID controller, I hear? Well, let me tell you about such a system, where the motherboard failed, and two disks got swapped when the thing was reassembled in another box, working against the clock of a discharging battery. What my source thinks happened is that it flushed its RAM cache onto the disks first, only then noticed that the disks were swapped, and then quietly reconfigured its array, so the filesystem corruption had plenty of time to spread. But of course, it's all secret-source firmware running on secret hardware, so the only certainty is that it barfed, and the data got scrambled.
oooooooo!! 128mb of cache
Well, that's really going to take the heat off a 256GB SSD, isn't it??
Adaptec Technology ...
I wonder if they have improved any ....