Freak lightning strike sends app, storage servers back in time
EMC hits 88MPH, returns to mainframe era
Server vendors, prepare for an attack. Lightning strikes are coming - and they're welding app and storage boxes together in a way that reminds El Reg of the mainframe era.
EMC's array-controlled server flash initiative, Project Lightning, is getting ready. There may be an announcement before Christmas, but it won't be a happy Christmas for the server vendors - not if what El Reg has put together from hints, nods and whispers is true.
Consider what will happen if server apps can get data from storage arrays in nanoseconds instead of milliseconds: instead of data being fetched from a VNX or VMAX, or even Isilon, array across a SAN fabric with slow delivery due to disk latency and fabric transit time, it is available pretty damn well instantly, directly across the server's PCIe bus from flash memory.
Because, in some secret sauce fashion, the array knows what data the application is going to need and pre-loads the flash using FAST-VP. Data that's written goes into flash and the app can carry on working while a background process copies it to disk back in the array. To be more certain it's safe to do this, Lightning flash cards can be dualled and mirrored.
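That write path - acknowledge the write from flash immediately, copy to disk in the background - is a classic write-back cache. A toy sketch of the idea (the class and method names are illustrative, not EMC's; real array firmware obviously does far more):

```python
import queue
import threading

class WriteBackFlashCache:
    """Toy model: writes land in flash and are acknowledged at once;
    a background thread later copies ("destages") them to the array's disks."""

    def __init__(self):
        self.flash = {}                 # stands in for the PCIe flash card
        self.disk = {}                  # stands in for the back-end array
        self._pending = queue.Queue()   # blocks awaiting destage to disk
        self._worker = threading.Thread(target=self._destage, daemon=True)
        self._worker.start()

    def write(self, block_id, data):
        # The app sees flash latency only; the disk copy happens later.
        self.flash[block_id] = data
        self._pending.put(block_id)
        return "ack"

    def _destage(self):
        # Background process: drain pending writes to the array's disks.
        while True:
            block_id = self._pending.get()
            self.disk[block_id] = self.flash[block_id]
            self._pending.task_done()

    def flush(self):
        # Wait until every pending block has been destaged.
        self._pending.join()

cache = WriteBackFlashCache()
cache.write("lba-42", b"payload")   # returns immediately with "ack"
cache.flush()
assert cache.disk["lba-42"] == b"payload"
```

The dualled-and-mirrored cards mentioned above are what make this acknowledged-before-destaged window safe: a single card failure mustn't lose the not-yet-copied data.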
Faster apps need fewer servers
The net of this is that an application's run time could be halved, even quartered; it depends how I/O-bound it is. Customers could then say to themselves: "Okay, we have saved 50 per cent - for argument's sake - of our server application suite's run time. What shall we do with this recovered server resource?
"We could double the virtual machine density of our servers, and cut their number in half, or we could use servers with half the processing power; two-socket ones instead of four-socket ones."
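The customer's back-of-the-envelope reasoning is just Amdahl's law applied to I/O wait. A quick sketch with illustrative numbers (the 70 per cent I/O fraction and 100x speedup are assumptions for the example, not EMC figures):

```python
import math

def consolidated_servers(servers, io_fraction, speedup):
    """If io_fraction of an app's run time is I/O wait, and that wait
    shrinks by `speedup`x, Amdahl's law gives the new run time per job.
    Same workload in less time means proportionally fewer servers."""
    new_runtime = (1 - io_fraction) + io_fraction / speedup
    return servers * new_runtime

# A heavily I/O-bound suite (70% of run time waiting on the SAN),
# with storage latency cut 100x by local flash:
needed = consolidated_servers(100, 0.70, 100)
print(math.ceil(needed))  # ~31 servers instead of 100
```

The same arithmetic shows why the gain "depends how I/O-bound it is": at a 20 per cent I/O fraction, the same 100x latency cut only shaves the fleet to about 81 servers.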
Either way the server suppliers will take a hit and so will software suppliers who license by processor core; fewer processors and cores will be needed. EMC isn't a server vendor and its revenues won't be affected. The main server vendors - Dell, HP and IBM - don't have technology that can do what Project Lightning does because their arrays can't manage the server flash as Project Lightning does.
The end-game here is to weaken the server vendors. Then, as storage and servers become progressively more co-located and the SAN turns into a server-area network (or DAN, in Fujitsu's terms) instead of a storage area network, we will see VMAX arrays with extra engines to run app software. That could mean 4, 8, 16, 32, 64, even 128 extra engines - who knows how many - with a speeded-up Virtual Matrix taking care of the server-storage inter-linking and - is this a master stroke or what? - the app engines equipped with Lightning flash cards, everything running in an ESX-managed environment.
Is this real or just a fevered tech-obsessed hack's fantasy?
You better believe it
EMC is pushing the converged IT stack game and has been open about its arrays running app software and about the Lightning server flash effort. Why else is it doing this? This is not a company that commits suicide; seppuku is not in its game plan.
Yet the direction of VMware's storage function development is towards commoditised networked storage arrays, EMC's included, and we've been wondering if EMC will cripple this part of VMware or somehow evade the VMware trap that threatens every networked storage array vendor.
This is how: EMC's strategy people are saying, in effect, let standalone networked storage array vendors (the block-heads and the filer guys) fall into the VMware trap, because we will circle around it and use VMware to transition apps from running in standalone servers into server-storage powerhouses - or mainframes, as people used to call them - and run apps faster than anyone else: Oracle, IBM, whoever. We'll evade the VMware trap lying in wait for storage array vendors, EMC's chiefs say, and instead use it to attack the server vendors.
It's a breathtaking idea and will, if it succeeds, propel EMC into the big-time, an equal in revenue terms to the server vendors. How about that? ®
I have a shed full of punch card readers and 132 column printers if anyone needs them.
Far from dying.
I'm certainly seeing more of the PCIe-based cards as well, but as most customers who take this approach are discovering, they're not shared: they're constrained to the host(s) the card is installed in, and of course they offer limited or no protection.
Whereas, from a SAN perspective, there may be inherently higher latency than with PCIe cards accessed directly - the SAN has to account for the latency introduced by going from bus > HBA > fibre > switch > fibre > array > disks and back again - but the SAN-based solution offers protection against a single point of failure. These cards (Fusion-io/VeloDrive etc) do not.
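Rough numbers make that hop-by-hop path concrete. The figures below are order-of-magnitude illustrations only, not measurements - the point is that the spinning disk dominates the SAN budget while the PCIe path skips almost all of it:

```python
# Illustrative per-read latency budget, in microseconds (assumed values).
SAN_HOPS = {
    "bus_to_hba": 5,
    "fibre_out": 5,
    "switch": 10,
    "fibre_in": 5,
    "array_controller": 100,
    "disk_seek_and_read": 5000,   # the spinning disk dominates everything
}
PCIE_FLASH = {
    "pcie_bus": 1,
    "flash_read": 50,
}

san_total = sum(SAN_HOPS.values())      # one-way; the return trip adds more
pcie_total = sum(PCIE_FLASH.values())
print(f"SAN ~{san_total} us, PCIe flash ~{pcie_total} us, "
      f"ratio ~{san_total / pcie_total:.0f}x")
```

Swap the disk line for flash inside the array and the fabric hops become the bottleneck instead - which is exactly the gap host-side flash closes.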
And as an added benefit, SAN arrays share the available capacity and performance, whereas the cards do not. For example, the smallest Fusion-io drive is 160GB; if you only need 40GB, you waste 120GB, whereas in a SAN environment you could re-use that 120GB elsewhere, on another application that could do with it.
And, again, get the protection of not having a single point of failure.
What Project Lightning is doing is introducing a hybrid of the best of both worlds:
- Caching frequently used data closer to the application using it, via the PCIe bus, with lower latency; and
- Providing all of the goodness and protection that a SAN array provides (ie, no single point of failure).
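The read side of that hybrid is essentially a local LRU cache in front of the array. A minimal sketch, assuming a simple LRU policy (the class name and interface are hypothetical; Lightning's actual caching policy isn't public):

```python
from collections import OrderedDict

class HybridReadCache:
    """Toy read path: hot blocks are served from a small local
    (PCIe-flash-like) LRU cache; misses fall through to the SAN array,
    which remains the protected, authoritative copy of the data."""

    def __init__(self, array_read, capacity=4):
        self.array_read = array_read    # callable: fetch a block from the SAN
        self.capacity = capacity
        self.cache = OrderedDict()      # insertion order tracks recency

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # hit: cheap, local
            return self.cache[block_id]
        data = self.array_read(block_id)       # miss: cross the fabric
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least-recently-used
        return data

# The SAN stands in as a dict; if the local card dies, nothing is lost.
san = {n: f"block-{n}" for n in range(10)}
reader = HybridReadCache(san.__getitem__)
assert reader.read(3) == "block-3"   # first read crosses to the array
assert 3 in reader.cache             # repeat reads are served locally
```

Because the cache holds only copies, losing the card costs performance, not data - which is the "no single point of failure" half of the bargain.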
Doesn't really matter what percentage EMC has installed...
(which is a lot more than anybody else, really)
What really counts here is that a converged storage-and-compute offering gives customers a consolidated environment, opening many more potential customers up to EMC (or any other vendor who takes the same approach).
- If EMC were to take the converged storage/server capabilities into the VNXe line, customers would be given the opportunity to run virtualised servers within virtualised storage, all pre-qualified with all or most of the bits they would ever need, in a low-cost package.
That would cover the SMB market very well, as the VNXe has a very low cost of establishment.
- Put it in the VNX line and you'd have the SMB-to-mid-market and the upper mid-market squared away.
- With the VMAX line-up, the upper mid-market (VMAXe) through to the big end of town (VMAX) would be covered.
Whichever market segment customers choose, having all of the bases covered increases footprint, and this can only be good for EMC and their customers.
The biggest advantage to customers is, of course, having it built as a pre-qualified, pre-tested, high-grade combination of storage and servers doing exactly what they need; because it's all pre-qualified and tested, customers wouldn't have to worry about what will and won't work.
As for Project Lightning, I'd suggest it's more of an extension of FAST Cache than FAST VP, which would make sense as it would behave as a local cache rather than as a component of a tier - similar to PAM-II/Flash Cache, but at the host side.