It hopes that Matrix will provide that push. The company is talking to various non-IT stack-aligned vendors about participating in Matrix. McDonald isn't identifying any, but we might imagine Ocarina (dedupe) and Caringo (CAS) are the kinds of vendors he has in mind. The Matrix controller will sit outside the Matrix storage resource containers and provision/de-provision them as needed by the storage-controlling app in the servers.
It doesn't exist yet and will need software and interfaces. We might hear formally about Matrix, and see a v1.0 release of some sort, by the mid-year point. The Cortex API, used by storage system resource functions to talk directly to ISE as a control path, with Matrix being a data path, will have a v1.0 announcement this quarter.
Matrix is a very channel-friendly idea. "Isn't it!" says McDonald. On the OEM channel front, Xiotech is still immensely keen to recruit OEMs to take its ISE boxes. McDonald says it's energetically talking to prospective OEMs, again not identifying any.
The ISE boxes themselves have their own roadmap but flash doesn't figure on it, yet. There is still no justification, in McDonald's view, for adding flash. It's just too expensive and has hideous read:write asymmetry as well as endurance problems. When it gets cheap enough and these issues can be overcome, then we might see a flash ISE.
What's likely to come first is an ISE front-end interface change. Currently an ISE box has two 4Gbit/s Fibre Channel ports. Ethernet is very likely to be added in its 10Gbit form, but Xiotech is undecided about whether to offer iSCSI or FCoE layered on top of it.
The drives inside ISE may become 6Gbit/s SAS ones and the drive-ISE controller fabric may change from 4Gbit/s FC to a SAS one as well.
The message is that ISE boxes can be aggregated together in an ISEberg or JBOI (just a bunch of ISEs) and controlled by mid-tier storage resource applications like dedupe, CAS, replication, whatever, and/or by direct storage-controlling server apps like VMware, Exchange 2010 and Oracle. Bypass expensive and complex storage arrays, with fat controllers and high-maintenance disk drives that lose performance as they fill, by using collections of superdisks, ISE boxes, under the direct control of server apps. That's the Xiotech message in a nutshell.
Will it fly? Will ISE boxes invade more data centres? It's the big question and the Xiotech team is using its $10m funding to make sure that data centre owners at least understand the ISE and Matrix eco-system message and ask themselves: "Could we? Should we?" ®
My name is Rob Peglar, and I am VP, Technology for Xiotech. I wanted to respond to several points in the well-written commentary above.
1) The ISE is indeed a block device, but we have integrated it at the filesystem level (e.g. NTFS, ext3, etc.) by using Web Services and/or RESTful techniques. So, we can perform things like filesystem expansion that other arrays cannot. We have also provided automatic VMFS/VMDK construction from a single screen, from SAN layer to ESX layer to VM layer, for nearly two years; filesystem-aware.
2) These techniques also apply to thin and dynamic provisioning methods. So, it is technically incorrect to say that 'thin' (i.e. legacy over-commit allocate-on-write) is required 'on the array'. What is required is for the user to select provisioning style and for the system at large - filesystem, OS/hypervisor, and array - to understand it. Xiotech has had its implementation of thin, marketed as Intelligent Provisioning, for 18 months now.
3) We have automatic space reclamation in the form of the ability to shrink a LUN and then use Web Services/REST to shrink the filesystem as well, on the fly. This technique currently works for NTFS, since Windows is the first OS/HV to implement on-the-fly filesystem shrink.
4) You don't need 'another layer' for file-aware operations; you need to communicate with the OS/HV, and we do that via Web Services/REST. It's simple and effective and doesn't require extra software shims inside the host.
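The out-of-band control-path idea Peglar describes can be sketched in a few lines. Xiotech's actual Web Services/REST API isn't shown here, so the endpoint shape, field names, and function below are all invented for illustration of the pattern: the management layer sends one request asking the array to grow a LUN and have the OS/hypervisor grow the filesystem on top of it, with no shim installed in the host.

```python
import json

def build_expand_request(lun_id, new_size_gb, grow_filesystem=True):
    """Build a JSON body for a hypothetical LUN-expansion REST call.

    The array-side service would grow the LUN, then talk to the OS/HV
    (e.g. NTFS, ext3) to expand the filesystem -- no host-side agent.
    All names here are illustrative, not Xiotech's real API.
    """
    return json.dumps({
        "lun": lun_id,                     # which LUN to grow
        "sizeGB": new_size_gb,             # new total size
        "growFilesystem": grow_filesystem, # also expand the FS on top
    })

# A caller would POST this body to the array's Web Services endpoint.
body = build_expand_request("lun-42", 500)
```

The point of the sketch is that filesystem awareness lives in the control path (one REST round trip), not in extra software layered into the data path.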
5) We have the 'PowerNap' method for the ISE, which puts the entire ISE to sleep (and wakes it up, similar to wake-on-LAN), thus mitigating the problem of individual disk control. This technique is very useful after backup and/or archive jobs run. It's not an effective technique for transactional workloads - but then again, as the commenter rightly says, neither is individual spin-down.
I read through this a few times but still can't tell what they're talking about with this Matrix thing.
As for thin provisioning, you need native support for it in the array *if* your array is going to be a shared resource, and the way things are going with consolidation there will be more and more arrays supporting more than one app.
Application-level thin provisioning doesn't save much on the array side, because you still have to provision a thick LUN to the app. So if you export a 1TB thick LUN and start using VMware-based TP, you can get TP at the application level (e.g. create 20 VMs of 100GB each and maybe only use 20x5GB instead of 20x100GB). But that 1TB is still hard-allocated on the array if the array itself is not using TP.
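The commenter's arithmetic can be made concrete with a tiny calculation. The function and its parameters are illustrative, not any vendor's API; it just contrasts what the array must reserve with and without its own thin provisioning.

```python
def array_allocation(thick_lun_gb, vm_count, vm_used_gb, array_tp):
    """Return GB hard-allocated on the array.

    With application-level TP only, the thick LUN stays fully reserved
    on the array no matter what the VMs actually write; with array-level
    TP the array backs only the written blocks. Illustrative model only.
    """
    if array_tp:
        return vm_count * vm_used_gb
    return thick_lun_gb

# The commenter's numbers: a 1TB (1000GB) thick LUN, 20 VMs of 100GB
# each, only 5GB actually written per VM.
without = array_allocation(1000, 20, 5, array_tp=False)  # 1000 GB reserved
with_tp = array_allocation(1000, 20, 5, array_tp=True)   # 100 GB reserved
```

Same guest-level savings either way; the array only benefits when it does TP itself.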
You could start out with smaller volumes on the array side and grow them dynamically, then somehow inform the app that the volume is bigger and it can format that extra space. Seems like a lot of extra work that can be avoided just by using TP on the array to begin with.
Also consider next gen thin provisioning which involves automatic space reclamation on the array, though transparent application support for that stuff is still tiny at this point.
As for Ocarina, last I checked they were a file-based solution. Xiotech is a block-based solution, so you'd need another layer in there for the file-based stuff, which to me means there's not a lot of point in trying to directly integrate Ocarina with Xiotech.
Xiotech also does some pretty neat wide striping, and of course wide striping is the arch-enemy of spin-down: want to read that 20GB of data? I need to spin up 30 disks to do it.
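The spin-up cost of wide striping is easy to see with a back-of-envelope model. The stripe geometry below (1MiB stripe unit, 30-disk stripe width) is assumed for illustration and is not Xiotech's actual ISE layout.

```python
def disks_touched(read_bytes, stripe_unit_bytes, stripe_width):
    """How many member disks a sequential read touches in a wide stripe.

    Data is laid out round-robin in stripe_unit_bytes chunks across
    stripe_width disks, so any read spanning more chunks than there are
    disks hits every disk. Illustrative geometry, not a real array's.
    """
    stripe_units = -(-read_bytes // stripe_unit_bytes)  # ceiling division
    return min(stripe_units, stripe_width)

MiB = 1 << 20
GiB = 1 << 30

# A 20GiB read from a 30-wide stripe with 1MiB units spans ~20,480
# stripe units -- far more than 30 -- so all 30 disks must spin up.
wide_read = disks_touched(20 * GiB, 1 * MiB, 30)   # 30 disks
small_read = disks_touched(512 * 1024, 1 * MiB, 30) # 1 disk
```

Only reads smaller than one stripe unit stay on a single disk, which is why per-disk spin-down and wide striping work against each other.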