Compellent adds file-level access to SAN
Unified storage here we come
Compellent is adding integrated file-level access to its SAN product, and using Sun's open source ZFS to do so.
The zNAS product is a 1U enclosure running the file access software on diskless, dual quad-core Nehalem hardware, which can be clustered in two nodes for high availability. It has 1Gbit Ethernet client access, with 10GbitE coming, and 8Gbit/sec Fibre Channel access to the backend storage. There are 24GB and 48GB memory options.
The file access is conceptually layered onto Compellent's storage array so that it benefits from all of that product's features: a single virtual pool of storage, thin provisioning, solid state drives, multiple types of hard drive, automatic data progression for moving data blocks between tiers, and so on.
The Compellent SAN thus becomes a network-attached storage (NAS) product, offering NFS and CIFS access, with a single management facility for both the file world and the block, storage area network (SAN) world.
This is, in effect, a significant update of the existing NAS head facility which is a 1U Xeon-powered box running Microsoft's Windows Storage Server. The box has been given an extra slug of processing power and WSS replaced with ZFS.
ZFS, the Zettabyte File System, is a 128-bit file system which is outrageously scalable and has checksum technology to verify data integrity. Compellent has chosen Nexenta as its ZFS development partner because of its "deep engineering-level expertise", and says the product is fully backed and supported by Compellent.
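The data-integrity point deserves a moment's unpacking. ZFS checksums every block and, crucially, stores the checksum apart from the data it covers (in the parent block pointer), so silent corruption of a block cannot also corrupt its own checksum. A minimal sketch of that idea, with a toy in-memory store standing in for the on-disk layout:

```python
import hashlib

# Minimal sketch of ZFS-style end-to-end checksumming: the checksum of
# each data block is stored apart from the block itself (in ZFS it lives
# in the parent block pointer), so corruption in a block cannot also
# corrupt its own checksum. The store below is a toy stand-in, not the
# real on-disk format.

def checksum(block: bytes) -> str:
    # ZFS offers several algorithms (fletcher4 by default, SHA-256
    # optionally); SHA-256 is used here for simplicity.
    return hashlib.sha256(block).hexdigest()

class ChecksummedStore:
    def __init__(self):
        self.blocks = {}     # block id -> data ("on disk")
        self.checksums = {}  # block id -> checksum, kept separately

    def write(self, block_id: int, data: bytes) -> None:
        self.blocks[block_id] = data
        self.checksums[block_id] = checksum(data)

    def read(self, block_id: int) -> bytes:
        data = self.blocks[block_id]
        if checksum(data) != self.checksums[block_id]:
            # A mirrored ZFS pool would fetch another copy here and
            # repair the bad block ("self-healing"); the toy just fails.
            raise IOError(f"checksum mismatch on block {block_id}")
        return data

store = ChecksummedStore()
store.write(1, b"important data")
assert store.read(1) == b"important data"

# Simulate silent on-disk corruption: the next read fails loudly
# instead of returning bad data.
store.blocks[1] = b"imp0rtant data"
try:
    store.read(1)
except IOError as e:
    print(e)
```

The point of the separation is that a disk returning plausible-looking garbage is detected on read, which a checksum stored alongside the data cannot guarantee.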
Compellent says it asked its customers what they wanted and a big priority was integrated, high-performance and scalable file access. Marketeer Bruce Kornfeld said: "ZFS fit the bill perfectly."
He was confident that the lawsuits over ZFS between NetApp and Sun, now Oracle, posed very low risk, saying that open source software lawsuits had reared their heads over the past ten years but nothing had come of them. Also: "Oracle is very committed to its open storage acquisition."
Compellent cites IDC as saying the file market is growing ten times faster than the block storage market, although it also quotes Gartner as saying the SAN market is currently five times bigger than the NAS market.
Kornfeld said: "Creating a new fileshare is very easy. Most everything is done automatically on the backend." Compellent says its customers should find that "unified SAN/NAS management simplifies provisioning and recovery of virtual servers in VMware, Microsoft, Citrix and Oracle environments".
Unified block and file storage has become very fashionable of late, with NetApp having helped make it mainstream, Pillar offering it since day one, and EMC looking to converge its CLARiiON block and Celerra file storage products. Many iSCSI arrays, from vendors such as Reldata and Nimbus, have NAS access facilities such that it's rare to find an iSCSI array these days that doesn't have file access.
Pillar CEO Mike Workman distinguishes between Pillar and NetApp's native unified storage and NAS head or gateway-based approaches, blogging that: "Most vendors stick a NAS gateway device in front of their block device. Interposing a gateway gives you two management interfaces and the management overhead of provisioning storage on both devices to get the job done once."
Compellent says it has a single pane of glass management and the provisioning is not difficult because most of it is automated and done by the backend SAN storage.
Customers can buy Compellent's unified storage with a two-node clustered zNAS setup or buy the clustered zNAS nodes only. The products will be generally available by the end of June.
Excluding taxes, maintenance and services, Compellent unified storage starts at £54,600 with two clustered zNAS nodes, two clustered SAN controllers, 8.7TB of SAS storage capacity and Compellent SAN software. Adding two clustered zNAS nodes to an existing Compellent SAN starts at £23,400 excluding taxes, maintenance and services. ®
"Here we come?" Unified storage already here, no?
The article states "Unified storage here we come." This is the second time in two months I've seen an article about other storage companies doing what Sun already did. Has anyone talked to Oracle about this lately? Or is this space for start-ups only?
Sun launched unified storage in 2008, out of project Amber Road. Today Oracle sells it as the Sun 7000 unified storage family, which incorporates SAS/SATA, flash, ZFS, DTrace, Fibre Channel, iSCSI... Perhaps there are differing definitions of what unified storage means, and of when it became available?
So Compellent's conception of "Unified Storage" is a traditional SAN array plus a couple of third-party NAS heads duct-taped together?
Given enough duct tape is available, why not add a couple of servers, a couple of switches and an administrator (male or female) to "create the world's first self-administering unified computing thingy"?
The unified query (YUK) paradigm, essentially a whopping big space containing lists of objects (i.e. tables), events (i.e. immutable, time-sensitive effective sequences) and generalised pools of objects, has come a long way in a short time. The functional results of unified query (i.e. the ability to optimally generate efficient but very complicated resultsets) can only really be achieved by having a super-huge data space and a parallel despatcher engine, which fires off multiple implementations of each of the resultset processes and terminates the lot when the most efficient one has finished.
The derived-data component of unified query also provides the ability to build and throw away enterprise service datasets (i.e. derived data which can be reported, but whose underlying data can't, and whose cost of generation is too expensive to bear multiple times anyway). An example of the latter would be the generation of average speeds of cars by registration number: the host system can know the locations and times, and compute running averages which can be replicated out individually or as a set, while withholding the roads upon which they were clocked.
A vast array of virtual memory can answer ridiculously complex queries, thanks to a scale of fast space allocation that conventional computing can't match.
It would be ridiculous to ridicule this in the fashion of Eigen, just because the maths hasn't yet got an application.
*** Resultset process - a set of processes which all do the same thing in different ways.
e.g. a simple example would be sorting unknown amounts of data. The despatcher contains a list of functions (e.g. shell, quick, bubble) that sort data and gives them all a go at it; they all work in different ways, so one of them is likely to be more efficient. When one finishes, the others are aborted, causing their enterprise datasets to be deleted and the space reclaimed; the one which finished has its data kept (or deleted, depending on cost-benefit analysis: a resultset is just a function of DRY (don't repeat yourself) data anyway). This is the exact opposite of current RDBMSes, which choose a single algorithm based on clever guessing.
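The despatcher described above can be sketched as a race between sort implementations: submit them all, take whichever finishes first, ignore the rest. This is a toy illustration, not anyone's product; note that genuinely aborting the losers, as the comment describes, needs process-level kill, whereas the thread-based sketch below can only ignore them once a winner exists.

```python
import concurrent.futures
import random

# Toy sketch of the "resultset process" despatcher: run several sort
# implementations on the same data in parallel, keep the first result
# to finish, discard the rest. Running threads cannot truly be aborted
# in Python, so cancel() is best-effort on not-yet-started work.

def bubble_sort(data):
    out = list(data)
    for i in range(len(out)):
        for j in range(len(out) - 1 - i):
            if out[j] > out[j + 1]:
                out[j], out[j + 1] = out[j + 1], out[j]
    return "bubble", out

def quick_sort(data):
    if len(data) <= 1:
        return "quick", list(data)
    pivot, rest = data[0], data[1:]
    _, lo = quick_sort([x for x in rest if x < pivot])
    _, hi = quick_sort([x for x in rest if x >= pivot])
    return "quick", lo + [pivot] + hi

def builtin_sort(data):
    return "builtin", sorted(data)

def despatch(data):
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(f, data)
                   for f in (bubble_sort, quick_sort, builtin_sort)]
        done, not_done = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        for f in not_done:
            f.cancel()  # best-effort "abort" of the losers
        winner, result = next(iter(done)).result()
        return winner, result

data = [random.randint(0, 999) for _ in range(200)]
winner, result = despatch(data)
assert result == sorted(data)
print("fastest implementation:", winner)
```

A production version of this idea would run each implementation in its own process so losers can actually be killed and their scratch space reclaimed, which is the behaviour the comment attributes to the despatcher.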
At enterprise scale, the sort will be one of a hierarchy of functional results, in a Lisp-like defun() though using LINQ-like functionality, with the hierarchy of functions providing the answer. This simply cannot be done using conventional architectures.