
Dell cranks vStart virty server stacks to 1000

Converged blade data center for raw iron

Updated With new servers, switches, and storage arrays in the field, the next thing that a modern IT supplier needs to do is create converged system stacks based on that new iron, focusing on the hardware and pitching the benefits of having everything pre-integrated.

Dell is hosting its Storage Forum in Boston, Massachusetts, Microsoft is hosting its TechEd 2012 event in Orlando, Florida, and CloudExpo is under way in New York City, and these are the occasions on which Dell chose to hang the announcement of two integrated stacks. One stack is aimed at virtual server workloads; the other is a data center-in-a-box setup designed to be raw iron for all kinds of workloads, aimed specifically at customers looking for high-density infrastructure.

Dell's vStart 1000 stacks for supporting scads of virtual machines

The vStart 1000 is the new top-end converged stack for supporting either VMware's ESXi or Microsoft's Hyper-V server virtualization hypervisors, and as the name suggests, it is pre-configured to support around 1,000 virtual machines in its full configuration.

Dell started building its prefabricated chunks-o-cloud back in April 2011, when it rolled out the vStart 100v cloud stacks based on three PowerEdge R710 servers, its EqualLogic PS6000XV disk array, and four PowerConnect 6248 48-port Gigabit Ethernet switches, for a cost of around $1,000 per VM. (The v in the vStart 100v is short for VMware and means it is preconfigured to run the ESXi hypervisor.)

The vStart 200v kicked the PowerEdge R710 server count up to six, the virtual machine count up to 200, and the price down to $845 per VM. There are also variants of the stacks that support up to 100 or 200 VMs atop Microsoft's Hyper-V – these are the vStart 100m and 200m stacks.

In September last year, Dell shrank the configurations, launching the vStart 50v and 50m in a half-rack baby cloud. The vStart 50m has two of Dell's Xeon-based PowerEdge R610 rack servers, four 24-port PowerConnect 6024 switches (providing redundant links for both the SAN and the LAN), and an EqualLogic PS4100XV disk array with 7.2TB of capacity, all in a half rack, with three years of Dell's ProSupport maintenance services slapped on it.

Dell slaps another R610 server in the vStart 50v to run VMware's vCenter Server management console. The vStart 50m costs $59,900, or just under $1,200 per VM, with Microsoft's Windows Server 2008 R2 Datacenter Edition (with unlimited virtualization) added in. The vStart 50v does not include vSphere software and costs $49,900 for the cloudy hardware.
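For the spreadsheet-inclined, the per-VM math behind those sticker prices is easy to check. Here is a minimal sketch using only the prices and VM counts quoted above; the little table of configs is purely illustrative, not a Dell price list.

# Rough per-VM arithmetic for the half-rack vStart bundles, using only the
# list prices and VM counts quoted in this article, not a Dell price list.
vstart_configs = {
    "vStart 50v": {"price_usd": 49_900, "vms": 50},  # hardware only, no vSphere licenses
    "vStart 50m": {"price_usd": 59_900, "vms": 50},  # includes Windows Server 2008 R2 Datacenter
}

for name, cfg in vstart_configs.items():
    per_vm = cfg["price_usd"] / cfg["vms"]
    print(f"{name}: about ${per_vm:,.0f} per VM")

# The vStart 50m works out to $1,198 per VM, the "just under $1,200 per VM" figure above.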

With the vStart 1000v and 1000m stacks announced today, Dell is shifting from rack servers to its new PowerEdge 12G blade servers while also moving to higher-end Force 10 Networks switches and higher-end Compellent disk arrays to make a more scalable cloudy box for running either ESXi or Hyper-V hypervisors.

The vStart 1000 starts with one M1000e chassis and comes in configurations with eight or sixteen of Dell's M620 blade servers. Dell puts in two PowerEdge R620 rack servers to be used as management consoles for System Center or vCenter, depending on the hypervisor you choose, and to run Dell's Virtual Integrated System (VIS) management tools.

The VIS stack includes Advanced Infrastructure Manager, an out-of-band physical and virtual server, storage, and network provisioning and management tool that Dell first OEMed in September 2009 from Scalent and then took control of through an acquisition in July 2010.

The VIS stack now includes VIS Director, a performance management and planning module, and VIS Creator, a self-service catalog for end users to deploy virty software stacks that Dell has OEMed from an unnamed third party. El Reg did ask, but Ben Tao, marketing director at Dell, said the company was not telling where this self-service catalog came from.

The vStart 1000 puts a single M1000e blade chassis in the rack with eight two-socket M620 servers, which are based on Intel's latest Xeon E5-2600 processors. The full rack also has a Dell Force 10 S4810 switch, which has 48 ports running at 10GE speeds and four 40GE uplinks, and a Force 10 S55 switch, which has 44 Gigabit Ethernet fixed RJ45 ports and four additional Gigabit Ethernet copper or fiber SFP ports. (Presumably this Gigabit Ethernet switch is used to link the management servers to the nodes.)

The rack is also stuffed with a dual-controller configuration of Dell's Compellent Series 40 storage array and has a Brocade 5100 top-of-rack Fibre Channel switch to link the server nodes to the Compellent arrays. The rack includes all of the necessary power distribution units, but UPS and KVM switches are not part of the bundle; you add those yourself if you want them.

If you fill that chassis out with 16 half-height M620 blades, you can support around 500 virtual machines, says Tao. Add a second M1000e enclosure and load it up with another 16 blades, and you can do around 1,000 VMs.
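Working backwards from Tao's round numbers, that is roughly 31 VMs per two-socket M620 blade. A quick sketch of the scaling arithmetic, assuming nothing beyond the 500-VMs-per-chassis figure quoted above:

# Back-of-the-envelope VM density for the vStart 1000, using the round
# numbers quoted above rather than any Dell sizing guidance.
vms_per_chassis = 500      # roughly, per Tao, for a fully loaded M1000e
blades_per_chassis = 16    # half-height M620 blades

print(f"~{vms_per_chassis / blades_per_chassis:.0f} VMs per M620 blade")  # ~31

for chassis in (1, 2):
    print(f"{chassis} chassis, {chassis * blades_per_chassis} blades: ~{chassis * vms_per_chassis} VMs")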

On the vStart 1000m, you get Windows Server 2008 R2 Datacenter Edition on the server nodes, which includes unlimited virtualization of Windows licenses and Hyper-V bundled in. VIS Creator is optional.

On the vStart 1000v, you get trial editions of the vSphere hypervisor stack and the vCenter management console software from VMware. Pricing was not divulged and will not be until the new bundles ship in July, but you would expect the vStart 1000 setups to be a bit pricier than the vStart 50, 100, and 200 machines. Perhaps as high as $1.5m for a stack supporting 1,000 VMs, or around $1,500 per VM. But Dell could – and probably should – keep the price in line with the other machines, at around $1,000 per VM, if it wants to make customers happy.

What will really make customers happy, says Tao, is that getting from order to running VMs is six times faster with a vStart machine than it is buying the piece parts from Dell and Microsoft or VMware and slapping them together yourself.

Converged blade data center

Because there are not enough different ways to say converged infrastructure or engineered systems or hardware stacks in the IT racket these days, the marketeers at Dell have come up with a new one: the Converged Blade Data Center.

This setup is all about raw compute and storage capacity and putting everything into a blade form factor for the maximum amount of density and integration. The server nodes are blades, the storage nodes are blades, and the switches are blades.

The compute element in the CBDC (a place where punk servers show how badly they play their instruments?) is the PowerEdge M420, which was announced in mid-May and which is a two-socket machine based on Intel's Xeon E5-2400 processor.

This is the cut-down version of the E5, with one fewer QuickPath Interconnect port between the sockets, fewer memory slots, and more limited I/O expansion; the E5-2400s also sport lower prices and are perfectly fine for a lot of jobs, particularly when you are trying to get the best bang for the buck rather than the most bang.

The M420 is a quarter-height, single-wide blade that can take two standard Xeon E5-2400 parts and support up to 192GB of main memory. You can put up to 32 of these M420 blades in a single 10U M1000e chassis, but for the CBDC, Dell is only putting 24 blades in the chassis so it can leave room for two double-wide, half-height EqualLogic PS-M4110 storage blades, each capable of holding 14TB of disk and linking to the server blades through a 10GE switch fabric.
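The slot arithmetic behind that 24-blade figure is easy to reconstruct if you treat the 10U M1000e as 32 quarter-height bays. This is just an illustrative sketch under the assumption that each double-wide, half-height storage blade eats four of those bays:

# Slot accounting for the Converged Blade Data Center chassis, treating the
# M1000e as 32 quarter-height bays (16 half-height slots, two M420s per slot).
# Assumption for illustration: each double-wide, half-height PS-M4110 storage
# blade occupies two half-height slots, i.e. four quarter-height bays.
total_quarter_bays = 32
storage_blades = 2
quarter_bays_per_storage_blade = 4

compute_bays = total_quarter_bays - storage_blades * quarter_bays_per_storage_blade
print(f"M420 compute blades per chassis: {compute_bays}")  # 24, matching the CBDC config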

Dell is slamming in its new Force 10 MXL blade switch, which was announced in April. The Force 10 MXL has six 40GE ports, each of which can be split into four 10GE ports for a total of 24 (or a mix of 40GE and 10GE ports as conditions dictate); it is a stackable blade switch, allowing up to six of the MXLs to be linked together within a single chassis and managed as a single logical switch.
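That 24-port figure follows from the usual four-way breakout on 40GE ports. Here is a throwaway sketch of the possible mixes, assuming each of the six ports is either left at 40GE or split four ways:

# Port fan-out on the Force 10 MXL: six 40GE ports, each of which can either
# stay at 40GE or be broken out into four 10GE ports. Per-port breakout is an
# assumption made for illustration.
QSFP_PORTS = 6
TEN_GE_PER_BREAKOUT = 4

for split in range(QSFP_PORTS + 1):
    print(f"{QSFP_PORTS - split} x 40GE + {split * TEN_GE_PER_BREAKOUT} x 10GE")
# Splitting all six ports gives 0 x 40GE + 24 x 10GE, the figure quoted above.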

Dell's Converged Blade Data Center stack

The big advantage with this CBDC configuration is the typical thing you have heard about blade servers for the past decade: ease of integration, reduction in cabling, and higher density computing.

Tao says that a single 10U chassis can provide around 384 virtual machines if you decide to use it for a private cloud. That is about three times the density of a current vBlock setup based on Cisco Systems' Unified Computing System blade servers. And you can set the whole thing up with three cables instead of the 20 to 30 you would need with a Cisco setup, according to Tao.
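Taking Tao's figures at face value, the density and cabling comparison works out as below; the vBlock numbers are simply the "about a third of the density" and "20 to 30 cables" claims above restated, not independent measurements.

# Density and cabling comparison as claimed by Dell's Ben Tao; the vBlock
# figures are this article's characterisation, not measured Cisco data.
cbdc_vms_per_10u = 384
cbdc_cables = 3

vblock_vms_per_10u = cbdc_vms_per_10u / 3   # "about three times the density"
vblock_cables = (20, 30)

print(f"CBDC: ~{cbdc_vms_per_10u} VMs per 10U chassis, {cbdc_cables} cables")
print(f"vBlock, as characterised here: ~{vblock_vms_per_10u:.0f} VMs per 10U, "
      f"{vblock_cables[0]} to {vblock_cables[1]} cables")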

The Converged Blade Data Center will be available in August. Pricing has not yet been set. The wonder is why there is not a vStart 1500v and 1500m setup based on this all-blade design. Perhaps there will be at some point. There's certainly nothing stopping customers from buying Dell's VIS management tools and either Hyper-V or ESXi to turn this raw iron into a private cloud.

Bootnote: An intrepid reader of El Reg points out that Dell did an OEM deal with cloudy startup DynamicOps in September 2010 and that bits of this startup's Cloud Automation Manager are what make up VIS Creator. DynamicOps was spun out of Credit Suisse, which developed the self-service portal for internal use. It is a wonder that Dell hasn't just bought DynamicOps already. ®
