Oracle fires up Virtual Compute Appliance for infrastructure clouds

One throat to choke – perhaps at a premium

If you are looking for one throat to choke for a virtualized x86 server stack, the right one is tough to find. But Oracle wants you to wrap your hands around Larry Ellison's neck, and is betting that its new Virtual Compute Appliance works so well and is such a good bargain that you won't squeeze.

The system makers that have welded together servers, switches, and storage do not control their own operating systems or hypervisors for those servers, even if they do have their own Unix or proprietary variants. And the dominant operating system suppliers on x86 iron – Microsoft and Red Hat, with a bit of action coming from SUSE Linux and Canonical – have hypervisors and control freaks to make them behave, but they do not make and support their own systems.

Oracle, says Adam Hawley, senior director of product management for virtualization products, will be the first tier-one system, operating system, and hypervisor maker who can claim to control a complete server virtualization stack.

The Virtual Compute Appliance is not a member of the Exa family, the clusters of x86 servers from Oracle tuned up to run its parallel databases or middleware atop its own variant of Linux – derived from Red Hat Enterprise Linux, but with a homegrown kernel that Oracle patches and supports without breaking application compatibility with RHEL. The Exa line is specifically geared for "extreme performance," while the appliances for running databases, Hadoop, and now hypervisors and virtual machines are more general-purpose.

The new infrastructure cloud appliance is based on Oracle's X3-2 servers, equipped with two of Intel's current "Sandy Bridge-EP" Xeon E5 processors – in this case with eight cores running at 2.2GHz. Each server node in the cluster is configured with 256GB of main memory running at 1.6GHz, plus two mirrored 900GB disk drives and a dual-port 40Gb/sec InfiniBand adapter card. There is also a Gigabit Ethernet management port on each server node.

The setup comes with two base compute nodes for hosting virtual server slices in a rack, expandable to a total of 25 nodes. There are two more identical server nodes in the rack, which are mirrored for high availability and run Oracle's management software.

The server nodes in the appliance are not meant to store virtual machine images on their puny local disks, but rather link out to a ZFS 7320 Storage Appliance with 24 of those 900GB disk drives. The drives are organized into two RAID groups of a dozen apiece, which are then mirrored, and after all of the Oracle software is loaded, about 6TB is left over for virtual machine images.
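
For the curious, those capacity figures check out as back-of-the-envelope arithmetic. Here is a minimal sketch in Python, assuming the layout described above; the overhead figure is inferred from Oracle's roughly 6TB number, not anything the company has published:

    # Sanity check on the ZFS 7320 capacity figures above.
    # Assumes 24 x 900GB drives split into two RAID groups of a dozen,
    # which are then mirrored against each other.
    drives = 24
    drive_gb = 900

    raw_gb = drives * drive_gb    # 21,600GB raw
    usable_gb = raw_gb / 2        # ~10.8TB once mirrored
    vm_space_gb = 6000            # roughly what Oracle says is left for VM images
    overhead_gb = usable_gb - vm_space_gb

    print(f"Raw: {raw_gb}GB, usable after mirroring: {usable_gb:.0f}GB")
    print(f"Implied software and filesystem overhead: ~{overhead_gb:.0f}GB")

In other words, mirroring halves the 21.6TB of raw capacity to about 10.8TB, and Oracle's software stack appears to eat something like 4.8TB of that before you get your 6TB for VM images.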

If you want to load up more VMs, you slide in another ZFS 7320 appliance; or, if you want to use non-Oracle storage, anything that speaks iSCSI or NFS can link back to the virty cluster. (Hawley doesn't suggest that you use third-party storage, of course.)

The Virtual Compute Appliance

The Virtual Compute Appliance also includes two Fabric Interconnect F1-15 I/O director switches, which Oracle got through its acquisition of Xsigo Systems in July 2012. This director switch, which is actually based on 40Gb/sec InfiniBand ASICs from Mellanox Technologies, has sixteen 10Gb/sec Ethernet ports coming out of the rack to handle so-called "north-south" traffic from the servers to the outside world.

Each server has a 40Gb/sec adapter that presents virtualized InfiniBand and Fibre Channel adapters to link the servers to each other and to the core network and, in the case of Fibre Channel, to reach external storage arrays attached to the I/O director switch.

The Fibre Channel modules are not yet supported, but InfiniBand links to the ZFS 7320 are. There are enough InfiniBand ports to handle 85 server nodes, which is more than three racks, but this configuration is not going to be supported initially, either, says Hawley. The setup includes a 36-port InfiniBand spine switch and two 24-port Gigabit Ethernet switches for management.
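
That 85-node ceiling squares with the rack counts given earlier. A trivial check, assuming 25 compute nodes plus the two management nodes per rack as described above:

    # Quick check on the "more than three racks" claim, using the
    # per-rack node counts described above (25 compute + 2 management).
    nodes_per_rack = 25 + 2
    max_nodes = 85

    print(f"{max_nodes} nodes / {nodes_per_rack} per rack = {max_nodes / nodes_per_rack:.2f} racks")
    # -> 3.15 racks, a shade over three full racks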

That's the hardware in the Virtual Compute Appliance. The software stack includes Oracle VM, Big Larry's implementation of the Xen hypervisor, as well as Oracle VM Manager, the control freak that is used to manage and monitor the hypervisor and to create and manage virtual machine images. The Oracle SDN software, which runs on the Xsigo I/O director switches and which creates virtual network links between VMs and physical server nodes in the cluster, is also tossed in.

Rather than use the open source OpenStack or CloudStack cloud control freaks to manage the virtual server clusters, Oracle has cooked up its own shiny new control software, given the perfectly obvious name of Oracle Virtual Compute Appliance Controller Software. This is the orchestration and automation layer for a single rack of virtual servers, and it has a GUI, as these tools tend to these days. If you want to manage multiple racks, Oracle's Enterprise Manager Cloud Control hooks into the new compute appliance control software.

Oracle supports Linux, Windows, and the x86 variant of its own Solaris Unix as virtualized guests on the Virtual Compute Appliance, and the company is obviously hoping that customers stick with its own Oracle Linux and Solaris distributions. It is not clear if there will be any price incentives for this.

Pricing information for the infrastructure cloud appliance was not made available, but it will be on the official Oracle price list when the appliance ships in September. "We will be very aggressive and right there with them," Hawley says of the competition, which includes Cisco Systems' Unified Computing System machines and the vBlock and FlexPod stacks built atop them, VirtualSystem and CloudSystem from HP, Flex System from IBM, and Active Infrastructure from Dell.

Moreover, Hawley says that while price is important, it will not be a determining factor. One way of interpreting that is that Oracle has a clever templating system and has automated the configuration of the cluster so you can fire it up and it configures itself in under an hour, meaning that Big Larry thinks he can charge a premium.

And there is, of course, that one throat to choke. How much would you pay for that? ®
