Sun says Big Iron still matters as it pushes N1
When the companies that sell and support low-end servers and operating system platforms that really only scale well on commercial workloads to four or eight processors (the latter if you are lucky) talk about the future, they talk about clustering.

Timothy Prickett Morgan
All of the major server and OS vendors have been trying for more than a decade to lash many inexpensive PC servers together to create the functional equivalent of the big dog, big iron SMP server running Unix or a proprietary operating system. The databases and middleware that drive all of these clustered applications have certainly matured and are useful for an increasing number of mission-critical applications. But, say executives at Sun, don't count out those big dogs quite yet, and don't think that Sun's N1 strategy is just limited to getting big racks of skinny servers to play nicely together like a big SMP machine.
That big iron is not going away was the main message of a conference call last week with Clark Masters, executive vice president for Enterprise System Products at Sun.
Masters, who came to Sun with the 1996 purchase of the Starfire Enterprise 10000 server line from SGI (which got it from Cray Research), has a lot of different spheres of control at Sun. And he has a vested interest in preaching that big iron is not dead just because companies are discovering the joys of clustering and grid computing involving relatively inexpensive servers.
Masters is in charge of every Sun server product that has a price tag of $100,000 or more (which means the Sun Fire 3800 through the Sun Fire 15000), all high performance computing (HPC) and visualization initiatives, and the Integrated Products Group, which builds the prefabbed, prestacked customer-ready systems and creates reference architectures for the various industry verticals where Sun plays.
Two or three years ago, during the height of the dot-com and Internet boom, Masters said, all that customers could talk about was trying to stay in front of burgeoning requirements for processor, memory, and storage bandwidth and they were not as concerned as they had been in the past about costs. So Sun and other vendors concentrated on building big iron.
These days, he says, all customers are talking about is lowering the total cost of ownership, dropping absolute costs, and trying to do more with less money and less iron. This doesn't mean that big servers are doomed; it just means that the role of big servers has to change. It has also meant that Sun has had to move beyond the core financial services and telecommunications sectors, which accounted for 60% or more of its sales during the boom years, and Masters says that Sun has done this by expanding into government, retail, and health services.

What he didn't say, and what seems obvious, is that companies outside of the financial services, telecom, and dot-com arenas had been clamoring for cheaper, easier-to-manage solutions all through the boom times because, quite frankly, they missed the boom when it came to IT spending. When the bubble burst, the companies that never had deep IT pockets started getting heard, because it was obvious that financial services and telecom companies had been binge buying IT capacity and would not be doing so again for quite some time.
The key to the N1 virtualization initiative as far as big iron is concerned, says Masters, is creating "virtual blades" within the big SMP servers that Sun sells generally as database and application servers for its largest customers or as compute nodes for high-end clusters where research institutions or the technical departments of corporations want to bring hundreds of gigaflops or teraflops of computing capacity to bear on number-crunching problems. The key to N1 will be to use software containers in Solaris to carve up a big box like a 72-way Sun Fire 15000 (which will become a 144-way machine with the dual-core UltraSparc-IV in the second half of 2003) into hundreds of virtual machines, each of which looks like a complete Solaris environment to the applications running inside it. IBM and HP have similar logical partitioning capabilities on their Unix boxes. If these logical partitioning capabilities work, the operational benefits of maintaining and administering one big Solaris box instead of clusters of many small Solaris boxes (which will also have software containers) may make the big box a better choice than a cluster.
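The bookkeeping behind this kind of partitioning can be sketched in a few lines. This is an illustrative toy model only, not Sun's actual container mechanism or any Solaris API; the `Box` and `Container` names are hypothetical. It simply shows how a 72-way machine can be committed to hundreds of containers when each container is entitled to only a fraction of a processor.

```python
# Toy model of carving one large SMP box into software containers.
# Hypothetical names (Box, Container) for illustration; not a Sun API.
from dataclasses import dataclass


@dataclass
class Container:
    name: str
    cpu_share: float  # fraction of one processor guaranteed to this container


class Box:
    def __init__(self, name: str, processors: int):
        self.name = name
        self.processors = processors
        self.containers = []

    def free_capacity(self) -> float:
        # Capacity not yet promised to any container.
        return self.processors - sum(c.cpu_share for c in self.containers)

    def carve(self, name: str, cpu_share: float) -> Container:
        # Refuse to promise more processor capacity than the box has.
        if cpu_share > self.free_capacity():
            raise ValueError("box is fully committed")
        c = Container(name, cpu_share)
        self.containers.append(c)
        return c


# A 72-way machine hosts hundreds of containers if each gets a
# quarter of a processor: 288 containers x 0.25 CPU = 72 CPUs.
box = Box("sf15k", processors=72)
for i in range(288):
    box.carve(f"app-{i}", 0.25)
print(len(box.containers), box.free_capacity())  # 288 0.0
```

The real work in N1, of course, is in the scheduler and the isolation between containers, not the arithmetic; the sketch only captures the capacity-accounting idea.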
The question is, of course, whether all of these initiatives will work and help stem the tide of applications off big boxes and onto distributed and less expensive machines (which usually have lower utilization and are therefore less efficient). A so-called entry server from this year has more oomph and capacity than a midrange server of three years ago, and this is a trend that has been true for decades.
"There's certainly an industry debate on this," says Masters. "If the high-end machines cannot share resources and become virtual blades, then the high end will be relegated to running only those things that cannot run anywhere else. The good news is that SMP is built to share - processors are built to share I/O and memory at the hardware level. It is much more of a challenge to create virtual blades on horizontally scaling machines than it is to do it on a vertically scaling machine."
The advent of logical partitions, virtual partitions, and software containers like those expected to be in production in Solaris 9, set against the difficulty of creating clustered Windows, Linux, or Unix entry servers and the applications that ride on top of them, is a testament to this truth.
But Sun is not putting all of its N1 eggs into one basket, with virtual blades only for big iron. "Sun is chasing both approaches, and the good news is that we are positioned to win no matter what happens," said Masters. "We are investing in both, as is IBM and HP."
The point, and one that all server vendors are having drilled into their heads night and day by their customers, is that stacking up racks of monolithic servers, each running a single application, is too costly an approach. "The world is by and large still one application per server," said Masters. "With N1, we will do dynamic provisioning, and we want to raise the bar and drive up the utilization and manageability of the machines." Specifically, Masters said that Sun was looking to increase the number of servers that a typical administrator can handle from 20 to 30 systems up to 300 to 500 systems, increase the amount of data a typical database administrator can manage from about one terabyte today to hundreds of terabytes, and increase server utilization rates to the 80% to 85% range, up from 10% to 15%. And perhaps most significantly, the N1 initiative wants to eliminate the kinds of expensive services that are needed to install and maintain clustered servers or partitioned servers - the kinds of places where IBM Corp and Hewlett-Packard Co are hoping to make lots of money.
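The targets Masters cites imply substantial consolidation, and the arithmetic is worth spelling out. The figures below come straight from the article; the midpoints and the calculation itself are illustrative back-of-envelope estimates, not Sun projections.

```python
# Back-of-envelope arithmetic on the N1 targets quoted above.
# Utilization and admin-ratio figures are from the article;
# midpoints are an illustrative assumption.

current_util = 0.125   # midpoint of today's 10%-15% utilization
target_util = 0.825    # midpoint of the 80%-85% N1 target

# If the total workload stays constant, raising utilization means the
# same work fits on proportionally fewer machines.
consolidation_ratio = target_util / current_util
print(f"one well-utilized server does the work of "
      f"~{consolidation_ratio:.1f} lightly loaded ones")

# Administrator leverage: from roughly 25 servers per admin to 400.
admins_before = 1000 / 25    # admins needed for 1,000 servers today
admins_after = 1000 / 400    # admins needed under the N1 target
print(f"admins for 1,000 servers: {admins_before:.0f} -> {admins_after:.1f}")
```

On these assumptions, the utilization target alone implies roughly a six-to-one consolidation, and the administration target a better than fifteen-fold reduction in headcount per server, which is why the services revenue Masters mentions is at stake.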