Orion delivers first 'personal cluster' workstation

Deskside Linux workhorse

There is something rather antiquated about many of the server clusters used today by engineers, scientists and media fiends.

Their high-powered computing machines are often cobbled together over many days either in a makeshift data center or an expensive room with raised floors and penguin-friendly cooling. Cables overwhelm the space. The machines boot slowly. And then when the whole system is finally up and running, the users squabble over who can use part of the cluster for what - renting compute time like the earliest mainframe customers.

That's why Orion Multisystems has made a move to modernize the cluster. The small California company has taken the idea of a personal computer a step further by delivering a personal cluster - a 96-processor workhorse that fits underneath a desk, plugs right into the wall and takes less than two minutes to boot.

"There was an explosion of interest in what could be done with high performance compute clusters, but over time it became impossible to talk about a standard cluster," said Colin Hunter, CEO of Orion. "Even though there has been tremendous development in what could be considered standard applications, there has never been a standard product for individual technical or creative professionals."

Orion will deliver two flavors of its workstation. A low-end system - shipping Oct. 1 - will have 12 processors, up to 24GB of memory and up to 1.4TB of storage. The high-end box - shipping in the fourth quarter - will have 96 processors, up to 192GB of memory and up to 9.6TB of storage. The small box starts under $10,000, while the larger box comes in under $100,000.

Almost amazingly for a cluster, the boxes have a simple on/off switch. The Orion engineers spent months tuning their Linux operating system and a host of technical computing and graphics software to boot up nearly as fast as a standard PC operating system.

The Orion workstations also run on Transmeta's 1.5GHz Efficeon processors. These low-powered chips coupled with numerous power-sensitive components allow the small cluster to consume less than 200 watts and the large cluster to consume less than 1,500 watts. Just for a bit of perspective, a typical microwave easily consumes 1,200 watts.
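Those figures work out to a remarkably small power budget per processor. A quick back-of-the-envelope check (the arithmetic is ours, using only the wattages and node counts quoted above):

```python
# Rough per-processor power budget, derived from the article's figures.
# The totals below are the quoted system maximums; the division is ours.
SMALL_NODES, SMALL_WATTS = 12, 200     # low-end box: under 200 watts total
LARGE_NODES, LARGE_WATTS = 96, 1500    # high-end box: under 1,500 watts total
MICROWAVE_WATTS = 1200                 # the article's point of comparison

small_per_node = SMALL_WATTS / SMALL_NODES   # ~16.7 W per processor
large_per_node = LARGE_WATTS / LARGE_NODES   # ~15.6 W per processor

print(f"12-way box: {small_per_node:.1f} W per node")
print(f"96-way box: {large_per_node:.1f} W per node")
print(f"96-way box draws {LARGE_WATTS / MICROWAVE_WATTS:.2f}x a typical microwave")
```

In other words, even the big box budgets well under 20 watts per processor - which is what lets it live under a desk rather than in a machine room.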

By making it possible to plug the workstation into a standard outlet, Orion has opened up its systems to myriad users not satisfied with what standard one- and two-processor workstations or custom clusters can deliver. Scientists in the biotech field can now place a cluster right in their labs. Media companies can now give their technicians almost supercomputer-class power at their desks, and engineers can now do complex modeling when they're ready instead of fighting for time on the company cluster.

Blast from the past

Orion has basically updated what companies such as SGI and Sun Microsystems once delivered in the 1990s by combining standard components to form a high-performing beast.

"You can spend a little more and get an incredible performance boost," said Horst Simon, director of the National Energy Research Scientific Computer Center at the Lawrence Berkeley National Laboratory. "It's kind of a back to the future idea if you think about what people would once spend for a powerful Sun workstation for their engineers and scientists."

There are, however, a couple of concerns surrounding Orion. For one, the company has not yet fully tested the 96-processor system. That box basically combines eight of the 12-processor systems on a shared, high-bandwidth midplane. All of the boards then link into a 10Gbps backplane, and every node can talk to any other node at 1Gbps. The Orion staffers don't expect any problems when linking the systems together, but the fact remains that the full box has yet to be proven.
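If each 12-node board does share that 10Gbps backplane link - our reading of the description, not a confirmed topology - the worst case is only mildly oversubscribed:

```python
# Oversubscription sketch for the 96-node box. Assumes (hypothetically)
# that all 12 nodes on a board burst over their 1Gbps links at once and
# share the board's 10Gbps backplane connection.
NODES_PER_BOARD = 12
NODE_LINK_GBPS = 1
BACKPLANE_GBPS = 10

demand = NODES_PER_BOARD * NODE_LINK_GBPS   # 12 Gbps worst case per board
oversubscription = demand / BACKPLANE_GBPS  # 1.2:1

print(f"Worst-case demand per board: {demand} Gbps")
print(f"Oversubscription: {oversubscription:.1f}:1")
```

A 1.2:1 ratio would be mild by cluster standards, but it is exactly the sort of detail that only testing of the completed box can confirm.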

Another possible issue is the use of Transmeta processors. There are many customers out there who rightly or wrongly look for the Intel or AMD brand for data center-class products. Transmeta has yet to prove that it can deliver new processors at a steady clip - a factor which could gate how fast Orion can advance its systems.

But it's not surprising that Transmeta ended up as the processor of choice for Orion.

Green computing

The early foundations for the system can be traced back to the Green Destiny cluster developed by three engineers at Los Alamos National Laboratory (LANL). That system combined up to 240 RLX blade servers powered by Transmeta chips into a supercomputer that could fit in a standard closet. One of the most attractive features of the cluster was that it could run without failure in a hot, dusty New Mexico warehouse instead of a super-cooled, specialized facility.

Chris Hipp, a blade server pioneer and co-founder of RLX, worked closely with LANL on Green Destiny and now serves as the VP in charge of applications at Orion. Other Orion co-founders include Hunter, who was VP of engineering at Transmeta, and Ed Kelly, who once served as CTO at Transmeta and who contributed to early SPARC processor designs at Sun.

The three individuals settled on the Transmeta chips, seeing them as the best performers per watt out there. They say Orion is not married to Transmeta, and one could envision the company picking up low-power Opteron or Xeon chips down the line.

In the near-term, Orion has all the look and feel of a computing pioneer. Besides modernizing the cluster, it is also making use of another growing trend in computing by coupling numerous low-power processors together instead of relying on energy-hungry chips that can process a single software thread well.

All of the major chipmakers, including Intel, AMD, Sun and IBM, have discussed a move to low-power multicore chips. These chips place anywhere from four to eight processor cores on a single die and link them up to loads of memory. This strategy helps bridge the gap between processor and memory performance, as it allows each core to stay busy instead of having a single-core chip crank away on data and then waste time waiting for memory. Sun is expected to lead the way with the first "radical" multicore design in 2006, with others following with more or less radical designs from 2007 on.
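The logic can be illustrated with a toy throughput model (ours, not any chipmaker's numbers): a core alternates between computing and waiting on memory, and several slow cores can interleave their stalls where one fast core simply sits idle.

```python
# Toy model: work items completed per unit time. Each item needs some
# compute time plus a memory wait; we assume stalls on different cores
# overlap perfectly and memory serves one request per wait period.
def throughput(cores, compute_time, memory_wait):
    per_core = 1 / (compute_time + memory_wait)
    memory_limit = 1 / memory_wait          # the memory system's ceiling
    return min(cores * per_core, memory_limit)

fast_single = throughput(cores=1, compute_time=1, memory_wait=4)  # one fast core
slow_many = throughput(cores=4, compute_time=2, memory_wait=4)    # four slower cores

print(f"One fast core:   {fast_single:.2f} items/time")
print(f"Four slow cores: {slow_many:.2f} items/time")
```

In this hypothetical, the four slower cores finish more work simply because the memory system never sits idle - the effect the multicore designs above are chasing.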

While Orion hasn't put numerous cores on a single die, it has found a way to link many processors together and surround them with plenty of memory. And, in typical start-up fashion, it has pulled this off well ahead of the big boys.

So far, Orion has been reluctant to say who exactly is beta testing its systems, although at least 10 companies have their hands on the 12-processor box.

Given the popularity of Linux clusters, it would seem that Orion has come up with a very interesting design at the right time. Trends in computing often start out in the labs and slowly make their way to corporate data centers or desks.

IBM, Sun, HP and Dell have all worked to build and ship ready-to-use clusters, hoping to simplify the cluster building process for customers. Those systems, however, tend to edge more toward the supercomputer realm where a single task can be worked on day and night. Orion is now taking that approach to a personal level. It's trying to satisfy the ever-present need among some users for a faster, better box.

In total, Orion has added a new level of ease-of-use to clusters and pushed the boundaries of green computing. It has delivered exactly what you hope for from a start-up, especially in the hardware market, by capitalizing on what should have been an obvious trend and producing a system that makes sense right now, as opposed to futuristic kit. The company also managed to combine commodity parts with strong in-house engineering to out-invent larger players while keeping its systems affordable.

We'll be keeping a close eye on the firm over the coming months and will bring updates on how the personal cluster idea is coming along. ®
