Purdue puts HPC cluster in HP PODs

Boilermakers of a different kind

Purdue University, the engineering school known by the nickname "The Boilermakers", has tapped Hewlett-Packard to build a 1,000-node HPC cluster for scientific research. Rather than put the cluster into a traditional data center, Purdue is stuffing the machinery into HP's POD containerized data center.

HP gave El Reg a peek at the Performance Optimized Datacenter back in July 2008, and started shipping it last June. The PODs are based on 20-foot and 40-foot metal shipping containers, and the 40-footer has enough room to house 3,250 1U rack servers or disk arrays with 12,000 3.5-inch disk drives – roughly the equivalent of 4,000 square feet of data center floor space. The power and cooling systems HP sells with the PODs can deliver up to 27 kilowatts per rack and keep the gear cool too.

Purdue is to buy an HP Cluster Platform 4000, a preconfigured cluster using 10 Gigabit Ethernet interconnect and based on the company's ProLiant DL165z G7 servers. The DL165z G7 machines are based on Advanced Micro Devices' "Magny-Cours" Opteron 6100 processors, which sport a dozen cores per socket. The servers have two sockets and offer up to 256 GB of main memory in their 1U chassis.
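Taking the figures above at face value – 1,000 two-socket nodes with 12 cores per socket and up to 256 GB of memory apiece – a quick back-of-the-envelope sketch gives the aggregate size of the machine (only the article's numbers are used here; the totals are our arithmetic, not Purdue's spec sheet):

```python
# Rough sizing for Purdue's Cluster Platform 4000 buy, using the
# per-node figures quoted in the article.
nodes = 1000
sockets_per_node = 2
cores_per_socket = 12          # AMD "Magny-Cours" Opteron 6100
max_mem_per_node_gb = 256      # per-node maximum for the DL165z G7

total_cores = nodes * sockets_per_node * cores_per_socket
total_mem_tb = nodes * max_mem_per_node_gb / 1024

print(total_cores)    # 24000 cores across the cluster
print(total_mem_tb)   # 250.0 TB if every node is maxed out
```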

You can cram the 1,000 nodes of Purdue's cluster – nicknamed "Rossman" after Michael Rossman, Purdue's famous physicist and microbiology researcher – into 24 racks, with plenty of room left over for a wide-screen LCD TV and a couple of big ole couches for watching football on Saturday afternoons when the Boilermakers are on the road.
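The rack math checks out, assuming standard 42U racks (an assumption on our part – the article only gives the node and rack counts):

```python
# Packing 1,000 1U server nodes into 24 racks.
nodes = 1000        # 1U servers, from the article
racks = 24          # rack count, from the article
rack_units = 42     # assumption: standard 42U racks

capacity = racks * rack_units
spare_units = capacity - nodes

print(capacity)     # 1008 rack units in total
print(spare_units)  # 8 rack units to spare across the row
```

Only eight spare rack units across 24 racks is a snug fit, so the leftover room for the couches is POD floor space, not rack space.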

John Campbell, associate vice president of academic technologies at Purdue, said the main reasons the university went for the POD approach were time and money. The school did not want to put up the funds to build a new data center or wait years to get it done. Campbell says HP can get the containerized data center up and running in a matter of months and at a fraction of the cost, although Purdue did not elaborate on just how much money it was saving. Campbell said in an email to El Reg that the university has one 40-footer now, is pouring concrete for a second one, and has room for four more in the containerized data center it has set up.

In addition to the new POD-based cluster, Purdue has just installed another cluster, nicknamed "Coates" after Ben Coates, the former head of electrical engineering and founder of the compsci program at the university. That cluster was noteworthy as the first 10 Gigabit Ethernet-based cluster to rank on the Top 500 supercomputing list.

The Coates cluster is based on a mix of quad-core Opteron processors spread across 989 HP ProLiant server nodes running Red Hat Enterprise Linux, with PBSPro for cluster management and a Condor grid system that hails from an open source project at Big Ten rival the University of Wisconsin. The Condor grid is linked to all the computing facilities at Purdue and, true to its name, scavenges cycles from over 20,000 processors. ®
