Behind iVEC’s ‘big science’ supercomputer

Putting Australia’s West on the map

Each of its 96 nodes has two six-core Intel Xeon X5650s, one NVIDIA Tesla C2050 GPU, and 48 GB of RAM, but the SGI “Fornax” supercomputer, opened in late September as part of Western Australia’s Pawsey Centre project, is still a test bed in some ways.
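Taken together, those per-node figures add up quickly. A rough sketch of the aggregate, using only the numbers quoted above:

```python
# Aggregate capacity of Fornax, using only the per-node figures quoted
# above: 96 nodes, each with two six-core Xeon X5650s, one Tesla C2050
# and 48 GB of RAM.
NODES = 96
CORES_PER_NODE = 2 * 6      # two six-core Xeons
GPUS_PER_NODE = 1           # one Tesla C2050
RAM_PER_NODE_GB = 48

print("CPU cores:", NODES * CORES_PER_NODE)          # 1152
print("GPUs:", NODES * GPUS_PER_NODE)                # 96
print("RAM (TB):", NODES * RAM_PER_NODE_GB / 1024)   # 4.5
```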

The demands of “big science” are so intensive, and the data sets so diverse across different communities, that even a “finished” system doubles as a development platform for new techniques and applications.

Located at the University of Western Australia, the iVEC@UWA machine is being managed and operated by iVEC, and is the second pathfinder in the Pawsey project (the other being iVEC@Murdoch, which came online last year). Among other things, the Pawsey Centre would ultimately operate the supercomputing facilities that would be built if Australia’s bid for the Square Kilometre Array astronomy project is successful.

However, while astronomy is the big national attention-grabber, Pawsey Centre systems architect Guy Robinson is at pains to emphasise that the Fornax supercomputer will carry workloads for a host of different scientific ventures, including geosciences, biosciences, materials science, chemistry, meteorology and climate science.

And that mix of users, Robinson told The Register, creates its own set of challenges – because while world+dog has big data sets and their correspondingly intensive computing requirements, none of the data sets looks quite the same, up close.

Astronomers, for example, need to be able to stream data into the centre at very high rates – Robinson spoke of 40 Gbps streams coming from each individual site in a radio astronomy array – and that has to be handled without disrupting the other users in the centre.

When the data has been pre-processed and stored – because, as Professor Bryan Gaensler of the Centre for All-Sky Astrophysics told The Register, the raw data has to be filtered or there would be too much to store – astronomers typically work with 6 TB files. “The signal rate coming down each individual telescope – those numbers are so frightening that I haven’t memorised them!” Robinson said.
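A back-of-the-envelope calculation shows why storing the raw signal isn’t an option. The 40 Gbps per-site rate is Robinson’s figure; the assumption that a stream runs around the clock is ours, purely for illustration:

```python
# Volume of a single 40 Gbps stream, assuming it runs 24 hours a day.
# The rate is from the article; the duty cycle is an assumption.
GBPS = 40
bytes_per_second = GBPS * 1e9 / 8             # 5 GB/s
tb_per_day = bytes_per_second * 86400 / 1e12  # 86400 seconds in a day

print(f"{bytes_per_second / 1e9:.0f} GB/s, roughly {tb_per_day:.0f} TB per day, per site")
# prints: 5 GB/s, roughly 432 TB per day, per site
```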

“Geosciences, on the other hand, may generate 100 TB data sets. So we have to build systems that can make their files available. We need to create a system with massive disk resources that aren’t I/O-bound, so that users can swap files in and out in a reasonable time.”

“Astronomers might want those files to churn quickly – ten to 100 minutes – while the geosciences user might want to churn its file monthly. But when they’re actually working with the data, the geosciences users still need to be able to get the next files they want.

“A 100 TB file takes a little while to move,” Robinson said, “so we’re working with computer sciences to come up with ways we can do it quickly.”
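How long “a little while” is depends entirely on the pipe. A quick sketch, with the caveat that the link speeds below are illustrative assumptions rather than Fornax specifications:

```python
# Time to move a 100 TB file at a few illustrative link speeds.
# The 100 TB figure is from the article; the link rates are assumptions.
FILE_TB = 100
file_bits = FILE_TB * 1e12 * 8

for gbps in (1, 10, 40):
    hours = file_bits / (gbps * 1e9) / 3600
    print(f"{gbps:>2} Gbps: {hours:.1f} hours")
# prints:
#  1 Gbps: 222.2 hours
# 10 Gbps: 22.2 hours
# 40 Gbps: 5.6 hours
```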

As those problems are solved, he says, geosciences could get the chance to turn their data over much more quickly – something that could open the door to new research techniques.

“One of the reasons for the system’s multiple InfiniBand connections is that we can tune the I/O to meet the requirements of [different user communities] – or reconstruct the file systems to suit their needs,” Robinson said.

The first of the dual InfiniBand networks gives each node access to the global 500 TB filesystem, while the second lets nodes reach the local disks on neighbouring nodes. The arrangement also keeps storage traffic and MPI (message passing interface) traffic separate.

File system challenges in “big science” data

Part of the problem posed by the huge datasets that Fornax users create, Robinson said, is that different researchers will be asking different questions of the same, or similar, data.

Astronomy provides a good example. “One astronomer might only want to look at a single [radio] frequency from a dataset that has thousands. Another might want to look only at one spatial region, but analyse all the frequencies.

“Bringing back the whole file to get one little piece of it isn’t the optimal way of working,” he said.

For example, searching one spatial region for all the frequencies captured by an instrument “might involve a million random accesses” of the tape archive – something which, for all the computing and I/O grunt deployed in the machine, is still slow and inefficient.
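A minimal sketch of the subsetting problem, assuming a simple frequency-major cube layout; the file name, dimensions and format below are hypothetical, not iVEC’s actual data or code:

```python
# A radio-astronomy "cube" modelled as a 3-D array of (frequency, y, x).
# Dimensions are kept tiny so the example runs; a real cube is terabytes.
import numpy as np

FREQS, HEIGHT, WIDTH = 64, 256, 256   # toy sizes; real values run to thousands

# Create a toy cube on disk, then memory-map it read-only so that slicing
# reads only the bytes actually needed rather than the whole file.
np.memmap("cube.dat", dtype=np.float32, mode="w+",
          shape=(FREQS, HEIGHT, WIDTH)).flush()
cube = np.memmap("cube.dat", dtype=np.float32, mode="r",
                 shape=(FREQS, HEIGHT, WIDTH))

one_frequency = cube[10]                   # one frequency plane, the whole sky
one_region = cube[:, 100:120, 100:120]     # one small region, every frequency

# The second slice touches every frequency plane. With a contiguous
# frequency-major layout that means a scattered read per plane, which is
# the "million random accesses" problem once the data sits in a tape archive.
```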

Nor is it useful to try to create different subsets of the same data to serve the different ways users might access it – because that would multiply the storage requirements too far, and “we can’t predict how many user types might be out there.”

“Getting your data out of the files can be quite a challenge, so we want to work smarter, rather than just accepting an old regime that focuses simply on moving lots of data, very quickly,” he said.

It’s emblematic of the way the centre hopes to work with computer science departments across Western Australia’s universities to solve problems that might otherwise get in the way of the science – without the scientists themselves having to divert too much of their own effort into computing problems.

As Robinson noted, a scientist isn’t rewarded for spending six months on data access issues that might only get him or her to the “real” problem they’re trying to solve. They should be able to devote themselves to the problems in front of them, with the underlying computing facilities as invisible as possible.

“They (scientists) don’t get judged on figuring out a new way to use a filesystem or get better I/O.

“So we’re always looking at how to come up with a system design, to get a set of components on ‘day one’ so that we can change the configuration as new requirements emerge.”

The pace of change

And that defines one of the greatest challenges Robinson identified in buying Fornax: how to specify a system that stays useful through a lifetime of three to five years.

“The pace of change is a handicap, in one sense. If you look at the technical offerings and the configuration challenges – things move so rapidly.

“So the question is ‘how do I buy something today, with a lifetime of two or three or five years, so that I don’t have to change the system and the way people work every year?’”

That makes flexibility part of the DNA of iVEC, he said, with constant attention on the relationship between how the system is designed as a whole and how things can be configured on each individual node or for each individual user.

Regardless of the outcome of the Square Kilometre Array bid, there’s still more big computing to come in WA, with a 1,000 square metre computing facility hosting a 50 petabyte data store due to come online next year.

Networking in the Fornax facility comprises a Cisco Nexus 7009 switch at the University of Western Australia providing Layer 3 services; two Cisco Nexus 5548 switches providing Layer 2 connectivity to the SGI system front-end; and eight-channel wavelength division multiplexers providing multiple 1 Gbps and 10 Gbps connections into iVEC’s Perth metro network.

iVEC is a joint venture between CSIRO and the four Western Australian universities – the University of Western Australia, Murdoch University, Curtin University and Edith Cowan University. Its facilities are spread across the Australian Resources Research Centre, the University of Western Australia and Murdoch University.

iVEC is responsible for managing the AU$80 million Pawsey Centre, which is due to come online in 2013. ®
