OpenStack's no science project, but does 'need to be glued together'
National Computational Infrastructure's Andrew Howard shares his experience running OpenStack at scale
Interview A year on from Gartner's assertion that OpenStack was a “science project”, The Register talked to the National Computational Infrastructure's Andrew Howard to see how one of Australia's biggest OpenStack deployments is faring.
With 30 petabytes of spinning rust in a 900-square-metre data centre, participation in NeCTAR, and connectivity via AARNet, Howard told The Register the NCI's software-defined network built on OpenStack is a life-changer for researchers.
Ten years ago, he said, most researchers worked with a high-end PC and hoped there was enough disk space to run their applications. If they moved outside the narrow parameters set by IT departments, they probably lived without a support desk.
Now, Howard says, “you run your compute and have your data stored in a national facility, you get more compute than you have in the local environment, and then you pull your results back across the network.”
“It changes where the power needs to be consumed, and gives the researcher more than they would have at a departmental or university level, and because it's funded federally, they get a scale beyond the capabilities of any particular institution.”
Howard's history stretches back to Australia's early Internet and takes in early OpenFlow development, and three-and-a-half years after the NCI first investigated OpenStack, he has an insider's view of how well it works today.
Compared to TCP/IP-based routing – which you could legitimately describe as the world's original “SDN” – the point of SDN is “much finer granularity on data flows”, Howard said.
That's a big thing for the NCI, because its scientific HPC workload is characterised by very big but relatively short-lived data flows, a large number of users, and facilities all over the country.