Petabyte-chomping big sky telescope sucks down baby code

Beyond the MySQL frontier

Robert Heinlein was right to be worried. What if there really is a planet of giant, psychic, human-hating bugs out there, getting ready to hurl planet-busting rocks in our general direction? Surely we would want to know?

Luckily, big science projects such as the Large Synoptic Survey Telescope (LSST), which (when it's fully operational in 2016) will photograph the entire night sky repeatedly for 10 years, will be able to spot such genocidal asteroids - although asteroid-spotting is just one small part of the LSST's overall mission.

Two years ago we spoke to Jeff Kantor, LSST data management project manager, who described the project as "a proposed ground-based 6.7 meter effective diameter (8.4 meter primary mirror), 10 square-degree-field telescope that will provide digital imaging of faint astronomical objects across the entire sky, night after night."

I caught up with Jeff again a couple of weeks ago, and asked him how this highly ambitious project is progressing. "Very nicely" seems to be the gist of his answer.

It might not make for the most dramatic of headlines, but given the scale and complexity of what's being developed, this in itself is a laudable achievement. In Jeff's words: "First, we have to process 6.4GB images every 15 seconds. As context, it would take 1,500 1080p HD monitors to display one image at full resolution.
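A quick back-of-the-envelope check (my arithmetic, not Jeff's, and assuming 16-bit raw pixels, which his figures don't state) bears out both the monitor count and the sustained ingest rate:

    # Sanity-checking the "1,500 HD monitors" figure and the data rate.
    # Assumption (mine, not from the article): 16-bit (2-byte) raw pixels,
    # so a 6.4GB image is ~3.2 gigapixels.
    image_bytes = 6.4e9
    bytes_per_pixel = 2                  # assumed, not stated
    pixels = image_bytes / bytes_per_pixel

    hd_pixels = 1920 * 1080              # one 1080p monitor
    print(f"{pixels / hd_pixels:.0f} monitors")  # ~1543, i.e. roughly 1,500

    # One image every 15 seconds:
    print(f"{image_bytes / 15 / 1e9:.2f} GB/s")  # ~0.43 GB/s sustained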

"The images must go through a many-step pipeline in under a minute to detect transient phenomena, and then we have to notify the scientific community across the entire world about those phenomena. That will take a near real-time 3,000-core processing cluster, advanced parallel processing software, very sophisticated image processing and astronomical applications software, and gigabit/second networks.

"Next, we have to re-process all the images taken since the start of the survey every year for 10 years to generate astronomical catalogs, and before releasing them we need to quality assure the results."

That's about 5PB of image data/year, over 10 years, resulting in 50PB of image data and over 10PB of catalogs. The automated QA alone will require a 15,000-core cluster (for starters), parallel processing and database software, data mining and statistical analysis, and advanced astronomical software.
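Those totals square with the per-image numbers. Working it through with my own assumed figures for observing time (neither the hours per night nor the usable nights per year appear in the article):

    # Rough survey-volume estimate from the per-image figures.
    gb_per_image = 6.4
    images_per_hour = 3600 / 15      # one image every 15 seconds
    hours_per_night = 10             # assumed observing window
    nights_per_year = 300            # assumed; weather and maintenance eat the rest

    tb_per_night = gb_per_image * images_per_hour * hours_per_night / 1000
    pb_per_year = tb_per_night * nights_per_year / 1000
    print(f"~{tb_per_night:.0f} TB/night, ~{pb_per_year:.1f} PB/year")
    # ~15 TB/night and ~4.6 PB/year -- consistent with the quoted 5PB/year,
    # and hence ~50PB of images over the 10-year survey.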

They now have a prototype system of about 200,000 lines of C++ and Python representing most of the capability needed to run an astronomical survey of the magnitude typically done today. Next, they have to scale this up to support LSST volumes. According to Jeff: "We hope to have all of that functioning at about 20 per cent of LSST scale by the end of our R&D phase. We then have six years of construction and commissioning to 'bullet-proof' and improve it, and to test it out with the real telescope and camera."

The incremental development and R&D mode the team is following could be called agile, although this is agile on a grand scale. Every six months to a year, they produce a new design and a new software release, called a Data Challenge. Each DC is a complete project with a plan, requirements, design, code, integration and test, and production runs.

Lessons learned

The fifth release just went out the door, and they've completely redone their UML-based design three times with the lessons learned from each DC. They're using Enterprise Architect to develop each model, following a version of the agile ICONIX object modeling process tailored for algorithmic (rather than use case driven) development. I've co-authored a book on the ICONIX process, Use Case Driven Object Modeling with UML: Theory and Practice.

ICONIX uses a core subset of the UML rather than every diagram under the sun, and this leanness has allowed them to roll the content into a new model as a starting point for the next DC.

Jeff explains: "After each DC, we also extract the design/lessons learned from the DC model to the LSST Reference Design Model which is the design for the actual operational system. That last model is also used to trace up to a SysML-based model containing the LSST system-level requirements."
