Original URL: https://www.theregister.com/2008/10/03/lsst_jeff_kantor/

Mapping the universe at 30 Terabytes a night

Jeff Kantor, on building and managing a 150 Petabyte database

By Matt Stephens

Posted in Software, 3rd October 2008 19:15 GMT

Interview It makes for one heck of a project mission statement. Explore the nature of dark matter, chart the Solar System in exhaustive detail, discover and analyze rare objects such as neutron stars and black hole binaries, and map out the structure of the Galaxy.

The Large Synoptic Survey Telescope (LSST) is, in the words of Jeff Kantor, LSST data management project manager, "a proposed ground-based 6.7 meter effective diameter (8.4 meter primary mirror), 10 square-degree-field telescope that will provide digital imaging of faint astronomical objects across the entire sky, night after night." Phew.

When it's fully operational in 2016, the LSST will: "Open a movie-like window on objects that change or move on rapid timescales: exploding supernovae, potentially hazardous near-Earth asteroids, and distant Kuiper Belt Objects.

"The superb images from the LSST will also be used to trace billions of remote galaxies and measure the distortions in their shapes produced by lumps of Dark Matter, providing multiple tests of the mysterious Dark Energy."

In its planned 10-year run, the LSST will capture, process and store more than 30 Terabytes (TB) of image data each night, yielding a 150 Petabyte (PB) database. Talking to The Reg, Kantor called this the largest non-proprietary dataset in the world.

Data management is one of the most challenging aspects of the LSST. Every pair of 6.4GB images must be processed within 60 seconds in order to provide astronomical transient alerts to the community. The Data Management System is built from a number of key elements to make this possible.
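
To put that deadline in perspective, here's a minimal Python sketch of the shape such an alert path takes. The stage stand-in and the budget check are our own illustrative assumptions, not LSST's actual design:

    import time

    ALERT_BUDGET_S = 60  # each 6.4GB image pair must clear in 60 seconds

    def process_image_pair(pair):
        """Stand-in for the real reduction steps: calibration, image
        differencing, source detection and alert generation."""
        return []  # no alerts in this toy version

    def handle_visit(pair):
        start = time.monotonic()
        alerts = process_image_pair(pair)
        elapsed = time.monotonic() - start
        if elapsed > ALERT_BUDGET_S:
            # a production system would raise the alarm, not just print
            print("budget blown: %.1fs > %ds" % (elapsed, ALERT_BUDGET_S))
        return alerts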

So what's a time-critical system of this magnitude written in?

The data reduction pipelines are developed in C++ and Python. They rely on approximately 30 off-the-shelf middleware packages/libraries for parallel processing, data persistence and retrieval, data transfer, visualization, operations management and control, and security. The current design is based on MySQL layered on a parallel, fault-tolerant file system.
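
As a flavour of what that persistence layer might look like from the Python side, here's a hedged sketch using the mysql-connector-python package. The Source table schema, host name and credentials are invented for the example, not taken from the LSST design:

    import mysql.connector  # assumes the mysql-connector-python package

    # Hypothetical Source table of per-exposure detections
    conn = mysql.connector.connect(host="db.example.org", user="lsst",
                                   password="secret", database="survey")
    cur = conn.cursor()
    cur.execute("""CREATE TABLE IF NOT EXISTS Source (
                       source_id   BIGINT PRIMARY KEY,
                       exposure_id BIGINT,
                       ra DOUBLE, decl DOUBLE, flux DOUBLE)""")
    rows = [(1, 42, 150.1, 2.2, 3.5e-29), (2, 42, 150.7, 2.9, 1.1e-28)]
    cur.executemany("INSERT INTO Source VALUES (%s, %s, %s, %s, %s)", rows)
    conn.commit()
    conn.close()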

Kantor added: "We are also prototyping with other open-source and proprietary databases, as well as with a Map Reduce-based approach similar to that in use at Google. We are also participating in a startup venture to create a new database engine specifically oriented at large-scale databases, especially those that contain scientific and image data."
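
The MapReduce idea itself fits in a few lines. Here's a toy, in-process Python version that counts detections per coarse sky cell; a production system would distribute the map and reduce phases across a cluster, but the division of labour is the same:

    from collections import defaultdict

    # map: emit (coarse sky cell, 1) for every detection
    def map_phase(detections):
        for det in detections:
            yield (int(det["ra"]), int(det["dec"])), 1

    # reduce: sum the counts per key
    def reduce_phase(pairs):
        counts = defaultdict(int)
        for key, n in pairs:
            counts[key] += n
        return dict(counts)

    detections = [{"ra": 150.1, "dec": 2.2}, {"ra": 150.7, "dec": 2.9}]
    print(reduce_phase(map_phase(detections)))  # {(150, 2): 2}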

The data will be available in formats compliant with the Virtual Observatory standards, as FITS images, and as RGB images (or something equivalent).
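
Reading a FITS image from Python is straightforward. A minimal sketch using astropy.io.fits (successor to the pyfits library of the era); the file name is hypothetical:

    from astropy.io import fits

    with fits.open("coadd.fits") as hdul:
        hdul.info()                    # list the HDUs in the file
        image = hdul[0].data           # pixel data as a numpy array
        header = hdul[0].header
        print(header.get("DATE-OBS"), image.shape)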

Providing 30TB of data a day to each and every potential user sounds about as easy and practical as juggling elephants one-handed.

Kantor explained: "At 1Gbps, 30TB would take 67 hours to download (without overhead). That is why the Data Access Centers exist, so users can access the data and analyze it without downloading large subsets. Rather than move the data to the processing code, we permit you to process the data nearby."
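
A quick sanity check of Kantor's arithmetic:

    # 30TB at 1Gbps, ignoring protocol overhead
    bits = 30e12 * 8          # 30TB expressed as bits: 2.4e14
    seconds = bits / 1e9      # at one gigabit per second
    print(seconds / 3600)     # ~66.7 hours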

One wonders how an automated system could be written to discover previously unknown classes of rare objects - part of the telescope's mission statement.

How do you program clairvoyance into a data analysis system? Kantor: "There are quite a few researchers pursuing the line that one can analyze large datasets statistically, and uncover outliers and anomalies of interest. This is very much a research topic and one that several LSST partners are pursuing.

"In addition, we are designing the software with the ability to extend it to new algorithms and data types easily. There is a tradeoff between flexibility and performance and we walk that line every day in the design."

Talking of design, agile process aficionados out there will be interested to hear that Kantor and his team are using the minimalist, UML-based ICONIX Process (a subject close to this writer's own heart) for their system and software requirements and design. The teams are geographically dispersed, so the LSST models are shared using Sparx Systems' Enterprise Architect (EA) version control integration capabilities. Individual packages are added to a central version control repository and these packages are then shared by several local EA project files.

Kantor adds: "For code, our development environment is based on the open source trac tool integrated with subversion for version control. This provides a source repository and browser, ticket system, and documentation wiki."

Measuring the success or failure of a project as massive and wide-ranging as the LSST, which will run over such a long period of time, could prove difficult. How would you know the project is providing value for money and producing useful information?

Kantor agreed: "That is always a tricky question for 'big science' projects. Typically it is measured in terms of professional papers created from the survey. Additional metrics have to do with educational impact and public impact.

"The Hubble Space Telescope gave us pictures of distant objects in the Universe that changed most people's perception of and interest in astronomy. LSST has the same potential. How do you measure that?"®