DataDirect plunks $100m on storage array disguised as a super

File servers running app code? So mad it might just work

Supercomputer storage supplier DataDirect Networks is setting up a $100m exascale computing lab.

The goal is to hit one quintillion floating-point operations per second, and DDN hopes this mind-boggling level of number-crunching will be reached by 2018. The company is going to direct its funds to:

  • Accelerating IO with a new file system, plus new middleware and storage tiering. This is needed to achieve "million-way application CPU parallelism" (a quick bit of arithmetic on what that implies follows after this list).
  • Merging compute, network and storage to put "pre-processing and post-processing routines natively within the storage infrastructure".
  • Improving energy efficiency.
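
To get a feel for the scale, here is a minimal back-of-the-envelope sketch. The 10 TFLOP/s per-node figure is purely an assumption for illustration, not anything DDN has announced.

# Back-of-the-envelope exascale arithmetic (assumed figures, not DDN's)
EXAFLOP = 1e18           # one quintillion floating-point operations per second
PARALLELISM = 1e6        # "million-way" application CPU parallelism
PER_NODE_FLOPS = 10e12   # assumption: roughly 10 TFLOP/s per compute node

per_stream = EXAFLOP / PARALLELISM       # rate each execution stream must sustain
nodes_needed = EXAFLOP / PER_NODE_FLOPS  # node count at the assumed per-node rate

print(f"Per-stream rate: {per_stream / 1e12:.0f} TFLOP/s")   # -> 1 TFLOP/s
print(f"Nodes at 10 TFLOP/s each: {nodes_needed:,.0f}")      # -> 100,000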

On the energy-efficiency point, DDN states:

With the emergence of storage-class memory and software tools, infrastructures can be built with fewer components compared to today’s disk-based technologies. These initiatives will serve to significantly reduce hardware acquisition costs but will also make data centres much more space and power efficient by reducing storage footprint by more than 75 per cent.

How would shrinking the footprint benefit a storage-array supplier? The only conceivable way is by executing exascale apps inside the data stores themselves, which ties back to the second point above: a converged compute, network and storage infrastructure.

This implies that DDN has to get compute and network elements into its existing storage stack, either directly or via partnership. DDN also wants to get Big Data analytics software into its arrays, again through ownership or partnership.
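
To make the "apps inside the array" idea concrete, here is a minimal, purely hypothetical sketch of the push-down pattern a converged box would enable: the host ships a small filtering routine to where the data sits and gets back only the reduced result, instead of dragging the whole dataset across the network. None of the names below (InStorageNode, the sample records) come from DDN.

# Hypothetical sketch of in-storage ("push-down") processing.
# None of these names come from DDN; they only illustrate the pattern.
from typing import Callable, Iterable, List


class InStorageNode:
    """Stand-in for a storage node that can run code next to its data."""

    def __init__(self, records: List[dict]):
        self.records = records  # data resident on this node

    def run(self, predicate: Callable[[dict], bool]) -> List[dict]:
        # The filter executes where the data lives; only matches
        # travel back to the host, not the full dataset.
        return [r for r in self.records if predicate(r)]


def query(nodes: Iterable[InStorageNode],
          predicate: Callable[[dict], bool]) -> List[dict]:
    """Fan the predicate out to every node and merge the reduced results."""
    results: List[dict] = []
    for node in nodes:
        results.extend(node.run(predicate))
    return results


if __name__ == "__main__":
    nodes = [
        InStorageNode([{"sensor": 1, "temp": 301.2}, {"sensor": 2, "temp": 287.0}]),
        InStorageNode([{"sensor": 3, "temp": 305.9}]),
    ]
    hot = query(nodes, lambda r: r["temp"] > 300.0)
    print(hot)  # only the pre-filtered records reach the host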

Privately held DDN must have healthy revenues to prop up an average $16.6m-per-year spend on research and development. The company has been selected by Intel to collaborate with the chip giant in Lawrence Livermore National Security's FastForward programme, which is sponsored by the US Department of Energy and investigates extreme-scale computing.

The company said its efforts will "focus on evolving the state-of-the-art in parallel file systems, including the Lustre open-source parallel file system, as well as more tightly integrating compute and storage platforms to achieve greater efficiency and information insight".
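
Lustre's defining trick, striping a single file across many object storage targets so clients can hit them in parallel, is already something you can drive from a client node. A minimal sketch, assuming a Lustre mount and the standard lfs utility; the path, stripe count and stripe size are arbitrary example values.

# Sketch: set Lustre striping on a directory from a client node.
# Assumes a Lustre mount and the standard `lfs` utility; the path and
# the stripe count/size are arbitrary example values.
import subprocess

target = "/mnt/lustre/checkpoints"   # hypothetical directory on a Lustre mount

# Stripe new files in this directory across 8 object storage targets (OSTs),
# 4 MiB per stripe, so large writes are spread over several servers at once.
subprocess.run(["lfs", "setstripe", "-c", "8", "-S", "4M", target], check=True)

# Print the resulting layout to confirm.
subprocess.run(["lfs", "getstripe", target], check=True)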

This confirms DDN's future is all about running apps in its arrays that process massive amounts of data.

DDN said:

The storage and IO research and development subcontract will focus on three main areas that together cover the exascale IO stack from top-to-bottom. Included in the stack is a new storage interface that tightly integrates with the HDF5 scientific data library and data model, a next-generation flash-optimised storage tier designed to accelerate peak IO loads in HPC environments, and a massively scalable storage interface designed to support the storage foundation requirements to achieve exascale infrastructure scalability.
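
The HDF5 piece, at least, is something anyone can play with today. Here is a minimal sketch of writing a chunked, compressed HDF5 dataset with the standard h5py bindings; the file name, dataset shape and chunk size are arbitrary, and this says nothing about whatever storage interface DDN is actually building on top of the HDF5 data model.

# Sketch: write a chunked, compressed dataset using the HDF5 data model.
# Requires the h5py package; file name, shape and chunking are arbitrary.
import numpy as np
import h5py

data = np.random.rand(4096, 4096).astype("f4")   # stand-in simulation output

with h5py.File("checkpoint.h5", "w") as f:
    dset = f.create_dataset(
        "temperature",
        data=data,
        chunks=(512, 512),       # chunked layout: the unit the IO stack moves
        compression="gzip",      # trade CPU for fewer bytes hitting storage
    )
    dset.attrs["units"] = "kelvin"   # self-describing metadata travels with the data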

We are told the FastForward programme is backed by a seven-lab consortium of Argonne National Laboratory, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, Pacific Northwest National Laboratory, and Sandia National Laboratories. Industry partners include AMD, Intel, and NVIDIA.

A final thought: could Intel be footing part of DDN's $100m investment bill? Intriguing, no? ®
