

The Tesla K20 GPU coprocessor card

Nvidia stretches Tesla GPU coprocessors from HPC to big data

GTC 2013 Graphics chip maker Nvidia has barely begun to put a dent in the traditional high performance computing segment with its Tesla GPU coprocessors and it is already gearing up to take on new markets. The next target is big data, and as with parallel supercomputing, Nvidia is hoping to get the jump on rivals Intel and AMD, which …
Open-mouthed Burmese python

Nvidia, Continuum team up to sling Python at GPU coprocessors

GTC 2013 The Tesla GPU coprocessor and its peers inside your Nvidia graphics cards will soon speak with a forked tongue. Continuum Analytics has been working with the GPU-maker to create the NumbaPro Python-to-GPU compiler. We all call it the LAMP stack, but it should really be called LAMPPP or LAMP3 or some such because it is Linux, …

Cluster padawans vie for place in Shanghai super showdown

ASC13 The 2013 Student Cluster Competition season is off to a roaring start judging by the high level of interest in the inaugural Asia Student Supercomputer Challenge (ASC13), which will kick off in Shanghai in mid-April. Right now, the judges are sorting through the 42 applications submitted by universities from a wide swath of the …

MapR smashes MinuteSort benchmark on Google Compute

While supercomputers and workstations have Linpack to rank their number-crunching performance, when it comes to sorting algorithms to rank Big Data systems, there is a collection of tests known as the Sort Benchmarks. And this year it looks like Hadoop is back on top after commercial distie MapR Technologies beat the MinuteSort …
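The MinuteSort rules are simple to sketch: sort as much fixed-size-record data as possible inside a 60-second window. Here's a toy, single-machine illustration of what that measures (this is not the official Sort Benchmark harness; real entrants are distributed clusters sorting terabytes of 100-byte records):

```python
import random
import time

def sort_throughput(num_records=200_000, record_bytes=100):
    """Time a sort of fixed-size records and report MB sorted per second.

    A single-box toy analogue of a MinuteSort-style measurement; scale the
    per-second figure by 60 for a (very rough) one-minute equivalent.
    """
    records = [random.randbytes(record_bytes) for _ in range(num_records)]
    start = time.perf_counter()
    records.sort()
    elapsed = time.perf_counter() - start
    mb_sorted = num_records * record_bytes / 1e6
    return records, mb_sorted / elapsed
```

The real benchmark's hard part is everything this sketch leaves out: shuffling the data between nodes and merging the per-node runs.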

EMC touts screeching Hawq SQL performance for Hadoop

EMC's Pivotal Initiative division made a big splash last week with the launch of its Pivotal HD distribution of Hadoop. This is not a normal Hadoop distribution, but one that takes the parallel guts of the Greenplum database and reworks them to transform the Hadoop Distributed File System (HDFS) into something that speaks …
SGI logo hardware close-up

SGI rejigs financing ahead of possible asset sale

Supercomputer and dense-pack server maker Silicon Graphics has rejiggered its credit facility with Wells Fargo Capital Finance ahead of a possible sale of intellectual property or other assets. In an 8K filing with the US Securities and Exchange Commission, SGI said that it had amended its credit agreement with Wells Fargo, …

Wheeeee... CRUNCH: DDN pushes out monster Hadoop appliance for biz

Supercomputer storage firm DataDirect Networks (DDN) has brought out a scale-out Hadoop storage array, the hScaler, to both ingest and digest Big Data using Hadoop, combining compute and storage in one platform. DDN claims this approach gets rid of the data-transfer bottlenecks that slow down Hadoop servers in the existing …
Chris Mellor, 27 Feb 2013
CSIRO test image PKS 0407-658

Australian supercomputer to use geothermal cooling

As Australia’s Square Kilometre Array Pathfinder (ASKAP) telescope takes shape, CSIRO has begun drilling in an unusual approach to cooling supercomputers. The petascale powerhouse needed by ASKAP is being built in water-short Perth, so instead of sucking nearly 40 million litres of water from the city’s supply, CSIRO plans a …

Red Hat has BIG Big Data plans, but won't roll its own Hadoop

Let's get this straight. Red Hat should package up its own commercial Hadoop distribution or buy one of the three key Hadoop disties before they get too expensive. But don't hold your breath, because Red Hat tells El Reg that neither option is the current plan. Red Hat is going to partner with Hadoop distributors and hope they …

Concurrent gives old SQL users new Hadoop tricks

Application framework specialist Concurrent has given SQL devs a free tool to get at data stored in Hadoop, without having to learn the intricacies of the trendy computational framework. The "Lingual" tool is an ANSI-standard SQL engine built on top of Concurrent's "Cascading" software, a Java application framework for Hadoop …
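Lingual's draw is that the query language is plain ANSI SQL while Cascading turns it into Hadoop jobs underneath. As a stand-in illustration of that appeal (using Python's built-in sqlite3, not Lingual's actual API), the same GROUP BY a SQL dev already knows replaces hand-written MapReduce code:

```python
import sqlite3

# A familiar ANSI-style aggregation query; with Lingual the engine
# underneath would be Cascading-on-Hadoop rather than sqlite3, but the
# SQL a developer writes looks the same. Table and data are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, clicks INTEGER)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("alice", 3), ("bob", 5), ("alice", 2)])
rows = conn.execute(
    "SELECT user, SUM(clicks) FROM events GROUP BY user ORDER BY user"
).fetchall()
print(rows)  # [('alice', 5), ('bob', 5)]
```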
Jack Clark, 20 Feb 2013
The XC30 supercomputer, formerly known as Cascade

Cray readies XC30 supers for Ivy Bridge and coprocessors

Supercomputer maker Cray has trotted out its financial results for 2012, and used the occasion to talk about its plans for its current year after closing out one of the best four-quarter runs it has put together in many seasons. On a conference call with Wall Street analysts, Cray CEO Peter Ungaro talked generally about Cray's plans …

Could you build a data-nomming HPC beast in a day? These kids can

Analysis Student cluster-building competitions are chock full of technical challenges, both “book learning” and practical learning, and quite a bit of fun too. I mean, who wouldn't want to construct an HPC rig against the clock and kick an opponent in the benchmarks? Here's what's involved in the contests. Whether you’ve been following the …

Scottish uni slams on the Accelerator to boost UK boffinry

The boffins who run two big supercomputers on behalf of the UK government and academic research institutions - as well as one smaller machine aimed at industrial users - have converted those machines into an HPC utility called Accelerator. And they want you to buy core-hours on their machines instead of wasting your money …

Eager students, huge racks - yes, undergrad cluster wrestling is back

2013 promises to be the breakout year for student cluster-building competitions – the most popular high-performance-computing-related sport in the entire world. As an indication that pitting undergraduates against the clock to construct powerful number-crunchers is now a credible event, last year the International Supercomputing …

Dell: Shhh, don't tell a soul, but the PC sector ISN'T doomed...

HPC blog Dell’s move to take itself private has the tech world buzzing. There’s a lot of talk about the motives behind the deal. Some say Dell is doing it to escape the quarterly visit to the Wall Street meat grinder, where either you meet (or exceed) their expectations or get ground into a fine slurry. Going private frees Dell of public …

Day of the Trifid: VPAC fires up new HPC cluster

The Victorian Partnership for Advanced Computing (VPAC), a consortium of Australian universities, has flipped the switch on a new 45.9-teraflop, $1.22 million HP-based cluster to cope with rising workloads from partners La Trobe University and RMIT, and its other customers. The Reg understands the 180-node, 2,880 core machine …

World's 'most green' supercomputer in red-hot battle between Intel, Nvidia

Analysis Non-profit consortium CINECA has deployed what may be the greenest supercomputer in the world at its Bologna centre in Italy. Called Eurora, the new machine claims it can perform 3,150 megaflops per watt, compared to the 2,499.44 achieved by Green-500 king the Beacon supercomputer at the National Institute for Computational …
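Taking the two efficiency figures in the story at face value, the gap is easy to put in percentage terms (a back-of-the-envelope comparison using the numbers above, not an official Green 500 calculation):

```python
# Claimed energy efficiency, in megaflops per watt, as reported above.
eurora = 3150.0
beacon = 2499.44
improvement = eurora / beacon - 1.0
print(f"Eurora's claimed efficiency edge over Beacon: {improvement:.0%}")  # ~26%
```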
Tim Anderson, 4 Feb 2013
Terminators

Euro boffins plan supercomputer to SIMULATE HUMAN BRAIN

The European Commission has selected the Human Brain Project (HBP) as one of its Future and Emerging Technologies and will send it up to €1.19b over ten years so it can build a supercomputer capable of simulating the human brain. The HBP wants to build a simulated brain because we don't know enough about our grey matter. The …
The Eurora supercomputer built by Eurotech and Nvidia

Italian 'Eurora' supercomputer pushes the green envelope

The "Eurora" supercomputer that was just fired up in Italy may not be large, but it has taken the lead in energy efficiency over designs from big HPC vendors like Cray and IBM. The new machine was built by Eurotech, a server maker with HPC expertise that is based in Amaro, Italy, in conjunction with graphics chip and GPU …
Stacking up chips to make tiny Daleks for the data center

Power-mad HPC fans told: No exascale for you - for at least 8 years

I recently stumbled upon a transcript of an interview with HPC luminaries Jack Dongarra (University of Tennessee, Oak Ridge, Top500 list) and Horst Simon (deputy director at Lawrence Berkeley National Lab). The topic? Nothing less than the future of supercomputers. These are pretty good guys to ask, since they’re …
An Atipa tech builds a supercomputer cluster in Kansas

Company you never heard of builds 3.4 petaflops super for DOE

Nature abhors a vacuum as well as an oligopoly, which is why upstart supercomputer maker Atipa Technologies may find itself having an easier time getting its foot into the data center door now that Cray has eaten supercomputer-maker Appro International. The company you've never heard of is Atipa, a division of PC and server …
Stanford's jet-noise simulation needed a million cores

Researchers break records with MILLION-CORE calculation

HPC blog Stanford’s Engineering Center for Turbulence Research (SECTR) has claimed a new record in computer science by running a fluid dynamics problem using a code named CharLES that utilised more than one million cores in the hulking great IBM Sequoia at once. According to the Stanford researchers, it’s the first time this many cores …
Stanford's jet-noise simulation needed a million cores

Stanford super runs million-core calculation

Stanford University engineers are claiming a record for the year-old Sequoia supercomputer, after running up a calculation that used more than a million of the machine’s cores at once. The work was conducted by the university’s Centre for Turbulence Research, seeking to get a model for supersonic jet noise that’s more …
SGI logo hardware close-up

SGI swings to a gain despite $50m in 'LMDs of profit destruction'

Supercomputer and cloud server maker Silicon Graphics ran a dog and pony show last week on Wall Street at the Needham Growth Conference, and because it was going to talk about its business the company had to release preliminary financial results for its second quarter of fiscal 2013 to avoid being busted for selective disclosure …
NOAA's Stratus supercomputer

2012 in supercomputing: Ceepie-geepies, a weak ARM and the need for speed

HPC blog A lot has happened in HPC over the past year. I would say that the speeding up of development of accelerators and the rising number of hybrid CPU+GPU systems are probably the most noteworthy trends we saw in 2012. Over the next year we’re going to see even more use cases for hybrid systems, and I expect to see much wider use in …
The Corvus Supercomputer at eResearch SA

Storage glitches fell Australian supercomputers

Supercomputers at two Australian research organisations have experienced substantial downtime after glitches hit their storage area networks. Western Australia’s iVEC experienced an outage, detailed here, that saw its Data Direct Networks (DDN) array and the 1152-core Fornax machine unavailable for around four days. iVEC’s …
Simon Sharwood, 15 Jan 2013
Research Data Storage Infrastructure (RDSI) logo

DDN grabs first slab of 100PB storage cloud

Data Direct Networks (DDN) will provide storage for a node of Australia's Research Data Storage Infrastructure (RDSI), a $AUD50m project aimed at creating a pool of storage the nation's researchers can use to house large quantities of data, the better to feed it into the nation's supercomputers and subject it to other forms of …
Simon Sharwood, 15 Jan 2013
cray cascade mountain

Cray beats out SGI at German HPC consortium

The Höchstleistungsrechnen (that's high-performance computing) is going to have a different brand name and architecture on it at the Norddeutscher Verbund für Hoch- und Höchstleistungsrechnen (HLRN) supercomputing alliance in Northern Germany, now that Cray has beaten out SGI for a big bad box that will have a peak theoretical performance in excess of 2 petaflops. …
IBM Watson QA Power7 cluster

Potty-mouthed Watson supercomputer needed filth filter

IBM's Watson supercomputer was smart enough to beat two human opponents on US quiz show Jeopardy!, but there is apparently some knowledge that the system is still too immature to handle – namely, the contents of the Urban Dictionary. Watson is perhaps the most sophisticated artificial intelligence computer system developed to …
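The fix for that kind of ingestion problem is an answer-screening filter. A minimal sketch of the idea (the blocklist entries and function here are hypothetical stand-ins, not IBM's actual code):

```python
# Hypothetical profanity filter of the kind Watson needed after swallowing
# the Urban Dictionary: screen candidate answers against a blocklist
# before they are spoken. The wordlist below is a made-up stand-in.
BLOCKLIST = {"frak", "smeg"}

def is_broadcast_safe(answer: str) -> bool:
    """Reject any candidate answer containing a blocked word."""
    words = answer.lower().split()
    return not any(w.strip(".,!?") in BLOCKLIST for w in words)
```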
China's Tianhe-1A Supercomputer

China to build ANOTHER 100 petaflops hybrid supercomputer by 2014?

We passed through the 10 petaflops barrier in the supercomputer racket last year, and the next station on the train to exaflops is 100 petaflops. China already admitted at last year's International Supercomputing Conference (ISC'12) shindig that it was working on a kicker to the Tianhe-1A hybrid CPU-GPU supercomputer, with the goal of having the …
New Mexico's Encanto supercomputer

Where do old supercomputers go to die? New Mexico

Moore's Law puts supercomputers out to pasture because power – not just the cost of electricity, but the availability of juice – is the biggest constraint at the big supercomputing centers. And sometimes the lack of budget helps lock the gate, and HPC cloud computing butchers the cow. That's the case with the massive 28-rack …

Cray misses revenue targets for Q4

Veteran supercomputer-maker Cray has warned Wall Street moneymen to expect lower-than-forecast results for its 2012 financial report. Cray has been absolutely clear that the multi-petaflopping "Blue Waters" and "Titan" supercomputers being installed respectively at the University of Illinois and Oak Ridge National Laboratory …

Mellanox trips on faulty InfiniBand cables in Q4

InfiniBand and Ethernet switch and adapter maker Mellanox Technologies said after the market closed on Wednesday that the company was not going to make its revenue targets for the fourth quarter and full 2012 year – in part because of the jittery economy but mostly because of a bug in InfiniBand cables. In a statement, Mellanox …
LLNL's Sequoia BlueGene/Q super being assembled by IBM

IBM taps Red Hat for cut-throat priced Linux on big supers

Big Blue is going to Red Hat for a Linux environment for its largest supercomputers, and it is mothballing its own LoadLeveler workload manager for x86 clusters in favor of the Platform LSF control freak that it acquired a little more than a year ago. It is no surprise that IBM has chosen Red Hat Enterprise Linux 6 as the Linux …
Fujitsu K supercomputer

Big bad boxes drive explosive growth for HPC in Q3

The server market as a whole is having its issues, with both virtualization and the jittery global economy holding down physical box counts – and therefore revenues – more than they might otherwise be. But the supercomputer market is chugging right along. Networks keep getting faster, virtualization has yet to touch its boxes, …

Free HPC cluster to good home

HPC blog Interested in getting your hands on some serious system hardware for free? Well, with a few provisos, you could get your hands on a nearly new HPC cluster. First off, you'll need to be in the research game - in a US- or Canada-based academic or government lab, or some other non-profit research institution. (We’re talking about …

Vid: EXTREME computer sports: Meet the cluster war winners

SC12 Video The 2012 Student Cluster Competition (SCC) is in the books and we have all of the results, right here, right now. First, as was released earlier, China’s Team NUDT ran away with the Highest LINPACK Award, an SCC record 3.014 Teraflop/s score. Their compatriots, Team USTC, locked in second place with 2.793 Teraflop/s, and the …

HPC cluster cowboys bag overall award, haul it home to Texas

SC12 Video I caught up with Team Longhorn just before they turned in their final results for the SC12 Student Cluster Competition (SCC). As you can see in the short video, they’re pretty cool and composed, but I thought I sensed a little bit of anxiety under the surface. As it turns out, they should have been more relaxed – they were …

FINAL NIGHT: Boston kids power down cluster compo systems

SC12 Early on the last day of the SC12 Student Cluster Competition, I hit a couple of morning meetings and then went wandering around the competition area. Most of the teams had at least two or three reps in the booths monitoring the systems and just sort of hanging around. They had turned in their final scientific results the night …

Team Utah grabs Mini Iron crown at little cluster compo

SC12 Until this year, the annual SC Student Cluster Competition focused entirely on seeing how much work teams of university students could wring out of 26 amps of juice. They can use any hardware/software combination that will run the required apps; the only limitation is that their configuration has to be shipping by the time of …

Team Boilermaker: We hammer the code... not the booze

SC12 Video We had a chat with Team Boilermaker on the last day of the SC12 Student Cluster Competition. While the team did visit several of the vendor parties the night before, they assert that they didn’t overindulge, and claim that the absence of some team members is due to meetings, not hangovers. I think I buy that explanation; …

NUDT on HPC battle: Total cluster supremacy - Who needs it?

SC12 Video Team NUDT (China’s own National University of Defense Technology) was all smiles when I stopped by their booth on the final day of the SC12 Student Cluster Competition (SCC). And what’s not to smile about? They had just won the LINPACK award with their record-breaking 3 Teraflop/s score and were considered a serious …