HPC
63 TRILLION maths ops a second - in 5 inches? Mm, show me
3U rack box is a $70,000 clown car of GPUs
Industry upstart: You know what high-end HPC needs? More DAY-GLO
HPC blog Thought we were going to say faster compute, dintcha?
Two new supers go live in Oz
Igor pulls the big switch
World's first petaflops super dumped on scrap heap
Moore's Law, not Wile E. Coyote, brings down Roadrunner
'Super' market tops $11.1bn, propped up by massive sales
But can high-end HPC keep growing like this?
Nvidia to stack up DRAM on future 'Volta' GPUs
GTC 2013 Over 1TB/sec of memory bandwidth can definitely play Crysis
Nvidia stretches Tesla GPU coprocessors from HPC to big data
GTC 2013 'Anything a CPU can do, a GPU can do better'
Nvidia, Continuum team up to sling Python at GPU coprocessors
GTC 2013 Teaching snakes to speak CUDA with forked tongue, but not forked code
Opinion
It's a CLUSTER-OFF: Asian students prep for tense, live HPC smackdown
HPC blog The first annual Asia Student Cluster Challenge (ASCC) culminates this week with a final round of competition that brings 10 university teams to Shanghai for a live cluster-off. The teams traveling to Shanghai made it past 32 other universities vying to compete in the live finale.
Power-mad HPC fans told: No exascale for you - for at least 8 years
I recently stumbled upon a transcript of a very recent interview with HPC luminaries Jack Dongarra (University of Tennessee, Oak Ridge, Top500 list) and Horst Simon (deputy director at Lawrence Berkeley National Lab). The topic? Nothing less than the future of supercomputers. These are pretty good guys to ask, since they're both intimately involved with designing, building, and using some of the largest supercomputers ever to walk the earth.
FINAL NIGHT: Boston kids power down cluster compo systems
SC12 Early on the last day of the SC12 Student Cluster Competition, I hit a couple of morning meetings and then went wandering around the competition area. Most of the teams had at least two or three reps in the booths monitoring the systems and just sort of hanging around. They had turned in their final scientific results the night before, so all that was left to do was wait for the results and then tear down their systems in preparation for the trip back home.
News
Cluster padawans vie for place in Shanghai super showdown
ASC13 Spotlight on undergrads' HPC coding skills
MapR smashes MinuteSort benchmark on Google Compute
Puts Hadoop Big Data muncher back on top of Microsoft
EMC touts screeching Hawq SQL performance for Hadoop
With Hive in one claw and an Impala in the other
SGI rejigs financing ahead of possible asset sale
NUMAlink shared memory interconnect not for sale, but could be licensed
Wheeeee... CRUNCH: DDN pushes out monster Hadoop appliance for biz
Big Data ingester and digester
Australian supercomputer to use geothermal cooling
CSIRO starts drilling to ready petascale super for SKA data-crunching duty
Dell: Shhh, don't tell a soul, but the PC sector ISN'T doomed...
HPC blog Let's ditch these shareholders, shall we?
World's 'most green' supercomputer in red-hot battle between Intel, Nvidia
Analysis Uni boffins demand more bang for their watt
Euro boffins plan supercomputer to SIMULATE HUMAN BRAIN
€1.19b for in-silico experiments to build robots driven by simulated people
Italian 'Eurora' supercomputer pushes the green envelope
Besting Cray and IBM in the energy efficiency game
Company you never heard of builds 3.4 petaflops super for DOE
Nature abhors a vacuum as well as an oligopoly, which is why upstart supercomputer maker Atipa Technologies may find itself having an easier time getting its foot in the data center door now that Cray has eaten supercomputer maker Appro International.
Researchers break records with MILLION-CORE calculation
HPC blog Stanford's Engineering Center for Turbulence Research (SECTR) has claimed a new record in computer science by running a fluid dynamics problem with a code named CharLES that utilised more than one million cores of the hulking great IBM Sequoia at once.
Stanford super runs million-core calculation
Stanford University engineers are claiming a record for the year-old Sequoia supercomputer, after running up a calculation that used more than a million of the machine’s cores at once.
