New Mexico bets future on promiscuous supercomputer
Tap our big box
Can a supercomputer save a state? Former Presidential candidate Bill Richardson sure seems to think so.
Richardson has been championing a massive Xeon-based system dubbed Encanto. The supercomputer currently holds the third slot on the Top500 list, which tracks the performance of large computers, and sits at the center of the New Mexico Computing Applications Center project.
New Mexico's bureaucrats want to turn this supercomputer into a for-rent system that educational organizations and businesses will pay to use. In so doing, they believe, scientists will flock to New Mexico to access world-class hardware.
"It is not simply a high-tech toy for elite scientists," Governor Richardson said during an event last month. "This project invests in our future."
Universities and others have looked for ways to rent space on large computers for decades now. We, however, can't remember too many instances when a state bet part of its future on the idea.
New Mexico dished out $11m for the Encanto system, which actually resides at an Intel facility. The supercomputer consists of more than 3,500 of Intel's four-core Xeon (Clovertown) processors, 28TB of memory and 172TB of storage. Did New Mexico - "the Land of Enchantment" - get a sweet, sweet discount from SGI and Intel for all this hardware? We're thinking so.
An appropriation pushed through in 2007 cleared the way for the supercomputer. The state put forward the $11m for hardware and another $3m to cover so-called gateways that will allow different groups of researchers to tap into the box. Initially, The University of New Mexico, New Mexico Tech and New Mexico State University will use Encanto with other academic bodies receiving access over time.
(How can "gateways" cost about 25 per cent of the entire supercomputer? Your guess is as good as ours. Apparently, we're talking about some really, really big workstations.)
Now the New Mexico Computing Applications Center (NMCAC) wants $6m more for other gateways and staff. Oh yeah, it needs about $2m a year to power and cool the supercomputer beast as well.
Despite relying on so many handouts to get this project going, some bureaucrats believe they're sitting on a goldmine - one that can provide jobs in a depressed New Mexico.
According to New Mexico Tech computer science professor and NMCAC education director Lorie Liebrock, the new supercomputer could become self-sufficient within five years by renting cycles. Government bodies and consultants could use the system for water modeling, city planning and forest fire simulations, while businesses could pay a bit to tap into a world-class supercomputer for large jobs.
"Other supercomputers like this in other states are all paid for by those states," Liebrock told a local paper. "This one is unique; it will be self-sustaining. Other states can't say that."
This all seems a tad far-fetched, though, given the high annual cost of running a system like Encanto and the fact that it will be a very average supercomputer - if there is such a thing - in five years' time.
But we'll give these New Mexicans plenty of credit if they can turn a profit on the Encanto giant.
The state thinks that having an open supercomputer will not only bring in business but encourage science-related jobs in the state.
New Mexico: We've got a really big box. Plug on in. ®
@Space & follow-ups
Good point on the cooling. The bigger point is CPUs, however: space-hardened hardware is, by today's standards, agonizingly slow with IBM's space-certified POWER chips topping out at approx. 200 MHz and Intel, IIRC, currently offering an 80486 for space use. Any current off-the-rack CPUs would be toast within a very short period of time due to all sorts of irradiation that we down here on terra firma are shielded from thanks to our atmosphere plus magnetic field.
So a space supercomputer would have approx. the effectiveness of, say... oh shucks, just go to Walmart and pick up their top-of-the-line office PC. It'll have more oomph and be lots cheaper to operate. Plus, far easier to repair.
Distributed computing with subsidies
With projects like seti@home and folding@home now time-tested and demonstrable supercomputing solutions, I would think the days of massive centralised supercomputers are numbered.
Now, the current limitation of distributed computing projects is that they only work if they capture the public imagination, such as the idea of helping to find alien civilisations. Such mundane tasks as city planning and infrastructure logistics aren't likely to induce millions of people to donate their CPU cycles to the cause - boring!
This problem, however, could be alleviated by offering to subsidise part of users' ISP bills in return for running the distributed computing software on their machines. Considering the millions of dollars spent on building and maintaining supercomputers, I'm certain that a cost/benefit analysis would indicate that a subsidy system would be a far cheaper way of effectively using distributed computing in a supercomputing application. If you set it up so that for every X work units processed we'll pay Y dollars toward your next internet bill, you'd find that a lot of people would join in the distributed system who otherwise would have no incentive to do so.

Of course, you'd have to impose a limit on the number of clients you allow to join in any given project, based on the computational requirements of the project, the timeframe for completion, and the budget. You'd then have a central website where people could sign up for these subsidised projects on a first-in-first-served basis. This system then means the only cost is the subsidy and central server maintenance, instead of a massive energy-consuming beast of a data centre that costs millions a year to run and maintain.
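The "X work units for Y dollars" scheme above can be sketched in a few lines. This is a hypothetical illustration only: the per-unit rate, monthly cap, and function names are all assumptions, not anything a real project like BOINC actually uses.

```python
# Hypothetical subsidy accounting for a distributed-computing project,
# sketching the "X work units -> Y dollars off your ISP bill" idea.
# All figures below are illustrative assumptions.

RATE_PER_UNIT = 0.02   # assumed payout: $0.02 per validated work unit
MONTHLY_CAP = 10.00    # assumed monthly cap on any one user's credit

def monthly_subsidy(validated_units: int) -> float:
    """ISP-bill credit earned for one month of contributed work units."""
    return min(validated_units * RATE_PER_UNIT, MONTHLY_CAP)

def project_capacity(budget: float, units_per_client: int) -> int:
    """Maximum number of clients to admit (first-in-first-served) so that
    total payouts stay within the project's subsidy budget."""
    cost_per_client = min(units_per_client * RATE_PER_UNIT, MONTHLY_CAP)
    return int(budget // cost_per_client)

print(monthly_subsidy(300))           # credit for 300 validated units
print(project_capacity(50_000, 600))  # clients a $50k budget supports
```

The cap and the capacity limit correspond to the comment's point that admissions must be bounded by the project's budget and computational requirements.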
Seti here we come!
I don't know, but New Mexico and supercomputers just make me think of the SETI program.