Google and Microsoft have nothing on - drum roll - the SuperNAP
A $350m data center giant in the desert
Exclusive

$500m. That's the going rate for a data smelter these days. You know, a facility run by a company such as Google or Microsoft that moves bits around and consumes more power than old-school metal processing plants.
Most companies in need of such horsepower go out of their way to build their computing centers wherever power is cheap. To date, that has meant tapping hydroelectric power in the Pacific Northwest or scheming tax breaks out of officials in places like Oklahoma or South Carolina to get a "per watt" edge. So imagine our surprise upon learning that one of the world's most tightly packed and energy-demanding data centers will go up in Las Vegas - a place where desert-bound casinos suck up huge amounts of electricity to fuel their neon signs, slot machines and suites.
In the coming months, a little-known technology giant called Switch Communications will open the SuperNAP. This 407,000 square foot computing compound will house servers and storage systems owned by many of the world's most prominent companies. And, unlike most centers of its kind, the SuperNAP will not rely on raised floors or liquid cooling systems to keep the hardware humming. Instead, it will rely on custom designs that allow it to sustain an astonishing 1,500 watts per square foot - or close to three times the industry standard.
(Those of you interested in the fantastic history of Switch Communications can check out our exclusive feature on the company here. Those of you with data center dreams will want to continue on with this piece.)
Switch has operated co-location facilities in Las Vegas for about eight years. The company lays claim to a unique set-up: it owns a huge networking facility where more than 20 of the US's major carriers funnel their traffic, along with a number of data centers that make use of all that bandwidth. These state-of-the-art computing centers have attracted a number of Fortune 100 companies, including technology and media heavies.
"In my opinion Switch has the finest data centers available anywhere," said David Matanane, the senior manager of hosted services at Cisco Systems.
The SuperNAP stands as the culmination of everything Switch has learned from these businesses to date.
The facility will make use of a custom Switch concept dubbed the T-SCIF or Thermal Separate Compartment in a Facility. The T-SCIF is sort of like a little shack for hardware. Customers slot their systems into the unit with the front half of the hardware sticking out into the main data center room and the back half sitting inside the T-SCIF. This approach makes sure that only cooled air reaches the front of servers and storage boxes, while all of the hot air is released into the sealed T-SCIF and then expelled through a series of ducts.
"We can do 500 or 600 per cent more cooling per cubic feet per minute than everyone else who designs their data centers with raised floors and cooling systems from Liebert," Switch CEO Rob Roy told us. "The raised floor kind of works against the laws of physics. Cold air does not want to fly up through a room. Everyone in the world knows that is probably not the right way to approach things."
With the T-SCIFs, Switch makes sure that cold air and hot air never intermingle. As a result, the hardware receives near-uniform cooling, with 68-degree Fahrenheit air rushing into the boxes.
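To put the airflow claims in perspective, air cooling is usually estimated with the sensible-heat rule of thumb Q (BTU/hr) ≈ 1.08 × CFM × ΔT (°F). A quick sketch using the SuperNAP's published 4,500,000 CFM figure and a hypothetical 25°F rise between the 68°F supply air and the T-SCIF exhaust (the return temperature is our assumption, not Switch's):

```python
def heat_removal_kw(cfm, delta_t_f):
    """Sensible heat carried away by an airstream.

    Uses the rule of thumb Q (BTU/hr) = 1.08 * CFM * delta-T (deg F),
    valid for air at roughly standard density, then converts to kW.
    """
    btu_per_hr = 1.08 * cfm * delta_t_f
    return btu_per_hr / 3412.0  # 1 kW = 3,412 BTU/hr

# SuperNAP's quoted airflow; the 25 F supply-to-exhaust rise is hypothetical
print(round(heat_removal_kw(4_500_000, 25)))  # roughly 35,600 kW of heat moved
```

The point of sealing hot air into the T-SCIF is visible in that formula: the bigger the ΔT you can sustain without mixing, the more heat each cubic foot per minute carries away.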
Beyond these cooling systems, Switch has set up a sophisticated power system inside its data centers. The company promises 100 per cent uptime thanks to a power distribution arrangement divided into Red, Blue and Grey systems. The company literally color-codes all of its gear, making sure employees only fiddle with gear on one color scheme per day. The hardware is then connected to at least two of the grids, and Switch says it can survive just about any type of outage - be it at the utility company or at Switch itself - because the company has access to a number of different suppliers.
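The redundancy logic here is simple enough to sketch: if every piece of hardware draws from at least two of the three color-coded grids, no single grid failure can take a cabinet offline. A minimal illustration (the cabinet names and feed assignments are invented for the example):

```python
GRIDS = {"Red", "Blue", "Grey"}

# Hypothetical cabinets, each fed from at least two color-coded grids
cabinets = {
    "cab-01": {"Red", "Blue"},
    "cab-02": {"Blue", "Grey"},
    "cab-03": {"Red", "Grey"},
}

def powered_after_failure(feeds, failed):
    """Cabinets that stay up: at least one of their feeds survives."""
    return {cab for cab, grids in feeds.items() if grids - failed}

# Losing any single grid still leaves every cabinet with a live feed
for grid in GRIDS:
    assert powered_after_failure(cabinets, {grid}) == set(cabinets)
```

The one-color-per-day maintenance rule follows the same reasoning: if technicians only ever touch a single grid at a time, human error is confined to a grid the hardware can afford to lose.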
To feed the SuperNAP, Switch will take this system to the next level, running what it calls a "power spine" down the center of the 407,000 square foot facility.
Switch has very close ties to the local energy concerns, letting it get power at about 5 to 6 cents per kilowatt hour. (The likes of Google can enjoy about 3 cents per kilowatt hour in some places thanks to tax breaks. Here's to you, local taxpayer. Google needs the money.)
The SuperNAP will eat up more power than three mega-casinos put together, but Roy isn't worried about running out of juice anytime soon. Las Vegas has access to power generated by the Hoover Dam and power plants being built to fuel California.
SuperNAP - By The Numbers
- 407,000 square feet of space
- 250 MVA Switch-owned substation
- 146 MVA of generator capacity
- 84 MVA of UPS supply
- 30,000 tons of system-plus-system cooling
- 4,500,000 CFM
- 30 cooling towers
- 1,500 watts per sq. ft. density
- 7,000+ cabinets
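Some of those figures are easier to compare once they're in common units. As a back-of-the-envelope conversion (one refrigeration ton is about 3.517 kW of heat removal):

```python
TON_TO_KW = 3.517  # 1 refrigeration ton ~ 3.517 kW of heat removal

cooling_kw = 30_000 * TON_TO_KW  # total cooling capacity in kW
cooling_mw = cooling_kw / 1000

print(round(cooling_mw, 1))  # about 105.5 MW of total cooling capacity
```

If "system plus system" means fully redundant (2N) cooling - our reading, not Switch's stated figure - then roughly half of that capacity would serve the live load at any one time, which sits in the same ballpark as the 84 MVA of UPS supply.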
To get all of the power and cooling into the SuperNAP, Switch has again turned to homegrown products.