Intel aims 30W Nehalem at 'microservers'

Two-core 'Clarksdale'

Intel will soon offer a two-core 'Clarksdale' processor rated at a mere 30W, aiming to slip this low-power chip into the super-svelte "microservers" it first trumpeted at its developer forum last month.

Speaking with reporters at an event in San Francisco this morning, Intel high-density computing boss Jason Waxman said the chip will be available "in a couple of months."

Chipzilla mentioned the new 30W Nehalem chip at IDF when it rolled out a reference design for its new breed of low-power microserver, and Waxman now tells The Reg the part will clock in at 2.26GHz.

The company's current reference system - made up of hardback-sized PCBs (printed circuit boards), each packed with a CPU and four memory DIMMs - uses a 1.86GHz, 45W quad-core 'Lynnfield' chip already on the market. "We're looking to define a new form factor that allows companies to come up with a uni-processor [machine] that's reasonably capable...and cost-effective and easy to deploy," Waxman said.

"We want this to become a new building block for the types of applications where you have lots of web servers or a hosting type of environment or something where you need many images of a server."

Intel hopes to hone these PCBs to the point where they idle at a total of 25W and top out at 75W when performance is cranked to 11. With the reference system Intel demonstrated today, sixteen of the PCBs slot into a master chassis that also houses storage drives.
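For a rough sense of scale, here's a minimal back-of-envelope sketch, assuming those 25W idle and 75W peak targets are per board and ignoring chassis overhead such as shared storage, fans, and power-supply losses (figures Intel hasn't given):

    # Back-of-envelope chassis power, using the per-board targets above.
    # Assumes 25W idle / 75W peak apply to each of the sixteen boards;
    # shared storage, fans, and PSU losses are not counted.
    BOARDS_PER_CHASSIS = 16
    IDLE_W_PER_BOARD = 25
    PEAK_W_PER_BOARD = 75

    idle_chassis_w = BOARDS_PER_CHASSIS * IDLE_W_PER_BOARD   # 400 W
    peak_chassis_w = BOARDS_PER_CHASSIS * PEAK_W_PER_BOARD   # 1200 W

    print(f"Per chassis: {idle_chassis_w} W idle to {peak_chassis_w} W flat out")

On those assumptions a fully loaded chassis would sit somewhere between roughly 400W at idle and 1.2kW with all sixteen boards cranked to 11.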

How is this different from a blade? "A blade system to me is when you have centralized management, a fabric, and you're trying to manage storage, networking, and compute all through a common interface," Waxman says. "The reality is that these [microservers] are just servers." In other words, a simpler, low-cost system for simpler, low-cost setups. "What we're really trying to do is drive low costs. You can come up with a system that gives reasonable performance that still drives low power."

And why not just virtualize? "For some service providers, virtualization is messy. You may need to have central storage for example," Waxman says. "And it may come down to comfort level. A lot of customers [of service providers] like to own a piece of metal. They want to have something that they know is dedicated to them. It's not necessarily justified, but some have the belief that it's a little bit more of a secure type of solution if they're the only one with root-access to the server." ®
