Intel aims 30W Nehalem at 'microservers'

Two-core 'Clarkdale'

Intel will soon offer a two-core 'Clarkdale' processor rated at a mere 30W, aiming to slip this low-power chip into the super-svelte "microservers" it first trumpeted at its developer forum last month.

Speaking with reporters at an event in San Francisco this morning, Intel high-density computing boss Jason Waxman said the chip will be available "in a couple of months."

Chipzilla mentioned the new 30W Nehalem chip at IDF in rolling out a reference design for its new breed of low-power microserver, and Waxman now tells The Reg it will clock at 2.26GHz.

The company's current reference system - made up of hardback-sized PCBs (printed circuit boards) packed with a CPU and four memory DIMMs - uses a 1.86GHz, 45W quad-core 'Lynnfield' chip already on the market. "We're looking to define a new form factor that allows companies to come up with a uni-processor [machine] that's reasonably capable...and cost-effective and easy to deploy," Waxman said.

"We want this to become a new building block for the types of applications where you have lots of web servers or a hosting type of environment or something where you need many images of a server."

Intel hopes to hone these PCBs to the point where they idle at a total of 25W and top out at 75W when performance is cranked to 11. With the reference system Intel demonstrated today, sixteen of the PCBs slot into a master chassis that also houses storage drives.
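
For a rough sense of what those targets add up to per chassis, here's a back-of-the-envelope sketch in Python. The 16-board count and the 25W/75W per-board figures come from Intel's stated goals above; treating chassis power as a simple sum of the boards (ignoring shared storage, fans and power-supply losses) is an assumption for illustration only.

    # Rough chassis power envelope implied by Intel's per-board targets.
    # Assumption: chassis power is just the sum of the boards; shared
    # storage, fans and PSU overhead are ignored for simplicity.
    BOARDS_PER_CHASSIS = 16   # boards slotted into the master chassis
    IDLE_W_PER_BOARD = 25     # Intel's stated idle target per PCB
    PEAK_W_PER_BOARD = 75     # Intel's stated peak target per PCB

    idle_chassis_w = BOARDS_PER_CHASSIS * IDLE_W_PER_BOARD   # 400 W
    peak_chassis_w = BOARDS_PER_CHASSIS * PEAK_W_PER_BOARD   # 1200 W

    print(f"Chassis idle: {idle_chassis_w} W, chassis peak: {peak_chassis_w} W")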

How is this different from a blade? "A blade system to me is when you have centralized management, a fabric, and you're trying to manage storage, networking, and compute all through a common interface," Waxman says. "The reality is that these [microservers] are just servers." In other words, a simpler, low-cost system for simpler, low-cost setups. "What we're really trying to do is drive low costs. You can come up with a system that gives reasonable performance that still drives low power."

And why not just virtualize? "For some service providers, virtualization is messy. You may need to have central storage for example," Waxman says. "And it may come down to comfort level. A lot of customers [of service providers] like to own a piece of metal. They want to have something that they know is dedicated to them. It's not necessarily justified, but some have the belief that it's a little bit more of a secure type of solution if they're the only one with root-access to the server." ®
