SGI to develop MIPS chips for Origin, Onyx

Life outside Itanium 2

ComputerWire: IT Industry Intelligence

The impending McKinley Itanium 2 announcement from Intel Corp has all the RISC/Unix vendors redrawing, or at least coloring in, their own product roadmaps, and workstation and HPC server vendor SGI is no different, Timothy Prickett Morgan writes.

SGI is widely expected to make a statement of direction that will see the company push Itanium-based machines employing open source systems and middleware software alongside its MIPS-based Origin servers and Onyx visualization systems (think of them as workstations created directly from slices of a parallel supercomputer and you'll get the right idea), which run the Irix variant of Unix.

Company executives have been taking a tour with the press and analyst communities to explain that the endorsement of Itanium 2 machines and open source software has not in any way undermined the company's commitment to 64-bit MIPS processors running Irix as its core strategic platform. Quite the contrary, in fact. SGI believes that its core high performance computing market is expanding fast enough to support both types of platforms, and that HPC customers will want to indulge in the two different platforms depending on their capacity needs and budgets.

When Silicon Graphics Inc got into dire financial straits a few years ago after a failed attempt to break into the Windows NT workstation market, the company in 1998 changed its name to SGI and spun out its embedded MIPS processor business as a separate entity.

Many people believed at the time that SGI was getting out of the business of designing the processors in its Unix machines, and many still unwittingly believe this today. (Maybe that's because that's what SGI was saying in 1998? - Ed, Reg.)

As the Itanium 2 processor looms large, SGI is taking the opportunity to remind those who have forgotten that it does in fact design its own 64-bit variants of the MIPS processors, as it has since the MIPS spinout. Its chip fab partner, NEC Corp, is committed to cooking up these chips using the latest, greatest technologies so SGI can create powerful, dense workstations and servers for the demanding technical workloads that HPC users have these days.

Like IBM Corp, Sun Microsystems Inc, and Hewlett-Packard Co, SGI has enhancements to its variants of the R series of 64-bit MIPS processors scheduled regularly over the next four years, and is, like these other RISC/Unix suppliers, working on advanced chip and server designs beyond this time.

SGI's job is somewhat simplified by the fact that its Origin 300 servers, which scale up to 32 processors in a single NUMA image, and Origin 3000 servers and Onyx 3000 visualization systems, which use NUMA to scale up to 512 processors in a single system image, are only targeted at HPC workloads rather than more generic commercial workloads like application or database serving.

Because SGI is focused on HPC performance, where memory and I/O bandwidth are perhaps as important as clock cycles and caches, SGI does not have to crank up the clock speeds of its MIPS processors as IBM, HP, Sun, and Intel must do to keep pace with each other as they target clock-hungry commercial applications. SGI wants to build powerful, dense HPC servers.

This is why SGI is committed to the MIPS processors it designs, which the company believes will yield more powerful and, more importantly, more dense Origin servers and Onyx visualization systems than those that could be built using alternative chips like the Itanium 2, which runs at 1GHz but which throws off too much heat to be packed densely in the racks and racks of servers that dominate HPC centers.

If anything, explains Addison Snell, product marketing manager for high performance servers at SGI, the company is committed to keeping the clock speed on its R series processors as low as possible. "SGI is focused on delivering sustained performance across a wide variety of technical workloads," he says.
"We're purposefully not getting into the megahertz race. It is not appropriate for the high performance computing market."

Snell says that at 600MHz, the core of the R14000A processor - designed by SGI and built using a 0.13 micron copper process by NEC - throws off about 17 watts of heat. He says that this is smack dab in the middle of the range of 15 watts to 20 watts that SGI targets for heat dissipation levels with each of its MIPS processors.

By contrast, the Sun UltraSparc-III core throws off 70 watts to 80 watts depending on the clock speed, while other RISC processors on the market and the future Itanium chips dissipate anywhere from 110 watts to 130 watts per processor core, according to Snell. That is obviously too much heat to allow processors to be packed tightly into massively parallel supercomputers, or even dense minisupers.
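Snell's figures make the density argument easy to check with back-of-the-envelope arithmetic. The sketch below scales the quoted per-chip numbers up to a fully populated 512-processor Origin 3000; the 120 watt figure is simply an assumed midpoint of the range Snell cites, not an SGI number:

```python
# Rough heat budget for a fully loaded 512-processor Origin 3000,
# scaling the per-chip figures quoted above linearly.
PROCESSORS = 512

mips_watts = 17    # R14000A core, per Snell
other_watts = 120  # assumed midpoint of the 110W-130W range Snell cites

mips_total_kw = PROCESSORS * mips_watts / 1000
other_total_kw = PROCESSORS * other_watts / 1000

print(f"512 x R14000A:   {mips_total_kw:.1f} kW")   # roughly 8.7 kW
print(f"512 x 120W chip: {other_total_kw:.1f} kW")  # roughly 61.4 kW
```

Roughly seven times the heat in the same cabinet space is the gap SGI is pointing at.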

The R14000 processor from SGI, announced in July 2001, was the first chip the company designed that moved from a 0.18 micron aluminum process to a five-layer 0.13 micron copper process. The R14000 ran at 500MHz and delivered a peak 1 gigaflops of number-crunching power per processor. Like earlier R series processors, it has 8MB of external L2 cache. The R14000 was a shrink of the 400MHz R12000 processor, which completed two floating point operations per clock cycle and thus delivered 800 megaflops of peak power. In February 2002, SGI announced the R14000A, the current top-end chip in its servers, which uses a seven-layer 0.13 micron copper process that allows the MIPS core to be shrunk enough so it can run at 600MHz instead of 500MHz. Snell says that SGI's installed base has moved to the 500MHz R14000s and is now moving ahead to the 600MHz R14000As.
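Those peak ratings follow from one simple formula: peak flops equals clock frequency times floating point operations completed per cycle, which is two per cycle for these R series chips per the R12000 figure above. A quick sketch of the arithmetic (the R14000A line is derived the same way, not a number quoted by SGI):

```python
def peak_gigaflops(clock_mhz, flops_per_cycle=2):
    """Peak FP rate: clock rate times FP operations completed per cycle.
    The R series chips discussed here complete two FP ops per cycle."""
    return clock_mhz * flops_per_cycle / 1000  # MHz x ops/cycle -> gigaflops

print(peak_gigaflops(400))  # R12000:  0.8 gigaflops (800 megaflops)
print(peak_gigaflops(500))  # R14000:  1.0 gigaflops
print(peak_gigaflops(600))  # R14000A: 1.2 gigaflops (derived, not quoted)
```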

Sometime in 2003, SGI and NEC will move the MIPS processor to a 0.11 micron, eight-layer copper process that will enable the MIPS chip to run at 700MHz and deliver 1.4 gigaflops of processing power. This chip is code-named "N0" and may be branded as the R16000.

In 2004, SGI will debut the "N1" processor, which will have two floating point units instead of one, an additional load/store unit, L2 cache memory (size unknown) on the chip die, L2 and L3 cache directories on chip, and a new microprocessor bus with four times the bandwidth of the current R series of chips. The quadrupling of bus bandwidth will be necessary because the N1 processor, which may be marketed as the R18000, will come in single-core and dual-core implementations. The N1 processors will be created using a nine-layer, 0.11 micron copper process and will have a core frequency of 800MHz. So a single-core N1 processor will deliver a peak 3.2 gigaflops of power and a dual-core N1 will deliver 6.4 gigaflops of peak processing power.

The "N2" processor that is set to debut in 2005 is still in the definition stages, and may be called the R20000. SGI says that the single-core version of this processor will, at 1GHz or higher clock speeds, deliver a peak 8 gigaflops of floating point performance, while the dual-core version will deliver a peak 16 gigaflops. These numbers seem to imply that the N2 chips will have four floating point units, each capable of performing two operations per clock, compared to the single FPU used in the R14000 and R14000A chips today.
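Running the same peak-flops arithmetic across the roadmap reproduces SGI's quoted numbers. This is only a sketch: the N2 FPU count is the inference drawn above from SGI's peak figures, not something SGI has stated.

```python
# Peak gigaflops = clock (MHz) x FPUs x 2 FP ops per FPU per cycle x cores
# The N2 entries assume four FPUs, inferred above from SGI's peak numbers.
roadmap = {
    # name: (clock_mhz, fpus, cores)
    "N0/R16000":      (700, 1, 1),
    "N1 single-core": (800, 2, 1),
    "N1 dual-core":   (800, 2, 2),
    "N2 single-core": (1000, 4, 1),
    "N2 dual-core":   (1000, 4, 2),
}

OPS_PER_FPU_PER_CYCLE = 2  # each FPU completes two FP ops per clock

results = {name: mhz * fpus * OPS_PER_FPU_PER_CYCLE * cores / 1000
           for name, (mhz, fpus, cores) in roadmap.items()}

for name, gigaflops in results.items():
    print(f"{name}: {gigaflops:.1f} peak gigaflops")
```

The output matches the article's figures: 1.4 gigaflops for N0, 3.2 and 6.4 for N1, and 8 and 16 for N2.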

© ComputerWire
