Intel: 3D Web to save HPC
The super killer app
3D in fashion
To make the 3D Web a little more human and a little more appealing to businesses, Rattner trotted out Shenlei Winkler, a fashion designer and chief technology officer at the Fashion Research Institute, which has created a 3D immersive clothing design simulator. Winkler said that the $1.7 trillion apparel industry is largely still uncomputerized, and designers still work with sketches that they send overseas to factories (usually in Asia) that mock up prototypes.
Then designers see how the clothing looks, make changes, and get another set of prototypes made. By doing a better job of simulating human bodies and cloth types, FRI's 3D clothing design system has slashed design time by 75 per cent and cut physical clothing sample costs by 65 per cent. Winkler said that what clothing designers really want is to create clothes in real time with actual customers, simulating all of the sophisticated movement of cloth on avatars of specific people and staging a virtual catwalk.
Here's the problem. Rattner showed a pretty slick simulation of a piece of silk cloth falling onto a wooden pedestal and then slipping to the floor. While not a simulation that would fool the human eye, it was nonetheless much slicker than anything you will see in a video game or a virtual world today. But on a cluster of servers, it took six minutes to calculate each frame of the simulation.
"That is pretty damned slow," Rattner said. "HPC community, how are we going to do that in real time?"
One answer that you can expect Intel to give is the coupling of its Xeon processors to Larrabee graphics co-processors. This being a tech show, and Rattner being CTO, there had to be some chip to show off, so Rattner brought a workstation equipped with a single Larrabee co-processor and put it through its paces on some benchmarks.
On the SGEMM single precision, dense matrix multiply test, Rattner showed Larrabee running at a peak of 417 gigaflops with half of its cores activated (presumably the 80-core processor the company was showing off last year); with all of the cores turned on, it hit 805 gigaflops. As the keynote was winding down, Rattner told the techies to overclock it, pushing a single Larrabee chip to just over 1 teraflops, which is the design goal for the initial Larrabee co-processors.
Here's the next problem. Sparse matrix math is what simulations of cloth and water commonly need, and on that test a Larrabee chip that was not overclocked managed between 7.9 and 8.1 gigaflops, depending on the test and the size of the matrices.
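That gap between dense and sparse flop rates is one you can reproduce on any machine: a dense multiply streams through contiguous memory and keeps the arithmetic units saturated, while sparse math spends most of its time chasing indices and making irregular memory accesses. A minimal sketch in Python, assuming NumPy and SciPy are installed (the matrix size and density here are arbitrary illustrations, not Rattner's benchmark):

```python
import time
import numpy as np
from scipy import sparse

n = 1024

# Dense SGEMM: roughly 2*n^3 flops over contiguous memory. The BLAS
# library behind NumPy keeps the arithmetic units busy, so the achieved
# rate approaches the chip's peak.
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)
t0 = time.perf_counter()
c = a @ b
dense_gflops = 2 * n**3 / (time.perf_counter() - t0) / 1e9

# Sparse matrix-vector product: a matrix that is 99 per cent zeros.
# Each nonzero costs an index lookup and a scattered memory access, so
# the useful flop rate collapses even though far less arithmetic is done.
m = sparse.random(n, n, density=0.01, format="csr", dtype=np.float32)
x = np.random.rand(n).astype(np.float32)
t0 = time.perf_counter()
y = m @ x
sparse_gflops = 2 * m.nnz / (time.perf_counter() - t0) / 1e9
```

On a typical workstation the dense multiply will report a far higher useful flop rate than the sparse product, which is the same wall Larrabee hit: 805 gigaflops dense, around 8 gigaflops sparse.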
How many Larrabee chips will we all need to buy to simulate ourselves in virtual worlds? How many will be needed to simulate those virtual worlds? Rattner did not say.
But what he did say is that the Ct dialect of C++ that Intel has created will be going into beta soon to help with the parallelization of C++ code to run on multicore and multithreaded processors and, more importantly, to spread code across CPUs and GPU-based co-processors in workstations and servers to maximize performance as transparently as possible. Ct will work in conjunction with the CUDA environment from Nvidia for its GPUs and with the OpenCL environment being pushed by Advanced Micro Devices and others.
Intel is also tackling the problem of sharing data between Core and Xeon CPUs and Larrabee GPU co-processors. Future Core and Xeon chips will be able to create a virtual shared memory pool that both the CPU and GPU can access, so datasets are not crunched down, serialized, and moved over the PCI-Express bus from the CPU to the GPU and then back again after calculations are done. The shared virtual memory lets the CPU and GPU work off the same data in sequence without any movement, which should radically improve performance and smooth out simulations.
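The payoff of that shared pool can be sketched with an illustrative analogy (not Intel's actual mechanism): below, pickling a NumPy array stands in for serializing a dataset over PCI-Express, while a second thread, which shares the process's address space, stands in for a co-processor working on the very same buffer.

```python
import pickle
import threading
import numpy as np

frame = np.random.rand(1_000_000).astype(np.float32)  # one simulation dataset

# Copy model (a discrete co-processor today): serialize, "send" the data
# over the bus, compute, then send the result back -- two full copies
# per simulation step.
wire = pickle.dumps(frame)                    # CPU -> bus
on_gpu = pickle.loads(wire)                   # lands in device memory
on_gpu *= 0.5                                 # the co-processor's kernel
result = pickle.loads(pickle.dumps(on_gpu))   # bus -> CPU

# Shared-memory model: the "co-processor" (here, just another thread)
# runs the same kernel on the same physical buffer; the CPU sees the
# result with zero copies and zero serialization.
def coprocessor_kernel(buf):
    buf *= 0.5

t = threading.Thread(target=coprocessor_kernel, args=(frame,))
t.start()
t.join()
```

Both paths produce the same answer; the shared-memory path just skips the two trips over the bus, which is exactly the overhead Intel says the virtual shared pool eliminates.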
The 3D Web, says Rattner, will also require open standards. People will want to create an avatar once and teleport it to any world and be able to bring all their virtual stuff with them.
"There is no standard to move between virtual worlds, and this should give you a touch of deja vu," Rattner said. In the late 1980s, when online services like CompuServe, AOL, and Prodigy were being launched and the Web as we know it did not exist as a commercial entity, it was HPC researchers like Tim Berners-Lee at CERN and Marc Andreessen of the University of Illinois who cooked up the Web interface and perfected the Web that made it useful.
"It can be our job again to bring order to this chaos," Rattner declared.
Crista Lopes, a researcher at the University of California at Irvine, has created an open source simulation environment that is extensible and modular (meaning you can yank out and replace components to, say, swap in the physics engine needed for simulating cloth). This simulation environment, called OpenSim, has already been used to create an interconnected set of worlds called HyperGrid that allows avatars to move from world to world, retaining their identities and virtual stuff.
Welcome to the World Wide Waste. Or the Matrix. I think I prefer to fight SkyNet. ®