Chip start-up could ignite Blade PCs
Living the PC lifestyle sans the space heater
Comment Stealthy chip start-up Teradici Corp. of Burnaby, British Columbia, emerged from behind the curtain on Tuesday to reveal its long-anticipated semiconductor fix for the remote PC desktop dilemma. The product, aimed at OEM systems manufacturers, consists of a pair of chips designed to overcome the shortcomings of existing Blade PC solutions.
Several systems companies have already addressed this market, some with reasonable success. ClearCube was the original Blade PC pioneer; HP, and now IBM, have followed, leveraging their well-engineered blade server portfolios as base platforms. All three have employed thin clients in tandem with their blades, and ClearCube also offers an option that uses a proprietary "homerun" cabling scheme. Now IBM and ClearCube have announced products incorporating the new chips from Teradici. The market is still in its infancy, and some feel - I am one - that Blade PCs will not gain wide adoption until the thin client issues are removed and users can enjoy a full PC experience.
The recent Blade Computing Revolution means that IT departments are now getting used to the idea of managing large numbers of servers in a compact physical 'blade' form factor. This is a big driver. The major benefit of blades is that they share their infrastructure - power, cooling, networking and the enclosure - enabling far superior efficiency in the data center compared with the inefficient little individual sheet-metal boxes that once plagued (and still do) raised-floor data centers everywhere. These days, the power savings alone are enough to get companies to switch from white boxes to more expensive blade systems. Another advantage stems from blades' hot-pluggability - blades can be quickly and easily replaced or upgraded, so they are far easier to maintain. Corporate PCs have never been able to realize these benefits until now, because they have always had to be physically distributed throughout the enterprise in order to be within close proximity to their users. Breaking this tight link between PCs and users represents a seminal moment in PC history. This is exactly what the folks at Teradici have done.
Teradici has engineered what it calls a "PCoIP" chip (PC-over-IP). The solution entails one chip that resides on a blade computer (called a Blade PC in this case, because it's not a server) in the data center and another that resides in a small box on the user's desktop that requires no management. These chips communicate with one another via Ethernet and can do so through switches and over great distances, breaking the link between a user and his PC. Teradici's secret sauce lies in how it analyzes, compresses, packetizes and then transmits two full DVI graphics streams, USB (keyboard, mouse and peripherals) and high-definition stereo audio over standard Ethernet with almost no noticeable latency and almost no loss of image quality. Think of it as a kind of KVM on steroids.
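The analyze-compress-packetize-transmit pipeline can be sketched in a few lines. To be clear, this is a hypothetical illustration, not Teradici's actual protocol: the frame-diffing and zlib compression stand in for its proprietary codec, and the 1,400-byte payload budget is an assumption chosen to fit inside a standard Ethernet frame.

```python
import zlib

MTU_PAYLOAD = 1400  # assumed per-packet payload budget within a standard Ethernet frame


def frame_delta(prev: bytes, curr: bytes) -> bytes:
    """Analyze: XOR successive framebuffers so unchanged pixels become zeros."""
    return bytes(a ^ b for a, b in zip(prev, curr))


def packetize(prev_frame: bytes, curr_frame: bytes) -> list:
    """Compress the delta and split it into MTU-sized payloads for transmission."""
    compressed = zlib.compress(frame_delta(prev_frame, curr_frame))
    return [compressed[i:i + MTU_PAYLOAD]
            for i in range(0, len(compressed), MTU_PAYLOAD)]


def reassemble(prev_frame: bytes, packets: list) -> bytes:
    """Portal side: rejoin the packets, decompress, and apply the delta
    to the previously displayed frame to recover the new one."""
    delta = zlib.decompress(b"".join(packets))
    return bytes(a ^ b for a, b in zip(prev_frame, delta))
```

The point of the delta step is that a mostly static desktop compresses to almost nothing, which is why this kind of scheme can feel lag-free on an ordinary office LAN.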
This concept has been around for years in the form of thin clients. Thin clients, however, have suffered from the fact that they're in effect just thinner computers. That means they also require management to nearly the same extent that IT departments manage PCs on corporate networks. Thin clients run a host OS and are susceptible to hostile attack by hackers, viruses, bugs and physical theft, just like desktop PCs. So instead of easing IT managers' jobs by eliminating touch points, thin clients force them to manage two devices instead of one.
The key benefit to Teradici's desktop portal solution is that there's nothing on the desktop to manage.
The only thing that remains on the desktop is a small, relatively inexpensive, solid-state and stateless device called a desktop portal, into which the user's graphic displays (up to two DVI displays per portal), keyboard, mouse, audio and USB devices plug. Physical and electronic security is greatly enhanced, since there's no longer anything on the user's desk that contains sensitive company data. It is simply the Teradici chip and its related circuitry and connectors inside an enclosure the size of a small external modem or router. It consumes only 15 watts of electricity, so it can be powered over Ethernet (PoE) and can be designed to be fanless and make no sound whatsoever. If it happens to go missing from the company, no problem; it contains no user data and requires access to the corporate network and valid user login credentials in order to function at all. The user's PC and data remain unaffected on its blade back in the data center. In other words, the desktop portal is essentially useless unless connected to the corporate network. This makes moves, adds and changes within the new system a breeze as well.
The Blade PCs themselves can be made much more reliable than traditional box PCs because they are no longer subjected to all the environmental hazards and abuse they receive in the typical office cubicle. By moving the PC to a blade back in the data center, it can be cooled, protected and maintained properly, greatly enhancing its life expectancy. The systems should also become less expensive in the long run because they don't need all the redundant components and sheer volume of materials used in today's bulky PCs. Sharing power supplies, cooling and networking means that you can add resiliency (redundant load-sharing power supplies and fans) while decreasing component count, which also increases system reliability. The Teradici solution is completely OS-, graphics- and microprocessor-independent as well, which means it can be employed in a wide variety of CPU and OS combinations (Win, Mac, Linux, Unix, Intel/AMD x86-64, PPC, Sparc, MIPS, EPIC, etc.).
One of the most compelling reasons large enterprises will want to move to Blade PCs is the situation where employees work in shifts and only a fraction of the PC-using workforce is online at any one time. In this case, instead of maintaining a unique PC for every single user, a company could make do with a shared pool of Blade PCs only slightly larger than the maximum number of concurrent users at peak periods. Call it the Utility PC model, if you will. This strategy means better utilization of resources: the company might only need to purchase 750 PCs for a staff of 1,000, for example. Quite a cost saving.
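The sizing arithmetic behind that 750-for-1,000 figure is simple: take the peak number of concurrent users and add a safety margin. The function and numbers below are illustrative assumptions, not figures from Teradici or any vendor.

```python
import math


def pool_size(peak_concurrent: int, headroom: float) -> int:
    """Blades needed for a shared pool: peak concurrent users at the
    busiest moment, plus fractional headroom for failures and spikes,
    rounded up to a whole blade."""
    return math.ceil(peak_concurrent * (1 + headroom))
```

With, say, 700 users online at the peak of the busiest shift and 7 per cent headroom, a 1,000-person staff needs 749 blades - right in line with the 750-for-1,000 example above.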
User data is stored and accessed on a Network Attached Storage (NAS) server on the corporate LAN, and individual applications can be loaded on start-up according to a custom profile based on the user's login credentials. A connection broker (intermediary server software that controls connections between users and blades) directs the user to his or her blade, loads the appropriate applications for their job function, and points the blade to the user's network storage space. When the user logs out, the Blade PC is released back into the shared pool, ready for use by a new user, and the whole process starts again. There are many other advantages to this strategy that we don't have the space to touch on here.
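The login/logout cycle the connection broker manages can be sketched as a simple check-out/check-in pool. This is a conceptual sketch only; the class, blade names and the `nas://` path scheme are all hypothetical, standing in for whatever a real broker product would do.

```python
class ConnectionBroker:
    """Minimal sketch of a connection broker: hands out blades from a
    shared pool at login, attaches the user's application profile and
    NAS home path, and returns the blade to the pool at logout."""

    def __init__(self, blade_ids):
        self.free = list(blade_ids)   # blades awaiting assignment
        self.sessions = {}            # user -> session currently in use

    def login(self, user, profile):
        if not self.free:
            raise RuntimeError("no blades available in the pool")
        blade = self.free.pop()
        # A real broker would now load the user's applications onto the
        # blade and point it at the user's network storage space.
        self.sessions[user] = {
            "blade": blade,
            "apps": profile["apps"],
            "home": f"nas://homes/{user}",   # illustrative path scheme
        }
        return self.sessions[user]

    def logout(self, user):
        session = self.sessions.pop(user)
        self.free.append(session["blade"])   # released back to the pool
```

The essential property is that no blade is permanently bound to a user: the binding exists only for the duration of a session.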
Expect to see a number of blade vendors embracing this concept in the near future, with Blade PC solutions targeting specific vertical markets that have high-density PC user environments such as call centers, Wall Street trading floors, engineering, schools, hospitality, healthcare, etc. Basically, any place where traditional box PCs are used in large volume is a potential candidate for Blade PCs. These solutions will come in at the mid to high end of the market first, and then trickle down to the low end - eventually maybe even to your home, in the form of a monthly subscription offer from your broadband service provider. The next wave of innovation in this space will come when Blade PC vendors introduce virtualization to the mix - enabling multiple PC users to be hosted on the same blade - further increasing efficiency and bringing down the cost and complexity of deploying and managing PCs.
Brace yourselves for more hoopla in the coming days about the next wave in the Blade Computing Revolution: Blade PCs. And the exciting part for me is that since the PC market is roughly ten times larger than the server market, it has the potential to have an even bigger impact on the industry than blade servers are already having. ®
Chris Hipp was co-founder and CTO of blade server pioneer RLX Technologies. He has received numerous industry awards and holds five patents relating to blade server system architecture. He is a founding member of, and technical advisor to, the Blade Systems Alliance and has been an invited speaker and guest participant in numerous industry conferences. Disclaimer: Hipp is an advisor to Teradici.
Hey guys, a better solution already exists!!!
The problem with blade PCs is that you keep most of the hardware and software potential issues... That's why I prefer a virtualisation solution: you keep your virtual machines on a secure, redundant and powerful server... and you can dynamically allocate them more or fewer resources, move them, etc... I heard that NEC launched that kind of solution several months ago ("Virtual PC Center"). I found, thanks to my best virtual friend (Google), that it is in fact based on a server hosting virtual PCs: each user connects to his virtual machine from a tiny thin client that has a magical embedded chip. The virtual machine sends compressed data to this chip, so that it can even display HD-quality video. And this embedded chip has a hardware-based VoIP capability too. So, I do not see what Teradici's innovation is...
The problem with Blade PCs is the fact that they are PCs, with applications with huge memory footprints. When I last used X terminals in anger, we had a ratio of about 10 X terminals per (not very big) server, and because the software was not PC-based, we got reasonable performance. Add to that the fact that you can beef up performance by adding dedicated specialist servers elsewhere on your network that work just as well as the controlling server in delivering applications. Real distributed computing.
Sun once said "The network is the computer", and I believe it to be the case.
BTW, the AT&T systems mentioned by Brett were called BLITs (Bell Labs Intelligent Terminals) 5620 and 630 (and I know there were later models), which worked over serial, proto-TP-Ethernet (called StarLan) or full-blown twisted-pair Ethernet (later models). They ran a proprietary OS that was probably called Layers (my memory fades), and allowed windowed dumb-terminal sessions, or locally run, downloaded applications.
Blade PCs don't work well...
... unless you use them 1:1 (expensive) or virtualise (requires more beefy hardware).
For whatever reason, our management decided to go for blades for a remote office (blades in London, users in Europe) but ClearCube didn't (at the time) advise us to virtualise and so we've had endless problems with users sharing OS resources on the blades. I'm now trying to start from scratch and redo it all as virtual PCs, but it's a maintenance headache.
The whole blade PC thing sounds great to management but to the techs who actually have to implement and maintain it it just means more work than necessary. (This is speaking only as a ClearCube user; HP or IBM's solutions might be completely different.)
"Interesting article, however, describing an existing Thin-Client as a thin-pc is incorrect - Thin-clients do NOT have a host OS (Other than a very basic embedded OS in BIOS)"
Actually, ClearCube's I/Port (which works over Ethernet instead of local copper/fibre) runs Windows XP Embedded. These are a complete PITA to maintain and can brick easily when you try to update the image on them.
Hm... but I was more impressed by the SunRay clients.
And UNIX has not one, but *two* ways of achieving true networked systems without having data spread all around the workstations:
- NFS/NIS (or NFS/LDAP maybe?), workstations have only the OS installed and configured. Home directories are NAS'd.
- Passive X terminals. All the heavy duty processing is done server-side. With some truly evil behemoth servers I've seen, that doesn't seem so far-fetched...
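The NFS/NIS approach described above is decades old and trivial to wire up. A minimal sketch, with a hypothetical file server name, keeps every home directory on the central NAS so nothing personal ever lives on the workstation:

```shell
# /etc/fstab entry on each workstation ("nas1" is an illustrative hostname):
# all home directories are served from the central NAS over NFS
nas1:/export/home  /home  nfs  rw,hard,intr  0  0
```

With NIS (or LDAP) supplying the shared account database, any user can log in at any workstation and see the same $HOME.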
Though the "KVM over Internet" approach could well serve other purposes... maybe remote emergency server administration? When your server goes down, or gets stuck at a "Press any key on console" prompt. Sometimes you don't even have physical access to the server...
No room in the brain
I don't think our corporate IT people (outsourced) could do anything innovative.