What's standing at Intel's platform
Nibbling developers' heels
Slowly but surely the standard tasks of the developer’s daily grind are being absorbed and packaged up by a growing number of vendors. Systems management tools, for example. Vendors have already subsumed much of the management coding that would in the past have been the developer’s lot, and now Intel is casting its beady eye on the potential from the other end of the spectrum.
The company has been integrating large amounts of PC real estate into the processor, or the associated chipset, for some time. The graphics controller is one obvious example. But now it is looking at what constitutes a 'server' and starting to identify that functionality as targets it can integrate into its own architectures.
It has already integrated virtualisation into the processor with the new VT technology, and has recently also added power management. The next target, due to be implemented in the Dempsey dual-core Xeon DP processor, is the Active Management Controller, a module capable of monitoring performance and similar factors that collectively sum up the 'health' of the processor.
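The article does not spell out what such 'health' monitoring looks like in practice. As a rough, purely illustrative sketch (the sensor names, thresholds and interface are invented, not Intel's actual design), the idea is that a management controller rolls raw platform readings into a single verdict that management tools can act on:

```python
# Hypothetical sketch of on-chip health monitoring.
# Sensor names and thresholds below are invented for illustration;
# they are NOT Intel's actual management interface.

def summarise_health(sensors):
    """Reduce raw sensor readings to an overall health verdict.

    sensors maps a sensor name to (current value, alert threshold);
    any reading over its threshold flags the platform as degraded.
    """
    alerts = [name for name, (value, limit) in sensors.items() if value > limit]
    return ("degraded", alerts) if alerts else ("ok", [])

readings = {
    "cpu_temp_c":  (71, 85),  # core temperature, degrees C
    "fan_fault":   (0, 0),    # count of failed fans
    "ecc_errors":  (3, 0),    # correctable memory errors over limit
}
status, alerts = summarise_health(readings)
print(status, alerts)  # → degraded ['ecc_errors']
```

The point of putting this on the chip is that the verdict is available out-of-band, without any agent code the developer would otherwise have had to write and maintain.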
According to Kirk Skaugen, VP of Intel’s Server Platforms Group, the company is working in close collaboration with mainstream systems management vendors such as IBM, HP, BMC and CA, as well as Symantec, LANDesk and Novell, so that their tools can interoperate with the on-chip functionality.
Also expected to appear soon is I/O Acceleration Technology (I/OAT), designed to boost TCP/IP performance significantly, and Skaugen indicated that other targets for integration are under scrutiny. Indeed, they will form integral parts of what he called a Formal Usage Model for the company's server platform, which will incorporate dynamic provisioning and services, and node configuration.
All this follows a pattern set out by Intel’s law-meister, Gordon Moore, many years ago. Speaking at the 1979 International Solid State Circuits Conference in Philadelphia, he observed that as device complexity increases the number and diversity of functions possible on a chip also increase. The danger with this is that it is all too easy to end up with an all-singing, all-dancing device that is so complex it does not fit the requirements of any server vendor.
But targeting increasing amounts of low-level, commonly used functionality has the potential to not only increase the value and margin of each processor, but also increase the dependence of users on the device. A 'Formal Usage Model' will inevitably be a two-edged sword for developers, especially as it grows, for they will have to be ready to grow with it if it is successful.
If it does succeed, it will have the effect of creating a new 'baseline' of services and functionality to which developers will have to work. This could have the distinct advantage of effectively standardising a growing range of common functions that will no longer need to be in the developer’s standard repertoire of coding skills. In turn, they will be free to start applying their talents at the next level of abstraction in applications and systems development.
But if Intel fails to make this work, either by picking the wrong functionality or by integrating too much functionality too soon, developers may well find those old skills will still be needed after all.