8th > May > 2003 Archive

Hi3G opens stores in Austria, Sweden

Hutchison's 3G European roll-out is gathering pace with '3' store openings in Austria and Sweden. In Austria, the mobile network operator is offering the full-on video-calling 3G experience in Greater Vienna, Linz, St. Pölten, Eisenstadt and Graz. More regions come on-stream later this year. Subscribers have national 2G coverage through a roaming agreement with mobilkom Austria. In Sweden, Hi3G's subscriber push kicked off this week by taking pre-orders at its own stores in Stockholm, Gothenburg and Malmo, as well as through other retail outlets. The company is offering handset discounts and sundry payment packages, backed - we infer, if the UK is anything to go by - by saturation advertising. ®
Drew Cullen, 08 May 2003

To patch or not to patch

We know one of the biggest security vulnerabilities is not technology per se but the implementation of technology, writes John McIntosh, Bloor Research. When it comes to security patches, we often find ourselves in a position where risk versus reward is uncertain. Nowhere is this more prominent than with Microsoft's servers, as evidenced by Slammer. Microsoft's UK Architects Council debated the patch management issue fairly recently, without a satisfactory outcome. Could do much better, was the opinion fed back to Microsoft. Though, to be accurate, not all of it was security-related. Patching for security reasons is a tough call - is the threat great enough to warrant it? System maintenance should be a structured and methodical exercise, but security patches can throw a spanner in the works and give unforeseen results. There should be a better way to solve such problems than, as many do, hoping for the best - either with or without the patch - until you know what the impact of the patch is likely to be. It is intriguing when someone comes up with a piece of lateral thinking that could significantly change the way we do things - not necessarily removing a problem, but giving us a more efficient and predictable alternative. The Virtual Patch capability within Internet Security Systems' (ISS) Dynamic Threat Protection offering seems to be just that. The Virtual Patch process automatically updates all ISS network, server and desktop devices, enabling them to detect the exploitation of a newly discovered vulnerability before the requisite security patches are either available or applied. In so doing, Virtual Patch provides the protection without the need to apply the patch directly. In many respects this is a major improvement to the way in which security maintenance is performed, as it means that security operations can be handled as part of a normal IT change management process. After all, would you rather continue to manage the fix-and-reboot alternative?
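The idea behind a virtual patch can be illustrated with a minimal sketch: a network-level filter that drops traffic matching the signature of a known exploit, so the unpatched host behind it is shielded. The signatures and names below are our own illustrative assumptions, not ISS code:

```python
import re

# Hypothetical signatures -- a real product ships vendor-maintained updates
# as soon as a new vulnerability is disclosed, well before a patch exists.
SIGNATURES = {
    "path-traversal": re.compile(rb"\.\./\.\./"),
    "oversized-payload": re.compile(rb"\x04.{375,}", re.DOTALL),  # Slammer-style oversized UDP packet
}

def inspect(packet: bytes):
    """Return the name of the first matching exploit signature, or None."""
    for name, pattern in SIGNATURES.items():
        if pattern.search(packet):
            return name  # the appliance drops this packet; the vulnerable host never sees it
    return None
```

The operational point is that updating the filter is a data change handled by the security appliance, while the real patch can wait for the normal change-management window.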
In addition to the more effective threat management that Virtual Patch offers, there are cost and time benefits that, when compared to current methods, should greatly assist in TCO reduction. ISS is not alone in this thinking. AppSense has a similar capability, although it does not make much noise about it. Application Manager, part of the AppSense Solutions Suite, is a system management and application security product. Its "trusted ownership" security model integrates closely with the native filesystem security features provided by Microsoft Windows NT/2000/XP. AppSense is designed to eliminate all user-introduced applications and scripts by controlling any application down to the level of an individual thread. This means that the security and integrity of the system is significantly enhanced, as all unwanted applications and malware are prevented from executing. Software installation and configuration can be controlled centrally, eliminating the need to visit individual computers. It is this capability that offers an approach to patch management comparable to that offered by ISS. The most striking thing about the appliance market generally, and highlighted by ISS and AppSense, is that there are many ways in which vendors are tackling similar problems and many ways in which solutions can be tailored to extend their perceived capabilities. No one vendor has all the pieces of the jigsaw. The logical development in this sector is consolidation, driven by the demand for integrated, multi-layer appliances. Both ISS and AppSense, along with many others, are still busy in the point solution space, improving what they have. It might be useful for them to start looking out beyond their current boundaries. In the meantime, the to-patch-or-not-to-patch dilemma might just have got a bit easier to resolve. © IT-Analysis.com
IT-Analysis, 08 May 2003

Moore's Law retains grip on IT statute books – IBM

"The power of computers has increased by six orders of magnitude in the last 36 years and it will increase by a further six orders of magnitude in the next 36 years", claimed Nick Donofrio, IBM's Senior VP of Technology and Manufacturing to an audience of IT analysts at IBM Pallisades, writes Robin Bloor of Bloor Research. 'Six orders of magnitude' is a math-speak for "a million-fold" so Nick was telling us on the one hand what we already knew, that Moore's Law has been operating since the late 1960s, and on the other hand, professing a belief that it would continue to operate for the foreseeable future. He has reasons for his convictions and, in a fascinating address, he referred to various areas of research that IBM was involved in which led him to conclude that Moore's Law will remain on the IT statute books. Here they are: Nanotube technology. Nanotubes are microscopic tubes constructed from carbon rings which can be used to build logic circuits. Currently this technology is between 50 to 100 times denser and therefore faster than current silicon. So in its current infant state, it offers about two orders of magnitude improvement and is expected to offer more in time. Nanodisk. IBM has built nano-machines that can store data on and erase data from a surface by puncturing a hole in it (or removing it by heating the surface up), using an array of minute cantilevered arms. This is effectively a nanodisk which is 25 to 50 times smaller than current disks and can probably be made even smaller. The Molecular Cascade. IBM has been building molecules using an electron tunneling microscope. One of the things it has built is a big molecule that can act rather like Babbage's computer as originally conceived with balls rolling down paths and passing through gates, except of course that the balls in this instance are atoms. It is thus possible to build a molecular computer, the smallest nano-machine yet conceived. 
This on its own would deliver the six orders of magnitude that Nick Donofrio is looking for. The Quantum Computer. A quantum computer is an extremely small photon-driven device which can perform some kinds of useful logical work, particularly in the area of encryption. A working device would be six orders of magnitude faster than current computers. These were not the only futuristic developments that Nick Donofrio dealt with. He said that in the next 10 years IBM expected an explosion in secure sensor-based computing. This is the broad extension of the use of sensor devices in cars to optimise engine performance, except of course that sensors will be embedded everywhere, allowing the optimised behaviour of just about any device you can think of in conjunction with any other. Clearly there are a host of applications in the home and in offices. He also mentioned Web Fountain, the result of an IBM research initiative. This is an intelligent search technology which, he claimed, has the ability to assemble a 'domain of expertise' which can then be queried. Think security, and the idea of assembling a coherent body of knowledge on a terrorist organisation. IBM intends to offer this technology as a service rather than a product. Finally he made some wry comments about IBM's Linux watch - a research product which IBM has gradually been evolving. Currently it has GSM, is able to record movement, can read vital signs from the owner's body and has wireless connectivity - and it also tells the time. He said that the watch was never intended to become a real product, as its form factor was very large, but he noted that large watches are currently becoming fashionable. © IT-Analysis.com
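The arithmetic behind the headline claim is easy to check: a million-fold increase over 36 years corresponds to a doubling roughly every 21-22 months, which is the cadence at which Moore's Law has traditionally been quoted. A quick back-of-envelope calculation:

```python
import math

gain_over_period = 1_000_000   # six orders of magnitude
period_months = 36 * 12        # 36 years, per Donofrio's claim

doublings = math.log2(gain_over_period)    # ~19.9 doublings needed
doubling_time = period_months / doublings  # months per doubling

print(round(doubling_time, 1))             # ~21.7 months
```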
IT-Analysis, 08 May 2003

Palm OS 6 to be designed for wireless

Palm OS 6 will ship to licensees by the end of the year, PalmSource says. In an interview with CNET, Albert Chu, PalmSource's VP of business development, talks of a 12-18 month OS release schedule, taking day one as the date when hardware based on the previous release ships. Palm OS 5 devices from Palm and Sony appeared last autumn, timetabling the OS 6 release somewhere in Q3 or Q4 this year, with devices following early 2004. Last we heard, Palm was shooting for a mid-2003 release, suggesting development of the new OS is taking slightly longer than planned. As we wrote last October, one of Palm OS 6's focus areas is wireless communications: building support for all the major wireless standards, and providing a high level of security to protect communications over whatever connection the user initiates. Screen shots received by PalmInfocenter purporting to show Palm OS 6 in action show a virtual Graffiti entry area with a wireless signal strength indicator alongside the battery level indicator, and an e-mail access button alongside the home, find and info buttons. It has yet to be confirmed whether the shots genuinely show Palm OS 6's look and feel, but if accurate they mark a significant improvement in the Palm UI. Palm OS 6 is also expected to be the first version of the operating system to contain technology developed at Be, acquired by Palm a couple of years back. Certainly, the OS is expected to feature a more advanced multimedia framework, and multi-processing and multi-threading support - all key features of BeOS. That's not to say Palm OS contains BeOS code; rather, it utilises approaches to solving these problems uncovered by the Be team when they were working on the older OS. ® Related Story Palm OS 6 details emerge Related Link PalmInfocenter: Palm OS 6 screen shots?
Tony Smith, 08 May 2003

Radio Intel – future or fantasy?

Intel opened the doors on its radio work yesterday, allowing us an early glimpse into what it has promised will be a decade-long project. Labs leaders said the culmination of integrating analog and radio functions into silicon would take that long - or at least, as they assured us several times, that they would have retired or left the company by the time it was achieved. No other semiconductor company can afford to invest as much in R&D, and when so many companies view research as a short-term cost rather than an investment, this kind of long-term investment is to be applauded. With the goals so distant, it was a commendably relaxed and open session, and when we spoke to Intel Fellow Kevin Kahn about the far-horizon work we saw from the Labs in the context of the nascent WiFi Bubble, he acknowledged claims were getting out of hand. "There's never been a technology that hasn't been overhyped," he told us. "Is it being overhyped? Of course. Do we contribute to overhyping it? On occasion, probably yes. But there are real things in engineering terms to be achieved too." Indeed, Intel didn't fudge the scale of the task ahead, and we'll go into these challenges in a moment. But Intel has decided it needs more spectrum to work with, and here Kevin's the man. Deregulation As well as overseeing all Intel's RF work, Kahn serves on the FCC Technological Advisory Council, chairs the National Science Foundation's Engineering Advisory Council, and is Intel's point man on regulatory issues. He told us he had found Powell far more receptive to suggestions that the FCC regards the spectrum scarcity as 'artificial'. But he didn't find the fanatical deregulation lobby - the all-or-nothing 'Open Spectrum' ideologues - particularly helpful either, although they did raise awareness of the issue. (Here's an FAQ summarising the Open Spectrum lobby case.) "Some people have a quasi-religious belief in it, ours is more pragmatic," he told us.
He thought it "reasonable" that incumbents - broadcasters, wireless carriers - should be respected, which is nice, but the case had to be made for new applications. At one point Kevin referred to the objectors to spectrum deregulation as "antibodies", and said that it wasn't practical "to wipe the slate clean" - presumably wiping away those antibodies. But probably far more useful to the cause than 'Open Spectrum' is module certification. As Jeff Schiffer, co-director of Intel's wireless interconnect lab, pointed out, allowing vendors to certify once and use anywhere would be pretty useful. Kahn has been taking his case to Europe and Japan, participating in ETSI proceedings and talking to the European Commission. Research The endgame, says Intel, is "reconfigurable, intelligent CMOS radios". The first step will be an integrated radio chip that's cheap enough to be appealing. Radios are very, very low cost already, thanks to dedicated baseband processors and DSPs. Intel thinks it will be possible to produce an integrated chip that's cheaper, which is probably the claim that was met with the most skepticism yesterday. How cheap? Nathan Brookwood, principal analyst at Insight 64, pointed out that "analog is not amenable to Moore's Law", and radios are already very cheap indeed: the disposable handset is almost upon us. The technical challenges fall into two strands (regulation is cited as a third): integrating the multiple frequency bands onto one die, and creating a re-usable baseband architecture. The latter is currently performed by discrete DSPs, and explains why DSP pioneer Texas Instruments has its own roadmap to a one-chip wireless device. There's a lot of research going into smart antennas and into analog-to-digital conversion. For the former, Intel showed a demonstration of a four-antenna laptop which effectively doubled the range of an 802.11 signal.
These will be pretty big chips, but Intel thinks it can achieve the flexibility of an FPGA at very low cost. In, for example, what Intel calls RCA (Reconfigurable Communications Architecture), the processor is designed to configure itself to run different protocols on demand - essential given the plethora of 802.nn wireless standards. We look forward to a closer look at the technology. This was the first time much of this had been unveiled, and Intel seems keen to let us take a closer peek. Cost remains the ultimate challenge. Even a one-chip baseband seems to meet with a lot of industry skepticism: perhaps the sheer volumes simply do not favor Intel here. Economics and recent history suggest that putting all you can on a chip - maths co-processors, peripheral interfaces - is inevitable, but that's only the case if the integration creates a real saving. With ubiquitous DSPs, this isn't necessarily the case. Expectations Actually, we reckon there are some technologies that achieve mass acceptance without much hype. The combustion engine was one, and the humble cellphone was another. Both reached global consumer acceptance - thanks to Ford in the first case, and to co-operation between vendors and regulators in the second - and the 2G phone has only been hyped recently, in retrospect. Intel's biggest problem - it realizes this - is persuading the regulators that something needs to be done. The most telling moment of the 'Open Spectrum' Summit organized by Lawrence Lessig in February (notice how the words "Free" or "Open" are obligatory buzzwords) was when John Gilmore stood up to remind the conference that no matter what was decided in the room, nine-tenths of the world would ignore it. That's because in much of the rest of the world there isn't perceived to be a spectrum problem, so there isn't a demand for a fix. (This is where the words "Free" and "Open" play their part.) Given all the obstacles, you might surmise that Intel is crazy to pursue radio.
Then again, given that volumes will be in personal and M2M (machine-to-machine) communications; that it can invest for the very long term; that it has very good engineers; and that R&D often brings about unintended consequences, it would be pretty crazy not to try. ® Related Story Will the WiFi Bubble hypesters kill WiFi?
Andrew Orlowski, 08 May 2003

Sony ‘very comfortable’ with $199 PS2 price point

Responding to mounting speculation of an imminent price cut, SCEA president Kaz Hirai has told news agency Reuters that he is "very comfortable" with the $199 price point, and criticised game bundling moves by competitors. "We are very comfortable at the $199 price point... The numbers are very healthy for the PS2 at the $199 price point," Hirai said, Reuters reports. Price drops for all three consoles had been widely expected to take effect around E3 this year, just as they did last year - when Microsoft and Sony slashed their hardware prices $100 to $199, and Nintendo cut the Cube from $199 to $149. However, Hirai seemingly doesn't believe that a PS2 price cut is necessary at this point. He also criticised Nintendo and Microsoft's current policies of bundling software for free with their consoles, stating that he doesn't believe these moves are really helping sales. Industry reaction to Hirai's comments has been mixed, with Take Two CEO Jeff Lapin saying that he hoped to see significant price cuts at E3, but conceding that "if I'm Sony, I'd never want to cut the price"; while THQ chief executive Brian Farrell pointed out that the pricing of the console is unimportant as long as Sony can meet its installed base targets for the year. "Whether Sony gets it at $199, $179 or $149, we're indifferent," he told Reuters. "All we need is that 10.5 million units." The pressure on Sony to drop its prices in Europe is higher than in the USA, as in Europe the PS2 is by far the most expensive of the three consoles, unlike the USA, where it is sold at the same price as the Xbox. A recent price cut by Microsoft in Europe saw the Xbox reduced to £129.99, leaving the PS2 retailing £40 more expensively at £169.99. © gamesindustry.biz
gamesindustry.biz, 08 May 2003

m-Payments for Edinburgh parking meters

Irish company Itsmobile has signed a major new deal to sell its parking meter payments service, mPark, to the city of Edinburgh in Scotland. Itsmobile said the deal marks the first time a wireless parking payments service has been deployed in the UK. Similar to the service launched in Dublin in January, the wireless payments system is available from pay-and-display parking meters in Edinburgh city centre. To use the service, motorists ring a national number displayed on the parking meter. An interactive voice response (IVR) system lets the motorist input the machine's ID number and his or her own mobile PIN. The Itsmobile system then sends a remote instruction to the parking meter, which prints out the adhesive parking ticket in the usual way. Parking charges will then appear on the motorist's credit card; Royal Bank of Scotland (RBOS) customers also have the option of having payments debited in real time from their bank account, using the bank's FastPay service. The service, launched last year by the bank, lets its customers set up a dedicated FastPay account, from which they can send payments by text message or e-mail. To use it, customers register at www.fastpay.com and transfer money into a FastPay account; parking meter payments are debited from this account in real time when they use mPark. Itsmobile will receive an up-front payment in the Scottish deal, as well as a small portion of monies taken in through the mPark system. The company wouldn't discuss the details of the deal, but its total worth to the Irish company is understood to be a six-figure sum. Itsmobile said the service would be fully rolled out in Edinburgh during 2003. The company plans to announce another significant UK deal in the coming weeks, as well as its first major deal outside Europe. Itsmobile is also targeting Asia and North America with its technology, which can also be used to interface with box office ticketing systems and vending machines.
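The transaction flow described above can be sketched in a few lines. The names, IDs and data structures here are our own assumptions for illustration, not Itsmobile's implementation:

```python
from dataclasses import dataclass

@dataclass
class Meter:
    meter_id: str
    tickets_printed: int = 0

# Hypothetical registration data: motorist PIN -> payment channel
REGISTERED_PINS = {"1234": "credit-card"}
# Hypothetical Edinburgh meter, keyed by the ID displayed on the machine
METERS = {"EDI-042": Meter("EDI-042")}

def handle_ivr_call(meter_id: str, pin: str):
    """Simulate the IVR flow: validate the caller's PIN, send the remote
    print instruction to the meter, and return the payment channel to charge."""
    if pin not in REGISTERED_PINS:
        return None                 # caller not registered with the service
    meter = METERS.get(meter_id)
    if meter is None:
        return None                 # unknown meter ID keyed in
    meter.tickets_printed += 1      # remote instruction: print the adhesive ticket
    return REGISTERED_PINS[pin]     # charge routed to credit card (or RBOS FastPay)
```

The key design point the article describes is that the meter itself stays dumb: it only receives a print instruction, while identity and payment live on the server side.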
The company's parking meter payments system is becoming an easier sell to city councils as time goes on, according to the company's operations director, Donal McGuinness. "Selling the concept is not a problem anymore," he said. "Most of all, cities like that there's less cash in the system, which reduces their cash collection costs. Also, from time to time people do steal parking meters. Any time there is cash in the system, money goes missing." mPark addresses this need, he says, and lets cities move to a cashless system without the expense of putting a smartcard infrastructure into place. McGuinness said that in Dublin the expansion schedule for mPark will be decided by Dublin City Council, but he hoped mPark would cover all 450 meters in Dublin's yellow and red parking zones by this autumn. © ENN Related Stories Mobile commerce for all. The Parking News effect
ElectricNews.net, 08 May 2003

‘Monopolistic’ BT kicked where it hurts

BT was left with little option but to cut the cost of its Datastream wholesale broadband product - or face regulatory action from Oftel. Confirmation that BT was effectively threatened by the regulator has given heart to rival operators to continue to pursue actions against the dominant telco. In a statement, Oftel boss David Edmonds said: "Our initial findings were that these price changes, relative to the Datastream product, could have prevented other operators from competing to provide broadband services to Internet service providers. I therefore held urgent discussions with BT, and I asked them to make reductions in the price of the Datastream product. Maintaining the commercial viability of the Datastream product is crucial if other operators are to be able to compete fairly in broadband connectivity." He added that the price cut announced by BT "is of the same level as I would have imposed using my statutory enforcement powers at this stage of the investigation". For its part, BT is playing down the cut by insisting that it was made following "representations from customers" and "talks with Oftel". But an Oftel spokesman told The Register that BT's decision not to cut the wholesale cost of Datastream "did constitute a margin squeeze". "Unless BT did something about it, Oftel would have taken regulatory action," he said. The row began last month when BT cut the cost of its wholesale IPStream product, which provides an end-to-end ADSL service solely using BT's network. Rival telcos argued that the cuts did not apply to Datastream products, which use competing national networks from rival carriers. They complained to Oftel, accusing the dominant telco of anti-competitive behaviour and margin squeeze - something with which the regulator agreed. Despite this initial victory, rival operators claim the price cut is not enough and are urging Oftel to maintain pressure on BT.
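The 'margin squeeze' complaint is simple arithmetic: if BT cuts its retail-facing IPStream price while leaving the wholesale Datastream price alone, a rival buying Datastream can no longer match BT's pricing profitably. With purely hypothetical figures (none of these numbers appear in the article):

```python
# Hypothetical monthly figures, per subscriber, to illustrate the mechanism.
bt_retail_price   = 27.0   # BT's broadband price after the IPStream-side cut
datastream_price  = 25.0   # wholesale Datastream price, left uncut
rival_other_costs = 5.0    # rival's own backhaul, support and billing costs

# A rival trying to match BT's price now loses money on every subscriber:
rival_margin = bt_retail_price - (datastream_price + rival_other_costs)
print(rival_margin)  # -3.0
```

Cutting the wholesale Datastream price, as Oftel demanded, restores the rival's ability to compete at BT's retail level.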
Ian Hood, a director at Thus, said: "The price cuts announced by BT yesterday appear to be the minimum required to prevent the immediate imposition of a provisional order by Oftel. We do not believe the price cuts go far enough, but we welcome Oftel's rapid intervention into what we see as yet another abuse by BT of their market dominance. We look forward to further changes to the Datastream product, including additional price cuts, upon the completion of Oftel's investigation." His words were echoed by Energis chief exec John Pluthero: "This is another illustration of BT's monopolistic behaviour and failure to open up wholesale broadband to full competition. BT have to be compelled to give ground yard by yard, and their late, reluctant and modest price reduction does nothing to redress the competitive balance. The complaint is still open; we urge Oftel to conclude it rapidly and insist on a full price reduction to Datastream." Oftel is expected to publish the findings of the full investigation next month. ® Related Stories BT backtracks on broadband pricing cuts MP critical of 'unfair competition' from BT Two more telcos run to Oftel over BT BB 'margin squeeze' Oftel could block BT's ADSL price cut Thus complains to Oftel over BT ADSL 'margin squeeze' Tiscali blasts BT's 'anti-competitive' ADSL price cuts
Tim Richardson, 08 May 2003

Telewest email hit by spam attack – again

Telewest has been hit by yet another spam attack, leading to delays in its email service. The attack happened at around 8.00am (BST) this morning and means punters could have to wait up to two hours to receive their email. According to Telewest, the attack isn't as bad as one a week or so ago, which left the cableco's customers without email for a couple of days. It seems action taken then to install new kit helped take some of the sting out of the latest spam onslaught. In a statement, Chad Raube, director of internet services at Telewest Broadband, said: "Our blueyonder customers may be experiencing a delay, of up to two hours, in receiving their emails at the moment. We experienced a spam attack this morning that generated a backlog of emails on our servers. It follows hot on the heels of a relatively major spam attack last week, and hardware we implemented to help tackle that incident has helped us proactively manage the situation this morning." The service is expected to be back on its feet again later today. ® UPDATE: Since posting the story, a number of readers have told us that some of their email is taking up to five hours to get through. A spokesman for Telewest confirmed this, saying that delays of up to five hours were only affecting a small proportion of punters. The average delay in getting email is between one and two hours, we've been told. Related Story Telewest email halted under massive spam attack
Tim Richardson, 08 May 2003

‘Athens’ – MS defines the next Windows PC standard

OK, reality check. Your PC is fundamentally unreliable and broken. It doesn't switch on immediately and stay on, working, like your stereo, TV, fixed line or mobile phone; granny can't just play music and videos, stay connected to the Internet and collect her emails without regular calls to tech support (i.e., you); and in addition to making annoying whooshing, whining and rattling noises, the PC is prone to various hardware components mysteriously dropping dead (or not so mysteriously, given that the noises and the failures are frequently connected). So having firmly framed that picture of PCs in the real world, let's hop over to WinHEC 2003, where this week His Billness was happily recommending integrating all of those pieces of equipment that do work into the one that doesn't, that wondrous single point of failure the evil Wintel twins have been failing to finish for nigh-on 20 years now. Last year's joint effort with HP, the Agora concept PC, has now morphed into Athens, and is headed for a business near you, probably in 2004, probably running Longhorn and probably also NGSCB (Next Generation Secure Computing Base, aka Palladium). And classical scholars may join us in speculating that further geographical broadening of the codename could well mean rev three is tagged Attica. Non-classical scholars should be aware that this has been a part of Greece for far longer than it's been a prison, however. So what is "Athens"? Microsoft pitches it as the next generation of productivity-enhancing PC for business, integrating telephony functionality and wireless with the PC, and using "Intuitive and consistent system controls [to make] the PC user experience more seamless."
A part of this is Xeel, yet another friggin control interface (YAFCI), which is variously described (in the press release) as a "cluster of hardware components and software interactions builds on the success of the mouse wheel to simplify content navigation and provide consistency across Microsoft Windows-based devices" and (by Bill) as "a ruler that can be pushed in or moved to the left or right so it gives you four different directions and an action push, and then in some cases some additional buttons that let you switch in or out - back buttons, switch Window buttons." Whatever you say, Bill, but we expect we'll know it when we see it. Drilling down through the verbiage, Athens can be defined on several levels, depending on where you're standing: User's eye view: By attempting to route all of your phone calls via the PC it provides a useful personal communications centre which can scoop the CLI of incoming calls and kick up data on the caller, switch off your music when the call comes in, route calls to messaging when you don't want to be distracted, mix and match emails, voice, mobile, fixed line, Internet phone. Basically it aims to pull together a lot of stuff that's already available separately, and basically it's the sort of thing that will appeal to Microsoft executives and the less creative business brain, as a perceived productivity boost to what Microsoft terms the knowledge worker, and what The Register thinks more of as the excessively anal-retentive sales droid. Work the way you want to work? uh-uh. This sort of stuff is intended to make you work the way they think you ought to work, but as you can't risk objecting, they'll quite possibly buy it for you anyway. PC vendor's eye view: If customers see sufficiently compelling reasons to upgrade to Athens class PCs, then vendors will see a much-needed uplift in sales. But that's a big 'if'. 
Microsoft's research indicates that customers will pay for telephony features, but they're probably lying (no, Microsoft's research doesn't say that), because when push comes to shove businesses will likely conclude they've already got phones, and they might as well wait this one out at least till they're sure it works. There are, however, other more likely come-ons from the vendor's point of view. Athens has a big, 20in TFT, which in itself will not spell profit margin, because MS reckons these will be around $400 by next year, but at that price they'll be a good draw for Athens, and the rest can come clanking behind. Some of 'the rest' could provide useful lock-in strategies for vendors - according to Bill: "One of the things that's less obvious about the display is the work that's gone into consolidating a lot of the components and cables that normally clutter your desktop within the display itself. This one cable from the CPU, which contains high-speed USB and video, connects a slim form factor drive bay, USB speakers, slim and array microphone, camera and a Bluetooth transceiver that drives this rechargeable wireless keyboard that recharges right here in the base, wireless mouse and a cordless telephone handset." So you can see vendors being able to do some form-factor lock-in here, and depending on how much freedom they have in terms of lights (the display includes 'you have mail' alert lights and similar) and buttons, they can possibly lock in users here as well. More likely, though, Microsoft will keep a tight rein on these aspects, and will therefore end up confiscating such elements of individualisation as PC vendors currently employ. Microsoft's eye view: This is another Windows-only design. Bill puts it (talking about Xeel, we think, but would some brave person please tell him to take public speaking classes?): "This is something that we'd love to see people trying out.
It simply comes along with Windows, the right to use it and do it this way, and so I think this is key frontier is a similar user interaction across the different devices." Fortunately, the white paper is considerably clearer as to the nature of the spade: "The ultimate goal is an innovative new platform that incorporates these new features and that is closely aligned with the Microsoft Windows operating system." As with the Media Center PC, there's precious little chance that rival software vendors will actually want to provide non-Windows alternatives for the platform, because only Microsoft actually has the delusion that the client PC ought to be the centre of everything, and only Microsoft has the desperate need to make this so. Microsoft does however have the power to make fundamentally dumb ideas into industry standards, and that's clearly a worry. In his keynote Bill claims Tablets and Media Centers have sold very well, which may or may not be true, but in the Athens supporting white paper Microsoft presents stats that show a worldwide fall in enterprise PC shipments in 2001 and 2002. One could postulate that Tablets and Media Centers have done better than the vanilla equivalents, on the prosaic basis that they've got extra stuff, so while the PC vendors are still getting hammered, they have probably noted they're getting slightly less hammered if they build the proprietary MS stuff than if they build vanilla, or a home-grown alternative that'll have to run without the MS marketing machine's backing. Sure in the long term they'll just make it worse by building Microsoft's third proprietary PC design, but a long term of any description is something of a pious dream for most PC vendors, so they probably don't worry about that any more. 
If they didn't intend to be here today, they should have taken the rings off and thrown them in the fire years ago, and as Macbeth had it, "I am in blood stepp'd in so far, that, should I wade no more, returning were as tedious as go o'er." And the consequent increased prevalence of PCs sporting the extra gizmos opens the door for a traditional Microsoft standards-setting tactic: "To drive early adoption, influential end users and IT managers must be able to experience the value of telephony integration at a negligible cost and risk." Which is pretty much a statement of the old 'get the stuff in the hands of the users then drive standards from the bottom up' gag. Customer eye view: For those few who're paying attention, the gradual appearance of communications control centres you don't control out on the peripheries of the network ought to set off alarm bells. Microsoft says: "For broad adoption of PC telephony integration in the long term, solutions must leverage existing investments in telephony infrastructure," but you'd probably do well to consider that as meaning 'leverage into the PC,' possibly presenting you with additional support costs for a system that promises integration benefits, but that initially works less well than the centralised systems you already have. Tech eye view: There's interesting, possibly very important, stuff here that Microsoft doesn't entirely foreground. It has finally, belatedly, dawned on Redmond that there are several things about PCs that users find intensely irritating. If the power goes off, you lose stuff; if you switch it on, it takes forever, so you leave it switched on all the time, but it crashes and makes a hell of a racket. So, Athens switches to a two-stage power model, on/suspend, rather than a three-stage on/suspend/off. So with suspend effectively constituting the new official 'off' state, this is intended to give you a switch-on time of just a couple of seconds. 
This ought not to be a major technical challenge, but as Wintel has now been conspicuously failing to deliver this in portable computers for many years, one has one's doubts. Athens will also have a backup battery capability that will allow an emergency suspend to disk in the event of power loss. All this is nice on paper, and actually is more popular with the businesses Microsoft surveyed than the telephony stuff, but again, the historical failure to deliver reliability in such systems in portables makes one wonder why it's going to be any different this time around. Most important, though, is that it begins to point desktop computer standards in a different direction, and when you add noise abatement to power management, the new direction begins to look serious. Says the white paper: "Quiet components, alternative cooling technologies and other sound-dampening techniques reduce acoustic emissions that interfere with audio and affect user productivity. The 'Athens' PC design goal for acoustic emissions is 30 dB or less in A-weighted sound pressure (as defined in ISO 7779 and ISO 9296) while all components except the optical drive are operating." As anyone who has tried to quieten down a high performance PC knows, just muffling existing components is not enough. You can, as would seem logical for Athens footprint machines, cut some noise by using portable computer components, and then your major challenges will likely be CPU, graphics card and CD/DVD drive. Fixing the first two without unacceptable performance degradation is becoming more viable, but we can presume Microsoft will now be pushing Intel and the graphics companies harder in this direction. The third can at least be muted by some dampening, and by putting sufficient thought into case design and component layout to minimise sympathetic vibrations. 
Overall, though, we should expect Microsoft to be accelerating the move towards portable-class technology on the desktop, and further slowing down the Wintel 'megahertz at any cost' development route. Even this late in the day, we should perhaps welcome a sinner saved - anybody else remember the IBM PS/2e? Bill has more in this department - aside from the clear imperative to get a lid on the CPU packaging, he talks about offloading processing into other components. We suspect that his seizure of the term "parallelism" to describe this does not entirely mean parallelism as we know it, but it does clearly mean the CPU becomes less important: "... the increase in the graphics processors where the number of transistors now is almost as high as the processor, and in a few cases even more. It's pretty phenomenal, and it's really explained by the fact that we understand parallelism better in the graphics realm, those problems have many different units executing at once, we understand that better than in the general purpose code execution realm." "This idea of parallelization is becoming increasing important. In fact, as we see these clock rates moving up into the 10 and 20 or even 30 gigahertz range there is a big challenge where the performance will not necessarily continue at the same because of the huge challenges you get as you run up at those incredible clock speeds. There are some clever ideas to reduce leakage, to have advanced cooling systems that Intel and many others are pursuing. For Microsoft, part of our contribution to this is to make sure that whenever something can be parallelized, we allow the tools to make it easy to express that, because having multiple cores running at very high speed is cheaper than trying to get up to those big clock rates." 
"So the same thing we've seen in graphics with the parallel capabilities exposed through DirectX, we now need that for more mainstream general-purpose code, and we have some very interesting ideas of how to take our CLR execution environment and have declarations that allow you see the parallelization opportunities and on the systems that have that we run very fast in parallel. If you don't have that, fine, you still get the same results; you just don't get the parallel execution." Interesting, almost completely in English (well done Bill, you can do it when you try), and of strategic importance, we'd hazard. Note that practically all that Microsoft, the well-known software company, has been saying about Athens is about hardware directly, or by implication about the 'down and dirty' hardware management aspects of the software. For the PC industry strategically these are certainly the most important aspects, but as far as the actual 'communications centre' sales pitch for Athens goes, there's very little detail. Quite a lot of work will be needed on UI and integration, but it is not entirely clear what that work will consist of. Will it need the new file system? It will assuredly be stuff that's in Longhorn, and it will, Microsoft will no doubt be assuring us in the run-up to launch, be magnificent. But we detect bits that must currently be missing, and that will be hard, if not impossible, to execute by 2004. 
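The parallel-where-available, serial-otherwise behaviour Gates describes for the CLR - declare the opportunity, get identical results either way - can be sketched in a few lines of present-day Python. This is purely our hypothetical illustration of the idea, not anything Microsoft has shipped:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_map(func, items, workers=None):
    # Run func over items in parallel when multiple workers are
    # available, falling back to a plain serial loop otherwise.
    # The results are identical either way - only the execution
    # strategy differs, which is the property Gates describes.
    if workers is None:
        workers = os.cpu_count() or 1
    if workers <= 1:
        return [func(x) for x in items]   # serial fallback path
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))
```

The caller never has to know which path ran; that separation of 'what can be parallelised' from 'whether it was' is the whole pitch.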
Bluetooth, for example, is viewed as important, the survey data indicates that users would like to use their mobile phone with the system (the pictured wireless handset certainly looks very horrid indeed), and Bill envisages mobile phones roaming across wi-fi (yes, he's been bitten too): "We need to get quality of service across Wi-Fi for things like voice connections so even in a portable phone, when it goes into your home, will roam to the Wi-Fi instead of using the wide area network, be connected in and get the advantage of the integration, and be able to send those voice bits across the broadband connection at lower cost." That's hard, particularly as it involves dealing with mobile phone manufacturers who're not going to listen to you. But the white paper perhaps gives an indicator of how far down that road we are, or not: "Microsoft is also investigating technologies to enable 'wake on ring' for USB and Bluetooth telephony devices, which would allow the integrated telephony of the 'Athens' PC to receive calls even when the PC is in standby mode." Which does kind of sound like an essential thing it doesn't quite do yet, and no doubt there's more of that. NGSCB: This is likely to be available for Athens largely because business is the softest entry point for rights management; there are perfectly valid reasons why, from a business perspective, machine identity and ownership are important. So if Microsoft didn't try to get it out there at Athens stage it would be missing an opportunity. NGSCB is not however covered specifically in the Athens material. Instead, Microsoft postulates smartcard security plus fingerprint biometrics (cue panic about XP sending your fingerprints to Redmond as well). Why do you need a smartcard if you've got working biometrics? Possibly, Microsoft accepts that while biometrics will be ultimately more effective, they could turn out to be a sales negative. 
Personal ID security is however obviously complementary to DRM security, and (biometric-related revolts permitting) will be a sales positive for business customers. ® Related links: Bill's keynote Athens white paper (Word format)
John Lettice, 08 May 2003

Dell anoints UK boss

Dell has looked to its own ranks for a UK boss. He's called Bill Rodrigues, he's American, and he replaces Brian McBride, former Northern European veep, who tipped up at T-Mobile in February. Rodrigues gets to run a 1,000-strong business, apparently Dell's second biggest country operation. From the sound of it Rodrigues is an enterprise man, as he will also assume responsibility for all global customer accounts based in EMEA. He was promoted from general manager of Dell's education and healthcare sector business. And prior to that he was an IBMer for 21 years. His battle honours include general manager for global education in North America, general manager of IBM's AS/400 brand marketing in North America, and executive aide to Lou Gerstner. Rodrigues will report to EMEA head Paul Bell. ®
Drew Cullen, 08 May 2003

Team targets 802.15.3 for wireless video networks

The latest attempt to get 1394 Firewire operating across a wireless network kicked off last week with the foundation of a working group seeking to tie the connectivity standard to 802.15.3 Personal Area Networks (PANs). The group's work could lead to 100Mbps 10m wireless links capable of maintaining multiple high-quality MPEG-2 streams. The group expects to have its specification done and dusted in 12 months' time, EE Times reports. To date, wireless 1394 - 'Firewireless' - efforts have centred on hooking the 1394 protocols up to 802.11 and, before that, proprietary transports. The 1394.1 spec. details how to bridge 1394 to 802.11a. The 1394 Trade Association's 1394-to-802.11 protocol adaptation layer (PAL) builds on that spec. to deliver 1394 across an 802.11a network. The PAL supports the 5C (aka Digital Transmission Content Protection) DRM scheme designed to stop anyone sneakily copying commercial content while it's being transmitted over a wired 1394 link. An 802.11a network provides a working data throughput of 25-30Mbps - already well below the WLAN standard's raw 54Mbps data rate - and the PAL overhead reduces this even further. 802.11a also lacks the quality of service (QoS) provision seen as essential for the transmission of commercial-quality video over a wireless link. Enter 802.15.3, a specification being groomed for IEEE standard status that provides short-range (1-50m), ad hoc wireless PANs. 802.15.3 builds on the 802.15 standard by adding QoS specifically to allow the PAN to carry digital imaging and multimedia data. It also builds in data security, implementing privacy and authentication services. 802.15.3 operates in the 2.4GHz band at 11, 22, 33, 44, and 55Mbps. Unlike 802.11 connections, 802.15.3 is designed for peer-to-peer operation rather than routing data through an access point, whether that's a base-station or a client machine configured as one. Access points can become network bottlenecks. The final spec. 
is expected to be submitted for IEEE approval in June. In the meantime, an alternative spec., 802.15.3a, is under development to create a higher data-rate PHY to replace the 55Mbps 2.4GHz PHY in 802.15.3. It's increasingly likely that 802.15.3a will be based on ultra-wideband (UWB) technology, but it has to get through selection procedures this month and in July first. However, it has the potential to reach data rates of 100Mbps and ultimately the 400Mbps (at 5m) offered by standard 1394 wired links. The group developing the PAL for 802.15.3/3a expects to have a completed spec. in a year's time. Products using 802.15.3 are anticipated to be available during Q4, according to the WiMedia Alliance, a Wi-Fi Alliance-style organisation formed to promote consumer multimedia PAN-based wireless networking. It was set up last September by Eastman Kodak, HP, Motorola, Philips, Samsung, Sharp Labs of America, Time Domain and XtremeSpectrum. Many of them are members of the 1394 PAL-defining working group. Interoperability with 1394 is key to ensuring compatibility with wired devices and supporting consumer electronics interconnection schemes based on 1394, such as the Home AV Interoperability (HAVi) standard. It also provides, through 5C, the level of DRM that commercial content creators are insisting upon and the consumer electronics industry will undoubtedly demand too. It's what takes 802.15.3 beyond being just another network. If the WiMedia Alliance has its way, Wi-Fi will continue to be the standard for wirelessly networking computer systems, and WiMedia will become the standard for wirelessly networking home entertainment systems. Wi-Fi may have a considerable lead in mindshare, but its popularity is based on computing applications - it doesn't have anything like the same profile in the consumer electronics space. 
If the working group's timetable is met, 802.15.3-based wireless consumer electronics devices are likely to appear before their 802.11-based equivalents, and will offer better performance in any case. However, with PC companies pursuing home entertainment-oriented 'digital hub' strategies, some clash between the standards is inevitable, particularly given the way 802.11 has been adopted by companies offering remote players that link your hi-fi to your PC-based MP3 collection, as demonstrated by products like Turtle Beach's Audiotron and others. More likely, Wi-Fi will emerge as a consumer electronics network standard too, co-existing with WiMedia as a cable-replacement technology, just as today Wi-Fi sits alongside Bluetooth, as network and cable replacements, respectively. Either way, there's no doubt the consumer electronics industry is looking to home networking, particularly wireless, as a way of tempting buyers with a whole new generation of products. ® Related Link EE Times: Group hopes to leapfrog 802.11 for wireless video Related Stories NEC unveils IEEE 1394 'Firewireless' home LAN Canon gets 1394 'Firewireless' up to 100Mbps
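For a back-of-envelope sense of why the working group wants more than 802.11a offers, assume roughly 6Mbps per high-quality MPEG-2 stream - a typical DVD-class figure and our ballpark assumption, not one from the working group:

```python
def max_streams(usable_mbps, mbps_per_stream=6.0):
    # How many simultaneous MPEG-2 streams fit in the usable
    # wireless throughput? Assumes ~6Mbps per high-quality stream
    # (DVD-class; our estimate, not a working-group figure).
    return int(usable_mbps // mbps_per_stream)

# 802.11a's 25-30Mbps working throughput, before PAL overhead:
print(max_streams(25))    # four streams at best, fewer after overhead
# The ~100Mbps targeted by 802.15.3a carries comfortably more:
print(max_streams(100))   # sixteen streams
```

Which is roughly the gap between 'one telly plus change' and whole-home video distribution.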
Tony Smith, 08 May 2003

Microsoft is no threat – Sage

Microsoft's assault on the SME business apps market is no threat - yet - to Sage, as the UK accountancy software firm is selling to much smaller businesses than its alleged rival. "The competitive landscape has not changed even though Microsoft entered our market two and a half years ago," Sage chief exec Paul Walker told the FT. He's been drumming home this message to shareholders and investors since 2001. But for how much longer? Microsoft may not have pushed hard for the small businesses which account for the lion's share of Sage revenues. But that time will come - the company is already investing $2 billion a year on building up its SME sector presence. Microsoft muscled into the SME accountancy software business with the acquisition of Great Plains Software in the US, and Navision, in Denmark. Last year, Sage tried to torpedo Microsoft's agreed $1.3 billion takeover of Navision, urging European regulators to veto the deal. Its call was rejected. Sage's Navision move shows what the company really thinks of the Microsoft threat. Sage yesterday turned in interim pre-tax profits up 14 per cent to £74.3 million on revenues up four per cent to £281.1 million. The company noted stronger performance in the US and the UK than in Europe. But there are signs that the continent is picking up too, it says. The interim statement is here. ® Related Story Microsoft pumps $2bn into software for small businesses
Drew Cullen, 08 May 2003

High Velocity Networks Web host outfit goes titsup

Wiltshire-based Intensive Networks Ltd - which trades as High Velocity Networks and provides Web hosting for small business - has gone tits up. The company is being placed into liquidation and will cease providing a service for its 500 or so business punters from May 9. In a bid to minimise disruption the company is advising its customers to transfer their Web hosting to business ISP Business Serve. In a statement on High Velocity's Web site Michael Carr writes: "To avoid any disruption, we have entered into an agreement with Business Serve Plc who is in a position, and has agreed, to offer a continuation of service to all our customers. "In addition, I have agreed to work with Business Serve for a limited period, to ensure that there is no loss of service for any customer during this transition and to assist in a smooth hand over," he wrote. Mr Carr was not available for comment at the time of writing. ®
Tim Richardson, 08 May 2003

Severed cables wipe out Telewest service

Some 10,000 households in the South East are without their Telewest TV, phone and Internet services today after workmen severed two major cables. The incident happened at around 1pm BST this afternoon. A spokesperson for Telewest Broadband said: "Contractors carrying out road works in the Tilbury area have caused significant damage to two of our major fibre optic cables. An engineer crew was sent to the site immediately and they are in the process of carrying out emergency repairs." The outage has hit up to 10,000 households in Tilbury, Gillingham, Maidstone and Basildon. The cableco is working to reinstate services in the area as quickly as possible. It's been a bad day for Telewest. Earlier today it got hammered by a spam attack that led to long delays in punters receiving email. ® Related Story Telewest email hit by spam attack - again
Tim Richardson, 08 May 2003

MSN ditches free photo storage service

MSN Photos - currently a free service that allows snappers to store their photos online - is warning punters to download the piccies or face losing them forever. It seems in the race to make money MSN is no longer willing to offer the service for free. Punters must either sign up to MSN 8 or remove the snaps before May 21. A statement on the MSN Photo Web site reads: "MSN Photos will no longer offer online storage of photos for users who are not MSN 8 subscribers. "If you currently have photos stored on our site, you can download them to your computer. The deadline for downloading your photos is May 21, 2003. "After this date, all your photos will be removed from our site and you will no longer be able to access them. We ask that you please take action today, so you don't lose your valuable photos." According to a spokesperson in the UK, "MSN is implementing this change to focus its resources on optimising and improving the overall photo sharing experience". She said that MSN research has shown that a "vast majority of MSN Photos users prefer to share their photos via e-mail and do not actively manage their stored photos online." ®
Tim Richardson, 08 May 2003

IBM's Unix servers ready on demand

IBM took a nice step this week to back up the marketing fluff behind its "On Demand" computing program with a new processor pricing system for its high end Unix servers. Along with a faster Power4 chip, IBM has extended its pay as you go pricing model for the chips and will even let users test out the new purchasing model for free. This move helps show IBM can keep up with competitors such as Sun Microsystems and HP with new high end tools and gives a glimpse of what on demand computing may mean. With its capacity-on-demand tool, users can purchase extra processor power in small increments to keep up with periods of high demand. On the high end p690 and p670 servers, users can pay to have four processors standing by. Customers can then activate the processors, two at a time, as needed and pay for the extra chips in 24-hour increments. IBM will typically let a customer turn on the chips for a certain number of days in a 60-day period. IBM has also brought this feature down to the lower end p650 and will sell extra capacity two processors at a time. Users can ask for a 30-day free trial of the program on any of the servers. IBM's pSeries Unix servers can detect processor failures as well and start up a redundant chip before the failing chip goes down. HP and Sun have similar tools, and each vendor likes to claim they can do this kind of processor provisioning the best. Sun, for example, already lets users pull out or insert processor boards while a server is still running and can have CPUs of mixed speeds in the same server. IBM can't match that. This type of feature points to a strong effort on the vendors' part to try and make their utility computing hype a reality. Despite their efforts, however, the software makers hold the key, and it's in their price books: the ISVs have struggled to keep up with the systems companies and have not agreed upon a sensible way to price this type of offering "per processor". 
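The billing mechanics described above - standby processors activated in pairs, charged in whole 24-hour increments - reduce to a simple calculation. A sketch, with a hypothetical per-CPU-day rate since IBM's announcement doesn't quote prices:

```python
def on_demand_charge(extra_cpus, days_active, rate_per_cpu_day):
    # Capacity-on-demand billing sketch: standby processors are
    # activated two at a time and billed in whole 24-hour
    # increments. rate_per_cpu_day is hypothetical - IBM hasn't
    # published its actual rates.
    if extra_cpus % 2:
        raise ValueError("processors are activated two at a time")
    return extra_cpus * days_active * rate_per_cpu_day

# e.g. two extra CPUs for a five-day demand spike at $100/CPU/day:
print(on_demand_charge(2, 5, 100.0))   # 1000.0
```

The ISV licensing problem the Illuminata note flags is precisely that third-party software fees don't flex on the same per-day basis.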
"The current state of software licensing will make this arrangement much less flexible than it could otherwise be," wrote Illuminata's Gordon Haff and Kevin Fogarty in a must-read note. "As the number of processors goes up so, typically, does the license cost of the third-party software running on the system. Unfortunately, few ISVs are presently able - or, possibly, willing - to offer the kind of variable cost arrangement that IBM is offering. "IBM's systems group is working with its middleware divisions to make its licensing as responsive to capacity on demand as its systems are. So for those that also run DB2 or WebSphere, say, some relief is in the wings. But, for the time being, customers will have to deal separately with their software providers on this issue." HP is delivering a similar message. Nora Denzel, the impressive VP of software at HP, said the company points most customers to Oracle, BEA or whomever to try and work out the pricing details on things such as multicore processors. "Software pricing is more of an art than a science," Denzel said in an interview. "If you have Oracle or Microsoft, the customers themselves have to deal with those issues; there are discussions happening to work this out." It looks like the hardware vendors will drag the software crowd kicking and screaming to more modest pricing plans for utility-style computing offerings and for the multicore chips starting to arrive on the scene such as Power4. Despite these problems, Illuminata reckons IBM is taking the right steps for its users. The analysts rank IBM ahead of Sun and just behind HP with their respective capacity-on-demand programs. It's also good to see the Unix vendors continuing to advance their technology at a steady pace, while many financial analysts say the Unix folks are stuffed. Unix servers are giving customers the easiest access to most of the early features present on the hardware vendors' utility computing roadmaps. 
In IBM's case, the Unix systems provide the best example that On Demand Computing actually means something. Microsoft and the Linux makers have some work to do if they want to get with the program. ® Related Stories HP turns to Darwin for help IBM x450: Stuck between Power and a hard place Exclusive: IBM's Itanic 2 server McNealy on Project Orion, Sun's Database hole
Ashlee Vance, 08 May 2003

$2 trillion fine for Microsoft security snafu?

Microsoft's latest security lapse with its Passport information service could trigger a $2.2 trillion fine on the company courtesy of the US government. Microsoft on Thursday admitted that a flaw in the password reset tool of its Passport service could compromise the information of all 200 million users. It scampered to post a fix and is looking into potential exploits, but the damage to Microsoft may already have been done. The Federal Trade Commission last year demanded that Microsoft improve its Passport security or face stiff fines of up to $11,000 per violation. Redmond promised to work harder to protect consumer information and launched its Trustworthy Computing initiative to put regulators' minds at ease. Well, the FTC is looking into the Passport breach and could slap Microsoft with a fine of $2.2 trillion to cover all 200 million violated users. "If we were to find that they didn't take reasonable safeguards to protect the information, that could be an order violation," Jessica Rich, assistant director for financial practices at the FTC, told the AP. The flaw was discovered close to four minutes after security researcher Muhammad Faisal Rauf Danka set to work on Passport. He was able to access Passport accounts at will by typing "emailpwdreset" into a URL that has the e-mail address of a user account and the address where a reset message can be sent. A number of people claim to have exploited the flaw on their own accounts and those of friends. With permission from their comrades, of course. Microsoft sent out a warning by 8 p.m. last night and then plugged the hole three hours later. The company is very upset about the problem, as evidenced by Microsoft product manager Adam Sohn's comment to CNET. "Whatever," Sohn said. Actually, he did not say that, but his real remarks were less than compelling. "You live and learn," Sohn said. 
"We will obviously take a hard look to make sure that if something is sent through the nonstandard channels, and it is real, we are all over it." Live and learn? Can we afford to wait for Microsoft to crawl toward secure code or is password security one of those things we should learn to live without? ® Related Stories To patch or not to patch Kerberos Redux? Linux and DRM - succeeding where MS failed? MS mulls external testing for security patches
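In essence, the reported flaw is the classic password-reset bug: the service trusted a caller-supplied destination address instead of checking it against the account. A minimal sketch of the bug class and its fix - hypothetical names throughout, not Passport's actual code:

```python
def send_reset_link(account_email, reset_dest, registered, send):
    # Sketch of the class of bug reported: the vulnerable flow
    # mailed the reset link to whatever address the URL parameter
    # named (attacker-controlled). The fix is to require the
    # destination to be an address already verified for the
    # account. 'registered' maps account emails to their verified
    # addresses; all names here are hypothetical.
    if reset_dest not in registered.get(account_email, set()):
        raise PermissionError("reset destination not verified for account")
    send(reset_dest)
```

With the check in place, pointing the reset at an attacker's mailbox fails; without it, any account falls to a single crafted URL, which is roughly what Danka demonstrated.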
Ashlee Vance, 08 May 2003