1 October 2002 Archive

Don't buy computer games for a week

Consumers should avoid buying computer games in the run-up to Christmas as part of a campaign to secure cheaper prices for games. A group of gamers has set up "Don't buy a videogame week", which runs from 1-8 December, to protest at what they describe as the "rip off" cost of games. And they hope that if enough people take part it will make the games industry sit up and take notice. With 80 per cent of all games industry revenue coming from sales during the Christmas period, campaigners believe that a mass demonstration of consumer power might just make games companies think again. While many games cost as much as £45, campaigners claim that in reality games cost little more than 40p to make. "We're not saying 'Don't buy a game ever again'. All we're saying is, if we all demonstrate our power as consumers by refusing to buy anything in one particular week, the industry will have no choice but to sit up and listen," campaigners said in a statement on their site. "There isn't a single reason that games couldn't be sold at £20, or even less," they said. Those behind the campaign - who include a clergyman, games journalists and developers - have also set up a Web site and an online petition to support their call for cheaper videogames. ®
Tim Richardson, 01 Oct 2002

BTo confirms plan to ditch unmetered Anytime service

BTopenworld has confirmed that it is to cap its AnyTime and SurfTime unmetered dial-up services. From November 1 AnyTime customers will only be allowed to stay online for 150 hours a month before they have to start paying Internet call charges. SurfTime customers get a 120-hour quota. The changes will affect around 900,000 of BTopenworld's customers, who should receive an email from the ISP from tomorrow. According to Duncan Ingram of BTopenworld, it is the act of "a responsible ISP" that will help create a "sustainable business model". So, will BTopenworld change the name of its service from "AnyTime" to "Someofthetime"? Dunno. What will stop it from reducing still further the quota it has set? Nothing. In fact, it should be expected. After all, customers used to be able to use the service for around 16 hours a day before BTopenworld's heavy mob booted them off. Then this "limit" was reduced to 12 hours in every 24. Now, it's effectively more than halved to just five hours' usage a day. So will the price come down accordingly? No way. If that's the case, is this just another way for BTopenworld (BTopenwound to its friends) to improve margins, stamp out network abuse and create a bigger gap between dial-up and broadband in the hope that punters will migrate to hi-speed Net access? Shinickle? Moi? ® Related Story BTo to ditch 24/7 unmetered service - sources
Tim Richardson, 01 Oct 2002

Overture wins MSN deal extension

Overture Services Inc, which sees its shares jump around whenever a new deal is announced in the online paid-search advertising market, yesterday said it has renewed its distribution deal with Microsoft Corp's MSN service. Its share price increased 7% on the news. There had been a question mark over the relationship since July, when the companies said they had temporarily extended the deal while they negotiated a longer renewal. Some observers expected to see Google Inc muscle in on MSN, or for Microsoft to take the feature in-house. The deal that sees Overture provide paid search listings to the "search pane" of the Internet Explorer browser has been extended until December 2003, and the deal to feature listings in the MSN search site until December 2004. The two deals were to expire yesterday and in December 2003, respectively. Overture aggregates links from thousands of advertisers, each of which bids to have its links featured prominently against selected keyword searches. Overture splits the cash-per-click with its portal partners, including Yahoo! Inc, AltaVista Co and United Online Inc. The cash paid to partners is accounted for as a "traffic acquisition cost". Overture said yesterday that its July 2002 forecast of a 60%, rising to 62%, traffic acquisition cost this year and next will be unaffected by the MSN deal. This suggests it was not obliged to significantly increase MSN's slice of the pie in order to win the renewal. The deals have also been expanded to include MSN properties in Germany and France, as well as the US, Canada and UK. The companies have agreed to test the service in Japan, where MSN has a presence but Overture does not. Overture expects to launch in Japan some time this quarter as a result of the deal, one quarter ahead of schedule. © ComputerWire
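The pay-per-click revenue split described above can be sketched in a few lines of code; the figures and function below are hypothetical illustrations based on the percentages quoted in the article, not Overture's actual terms.

```python
# Minimal sketch of the paid-listing economics described above.
# All figures are hypothetical illustrations, not Overture's actual terms.

def overture_economics(clicks: int, bid_per_click: float, tac_rate: float = 0.60):
    """Split pay-per-click revenue between Overture and a portal partner.

    tac_rate is the 'traffic acquisition cost' share paid to the partner
    (the article cites a forecast of roughly 60%, rising to 62%).
    """
    gross = clicks * bid_per_click          # total advertiser spend
    paid_to_partner = gross * tac_rate      # e.g. MSN's slice of the pie
    kept_by_overture = gross - paid_to_partner
    return gross, paid_to_partner, kept_by_overture

# Example: 1 million clicks at a hypothetical $0.40 average bid, 60% TAC
gross, tac, net = overture_economics(1_000_000, 0.40)
print(f"gross ${gross:,.0f}  partner ${tac:,.0f}  Overture ${net:,.0f}")
```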
ComputerWire, 01 Oct 2002

Microsoft buys Liquid Audio DRM patents

Microsoft Corp has got its hands on a suite of digital rights management patents, but one of the companies most likely to be concerned by the move talked down its importance. Microsoft said it is to pay $7m for Liquid Audio Inc's US and international patents, believed to number around 20 or more. Liquid, which is currently being acquired by Alliance Entertainment, will get a royalty-free license to continue to use the patents. The US patents cover technologies such as digital watermarking, content distribution, audio encoding, lossless compression and transferring audio to digital playback devices. Microsoft already has a number of patents of its own in similar fields. "DRM is a strategic area we've invested in for the last few years," said a Microsoft spokesperson. "The patents will help us realize our long term vision for DRM technology." He declined to outline the vision in question. While the company has been developing its own DRM technology, which is vital to winning the confidence of the big movie and music producers, it has been accused of being late to the party and has been sued for patent infringement. Microsoft is embroiled in extensive litigation with InterTrust Technologies Corp over alleged infringements of InterTrust patents in a wide variety of products including Windows Media. But InterTrust said the new Microsoft patents do not concern it. "It's a mixed bag of patents Liquid Audio has been shopping around for a while," said InterTrust VP of business development Talal Shamoon. "It's a bit of a stretch to call them DRM patents... a patent is only as good as its claims." Shamoon said Microsoft will have a hard time suing InterTrust based on any of these patents, as InterTrust no longer sells products, having switched its focus to licensing its intellectual property at the beginning of the year. He added that he did not believe Microsoft could use the patents to defend itself against InterTrust's lawsuits. A Microsoft spokesperson said: "This [acquisition] is not related to those lawsuits." © ComputerWire
ComputerWire, 01 Oct 2002

HP fires patent lawsuit, EMC fires back

The lawyers have been called out once again in the storage industry, this time on behalf of Hewlett Packard Co and EMC Corp, which traded patent-infringement lawsuits yesterday. HP threw the first punch, filing a surprise lawsuit yesterday morning that named seven patents from the mid-nineties that it alleges EMC has infringed. HP fiercely rejected EMC's allegation that the move was simply a publicity stunt. If that were true, the effect of the move was at least partly neutralized by EMC's lightning-fast response. EMC, which said that HP's legal assault came "out of the blue," retaliated within only a few hours by launching a counter-suit alleging that HP had infringed six of its patents. Both sides are seeking full satisfaction in the form of monetary damages and injunctive relief - ie a court order forcing the other side to remove products or technologies from the market. EMC and HP have said that the lawsuits do not have any bearing on the API-swapping deal the companies signed last year. HP named EMC's Symmetrix and Clariion arrays, and its TimeFinder software, as infringing HP patents. From the other side, EMC did not name the allegedly offending HP products, but said the patents concerned covered technology in its SRDF and TimeFinder software products. HP's lawsuit is striking given EMC's status as a long-established supplier of very high-end storage technology, and HP's track record as a developer of only mid-range storage technology. EMC said that HP gave no warning of the lawsuit, and made no attempt to strike an out-of-court settlement before it was launched. Referring to its own lawsuit launched against Hitachi Ltd earlier this year, EMC said that it had been in negotiation with Hitachi for three years before resorting to legal action. HP entirely rejected EMC's suggestion that its suit was simply an attempt to seek attention. "That's a pretty outrageous thing to say when they're clearly infringing on somebody else's intellectual property. It's a Clinton-esque statement," a spokesman for HP said. HP's suit was filed in the Northern District of California. The patents it names cover topics including data transfer between storage media, reducing the number of reads and writes in RAID, switched storage networks, and the handling of RAID disk failures. Four of the seven patents named in HP's announcement of its lawsuit originated between 1992 and 1995, from Digital Equipment Corp, Compaq Computer Corp, and HP itself. A fifth, concerning AC mains power selection, dates from 1996 and was first filed by HP. The numbers listed by HP for the other two patents refer to non-storage related technologies, according to the US Patent Office web site. EMC's suit was filed in the US District Court in Worcester, Massachusetts, and the patents it refers to all originated from EMC, between 1993 and 1997. For HP to achieve its stated aim and win a court injunction forcing EMC to disable some features of its products, HP's lawyers would have to prove not only that its patents had been infringed, but also convince the court that an injunction would serve the public interest. Proving infringement alone would not be an easy task, as David Hill, analyst at the Aberdeen Group, pointed out. "With all the commingling of technologies that has happened it's very hard to separate which technology belongs to which company. It's a very complex matter to prove patent infringement," Hill said.
Elsewhere in the storage industry, EMC and Hitachi have patent infringement lawsuits pending against each other, and McData Corp has a patent suit pending against Brocade Communications Systems Inc. © ComputerWire
ComputerWire, 01 Oct 2002

Clustered Linux shines on commercial TPC-C test

Hewlett Packard Co, which has become perhaps the staunchest supporter of the TPC-C online transaction processing benchmark, is the first vendor out of the door testing Oracle Corp's 9i Real Application Clusters (RACs) clustering software, Timothy Prickett Morgan writes. RACs, unlike the generic clustering technology used in technical and supercomputing environments, is aimed at supporting clustered commercial databases and the applications that feed off of them. In the mid-1990s, Unix did not scale as well or as far as proprietary midrange or mainframe environments. To build big Unix boxes, companies had to rely on Oracle Parallel Server, a cluster-enabled version of Oracle's database, and this program was used at many big Unix installations that had what might otherwise be called parallel supercomputers (like IBM's RS/6000 PowerParallel or Compaq's AlphaServer TruClusters) as their database servers. Oracle 9i RACs is available for Unix, Windows, and Linux environments and drives some of the clustering right down into the database itself. This means the operating system and application do not have to cope with it in the same way that Oracle Parallel Server forced them to. A few years ago, Compaq provided Oracle with some of the core technology for RACs from its TruCluster extensions to the Tru64 Unix operating system it sells on its AlphaServer line. This is why HP is interested in showing RACs off on its ProLiant iron; the company could just as easily have shown RACs running on Windows. Today, Windows and Linux are very popular platforms for deploying applications, but they do not scale very well. Moreover, although IBM Corp, Unisys Corp, NEC Corp, and a few others offer big Wintel servers, the most tried and true Intel-based iron using commodity components only scales to four or eight Pentium III Xeon, Pentium 4 Xeon or Itanium 2 processors (it depends on the chip and chipset). This is why Oracle even bothers with RACs at all. Until Windows Datacenter Server editions are widely used and demonstrated to be scalable, and until Linux is built to extend beyond eight processors (which Red Hat says it can scale to with its Linux Advanced Server), clustering is pretty much the only way to take on the monolithic Unix and proprietary machines in the data center that are by and large the chosen hosts for commercial applications at medium and large enterprises. To prove that Linux is an option in the data center using clusters, HP tested an eight-node cluster of ProLiant DL580 servers, which use the Profusion chipset co-developed by Compaq and Intel and which can scale to eight Pentium III Xeon processors in a single system. The DL580s that HP tested used the 900MHz versions of the Pentium III Xeon processors, each equipped with 2MB of L2 cache memory. Each node had 4GB of main memory, yielding a cluster with 64 processors and 128GB of main memory. This cluster is roughly equivalent, in terms of the raw oomph under its metal skins, to a Unix server of two or three years ago, and it delivers about the same level of performance as such a machine. However, the bang for the buck on this Linux cluster, which had very modest discounting, is in the same ballpark as heavily discounted enterprise Unix servers running in monolithic mode supporting one database instance. The HP hardware supporting the clustered Oracle databases cost just over $1.4m and the server software cost another $1.2m.
The eight licenses of Red Hat Advanced Server cost $6,400, and two additional years of maintenance cost $12,800, so if you think Linux is free, guess again. Oracle cost $20,000 per processor, the RAC extensions cost $10,000 per processor, and the partitioning options required for the test cost $5,000 per processor. With 16 application servers and networking infrastructure, the total TPC-C Linux-RAC configuration cost just under $3m. HP and Oracle offered a net 20% discount on the configuration, with a big portion of it coming from HP because it is a so-called large purchase acquisition with a significant cash outlay. The resulting bang for the buck for the Linux-RAC cluster came to $17.21 per transaction per minute for a throughput of 138,262 TPM. To get down to that price/performance level, Unix vendors have to give a 40% or 50% discount. While companies may not want to move to Linux-RAC for their production applications because of the complexity that clusters involve, keeping a Linux cluster in the corner - or simply threatening to acquire one - might be the best way to get the big Unix vendors to keep offering real customers the steep discounts they offer on their TPC-C tests. Real Application Clusters are real vendor motivators, if used properly as part of a bidding strategy. Incidentally, HP says that soon it will ship pre-installed and pre-configured RAC setups using Linux and Red Hat Advanced Server. The TPC-C test says the hardware in the configuration tested is available now, and that the rest of the configuration will be available in early March. That's probably a good guess as to when these integrated HP-Linux-RAC machines will be widely available through HP directly and through its channel. © ComputerWire
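The price/performance figure quoted above can be roughly reproduced from the costs in the article; the sketch below uses approximate inputs (the audited TPC-C disclosure will differ slightly), so the result only comes close to the reported $17.21.

```python
# Back-of-the-envelope check of the price/performance figures quoted above.
# Inputs are the approximate costs reported in the article, not the audited
# TPC-C disclosure, so the result is only indicative.

hardware        = 1_400_000   # ProLiant cluster hardware (approx.)
server_software = 1_200_000   # Oracle 9i, RAC, partitioning, Red Hat (approx.)
other           =   400_000   # app servers, networking etc. to reach ~$3m total
discount        = 0.20        # net discount offered by HP and Oracle

total_list  = hardware + server_software + other
total_price = total_list * (1 - discount)

throughput_tpm = 138_262      # reported TPC-C throughput
price_perf = total_price / throughput_tpm

print(f"list ~${total_list/1e6:.1f}m, discounted ~${total_price/1e6:.2f}m")
print(f"price/performance ~${price_perf:.2f} per tpmC (article reports $17.21)")
```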
ComputerWire, 01 Oct 2002

Yukon takes Microsoft back to developer roots

Features that anticipate and complete a developer's next programming move are slated for Microsoft Corp's Visual Studio.NET Yukon, billed by one executive as a "return to the company's roots", Gavin Clarke writes. Redmond, Washington-based Microsoft told ComputerWire Visual Studio.NET Yukon would "dramatically simplify" application development, offering greater use of IntelliSense to make the integrated development environment (IDE) smarter. IntelliSense, used in other Microsoft products like Internet Explorer, automates activities such as correcting mistakes. Microsoft told ComputerWire it might expand this to ask a developer up-front what they are trying to achieve and to automate tasks. Yukon is due in 2004, after the launch of Visual Studio.NET Everett in the first quarter of 2003. Details of both have been sketchy, although Everett is regarded as a relatively minor release, fixing bugs, improving performance of code and adding the Compact .NET Framework - now in beta. Microsoft said Visual Studio.NET Yukon would feature the majority of improvements and new features. Details have been scant, given the relatively distant release date, and the company has just initiated its familiar "requirements gathering phase", when it consults customers on features. But Chris Flores, Visual Studio .NET product manager, told ComputerWire that with Yukon "we are really going back to our roots, dramatically simplifying development." "You will see the environment get smarter, you will see the environment get more participatory," Flores said. Yukon is scheduled to support the next version of Microsoft's SQL Server database, also codenamed Yukon. Microsoft has promised integration in the database of its .NET Framework - an integral part of Visual Studio.NET - and the Common Language Runtime (CLR). Yukon will - theoretically - enable tables and applications to be programmed in languages other than T-SQL, such as C Sharp. Despite the changes, Microsoft is anxious to avoid ruffling developers' feathers. Prior to Visual Studio.NET 1.0, members of the 3.5 million-strong Visual Basic developer community objected to proposed changes in their language that took them closer to C/C++. Flores said the biggest thing Microsoft learned from that experience is to listen to developers. He added Microsoft's roadmap would keep its languages separate, avoiding increased convergence. That's something one Visual Basic user would rather not see. Developers at the Central Bank of Costa Rica said they would like to generate HTML from a comment, a feature found in C Sharp but not Visual Basic.NET. Generation of HTML provides for collaborative working online. Old loyalties appear difficult to change, too, as programmers stick with their favored Microsoft programming language rather than experiment with new languages. The Central Bank of Costa Rica last week announced it had migrated 1.3 million lines of Visual Basic 6.0 code to Visual Basic.NET on its high-transaction electronic payment and transaction system. The system handles 100,000 transactions involving more than $500 million each day. The bank said Visual Basic.NET offers improved performance and simplified data integration through native use of XML. It remains wedded, though, to Visual Basic.NET, with no plans to use C Sharp. One C/C++ programmer said he was a life-long loyalist and was unlikely to switch languages. However, he did call on Microsoft to take more steps to bring Java programmers into Visual Studio.NET.
ComputerWire, 01 Oct 2002

Red Hat Linux 8.0 gets mixed reception

Red Hat Inc has announced the release of its latest attack on the desktop operating system market, but the release of Red Hat Linux 8.0 has proved controversial not only for its potential to impact Microsoft Corp's desktop dominance. Included in Raleigh, North Carolina-based Red Hat's latest version of the desktop operating system is Bluecurve, a new desktop graphical user interface that alters the configuration of the existing KDE and Gnome Linux desktop environments. Bluecurve is designed to offer a familiar look and feel to users of both KDE and Gnome, but has angered some users of both open source desktop environments. It has also led to the resignation of at least one Red Hat employee. In a recent posting to the KDE developers' mailing list, developer Bernhard Rosenkraenzer announced his resignation from the company, commenting that "I don't want to work on crippling KDE, and they don't want an employee who admits RH 8.0's KDE is crippleware." In many ways, configuring the available desktops to its own design makes a lot of sense for Red Hat. A member of the company's desktop team, Owen Taylor, recently published details of the new Linux desktop, outlining the company's reasons for taking this step. "Creating two sets of configuration tools, two web sites, and two boxes isn't feasible or desirable," Taylor said. "So we have to make the desktop fit in with the rest of the product instead of making the rest of the product fit in with the desktop." The problem Red Hat faces is that both Gnome and KDE are mature products that have gathered a great deal of community developer and user support. Neither seems likely to win out over the other, and both are compatible with existing Linux standards. Supporting both is a time-consuming and costly process for the company. Red Hat has been able to reduce its level of integration work by adopting similar configurations for the desktop environments, said Taylor. That hasn't stopped the company's development work angering the community at large, particularly given Red Hat's historical preference for Gnome rather than KDE. Bluecurve has been criticized in some quarters as a misconfiguration of KDE. Taylor denies that this is the case. "One thing we are definitely not doing is intentionally misconfiguring KDE to make it look bad," he said. "There is no point in shipping crippled software; all that could possibly do is give users a bad impression of Red Hat Linux." Taylor also denies that the company is stripping out features of KDE and Gnome with Bluecurve, stating the features are still available, but are not part of the default configuration. The release of Red Hat Linux 8.0 signals a fresh attempt by the company to make an impact in the desktop operating system market. A fully configured, easy-to-use desktop environment is an important part of the company's ability to take on Microsoft in this space. If Linux is to make an impact in the desktop market, it needs to lose its techie image. Designed for personal and small business usage, Red Hat Linux 8.0 also includes the OpenOffice.org office software suite, the Apache web server, Samba file and print services and firewall software. Available now at retail locations, and from Red Hat, the product comes in two versions. The Personal version starts at $39.95, which includes 30 days of Red Hat Network basic service and web-based support.
The Professional version starts at $149.95 and adds a system administrator's CD, an office and multimedia applications CD, 60 days of Red Hat Network basic service and 60 days of web-based and telephone support. © ComputerWire Related Link OSNews review of Red Hat 8.0
ComputerWire, 01 Oct 2002

SAP ushers SuSE into World League

SAP has signed up SuSE Linux, the German-based Linux distro firm, as a global technology partner for mySAP.com deployments. SuSE is as pleased as punch with the endorsement - it is the first Linux distro to gain SAP Global Technology Partner status. With SAP's backing, SuSE could present a more credible challenge to Red Hat, the dominant Linux distro in the enterprise. Corporate customers will also be more confident in installing mySAP.com apps on Linux on x86 clusters, maybe saving big bucks on big tin RISC server costs. SuSE already has the technology in place: last year SuSE Linux Enterprise Server for 32-bit x86 hardware was accredited the very casual-sounding "generally available for mySAP.com" status. And since May this year, mySAP.com has been certified on SuSE Linux Enterprise Server for IBM eServer zSeries. SuSE is working with SAP to reseat mySAP.com onto Itanium, Intel's 64-bit processor platform. ®
Drew Cullen, 01 Oct 2002

BT sets up consumer mobile phone biz (again)

BT has confirmed that it is once again to provide a mobile phone service to consumers, less than a year after flogging its mobile division. Called Mobile Sense, the service will be managed and run entirely from BT's Web site, bt.com. By accessing the site punters can decide which of 12 handsets they want and choose a package based on their predicted usage. They'll also be able to keep tabs on their accounts, you know, that kind of thing. BT reckons the new service could generate revenues of £44m by 2005. It is buying the airtime from former mobile subsidiary mmO2. BT Retail chief exec Pierre Danon said in a statement: "Mobile Sense is not a mass-market offering, but it does represent a practical and low-risk first step into the consumer mobile space. "With this product, we have a unique proposition that gives our online customers complete control over what they are charged, even allowing them to change their pricing packages every month if necessary." In April BT announced a similar deal to offer mobile services to its corporate customers. As with the consumer offering, BT buys airtime from its former mobile division, O2, to provide services for its corporate customers under the BT brand. The business-focused service is expected to deliver around £150 million a year in additional revenues by the end of 2005. ®
Tim Richardson, 01 Oct 2002

“I am not Henry Raddick” – HRH Prince of Wales

A spokesman for Prince Charles has scotched rumours that the heir to the British throne is the real author of a series of mysterious Amazon.com reader reviews that have perplexed and entertained Internet users over the past eighteen months. After the leak of two letters from the Prince to the Lord Chancellor, Charles' devotion to letter writing has been the source of much comment in the British press in the past week. Charles, we now know, has been writing to government ministers and civil servants for many years, and also sends long, sometimes sympathetic and often admonishing letters to his subjects. But the similarities between the Prince and a mysterious "Henry Raddick" have given birth to one of the oddest net.rumours of recent times: that the Amazon.com reviewer is Charles himself. Raddick's confessional reviews - apparently tolerated by the internet giant, making him a cult figure with Internet surfers and one of Amazon.com's most popular contributors - have reached the pages of the New York Times, and found a devoted following in the blog community thanks to link concentrators such as Plastic and MetaFilter. Recently a man with a British accent claiming to be Henry Raddick was interviewed for National Public Radio. And the similarities between the two are striking. Both Henry Raddick and Prince Charles are middle-aged men with two teenage children; both are prone to melancholy and late-night introspection, and both display stoicism in the face of disappointment. With Charles, it's been the lifelong wait to accede to the throne. With Raddick, it's been the crushing disappointment of acquiring a pug under the misconception that it would grow into a full-sized bulldog. We rang Highgrove to find out if Charles - famous for his Goonish sense of humour - had added to his epistolary canon in the form of these delightful reviews. Is Prince Charles the real Henry Raddick? "That's absolute resolute rubbish," a spokesman for Prince Charles' office told us this morning. But the similarities are uncanny, are they not? "It's still absolute rubbish." We were going to ask how he was so sure, and marvel at how familiar he was with the Raddick oeuvre, but heck, an official denial is good enough for us, and we only print the truth. But that might not be enough to dampen the Internet rumour mongers. Strange coincidences may yet provide fuel for future historians of the Royal Family, particularly when they discover that "Raddick" reviewed a book authored by Prince Charles' father, HRH the Duke of Edinburgh, only eighteen months ago. In his summary of Competition Carriage Driving by the Duke, "Raddick" shone some warmth on a relationship that has often been characterized as cold and distant: "At a crowded end of the market, this book shines out. The author offers words of wisdom to those who see competitive carriage driving as a 'way out of the ghetto' and talks candidly about allegations of banned substances and in-breeding which have dogged the sport," noted Raddick tersely, but generously. As moving words as we shall ever read. Register readers can draw their own conclusions. ® Related Links Henry Raddick at Amazon.com Who is Henry Raddick? - Online Journalism Review "Henry Raddick" on NPR I Am Puppy - Hear Me Yap Related Stories Henry Raddick: man of letters - Amazon star [an interview] Lloyd Webber web hoaxer unmasked Amazon.com star Henry Raddick on US public radio
Andrew Orlowski, 01 Oct 2002

Tyan puts dual Xeons on ATX Tiger

Tyan today claimed a first - a dual Xeon mobo, shoehorned into the bog-standard ATX board form factor. The upshot is that system builders and OEMs can build smaller servers, maybe more cheaply. The Tyan Tiger i7500 incorporates Intel's E7500 chipset - hence the name. And it's got plenty of server-friendly features - dual independent PCI-X buses and Gigabit and Fast Ethernet LAN ports. The Tiger i7500 comes in two flavours: the S2722GNN, with four standard DIMM sockets for up to 8GB of DDR memory, and the S2722GNN-1U, which has two angled DIMM sockets for up to 4GB of DDR. Tyan's press release announcing the new mobo contains a curiously unfinished quote from Christopher Lopes, veep at Netlist Inc: "By populating the Tiger i7500 S2722GNN with 8GB of Netlist DIMMs, customers can realize over $5,000 in memory cost savings, while blazing past competitive servers in performance." And that's it - no mention of any pricing bundle or technology agreement with Tyan. In the absence of such detail, we refer you to the following press release from Netlist, pitching the claim that its 1GB DIMMs are so much cheaper than the competition in a little more detail. The Tiger i7500 is sampling and goes into volume production in October, Tyan says. No prices yet. ®
Drew Cullen, 01 Oct 2002

Creative sticks with Nvidia for Christmas (probably longer)

In August this year Creative Technology conducted an internal review of the graphics chip players, examining roadmaps, benchmarking the products and so on. The outcome, revealed in London yesterday, is that the PC peripherals player is sticking with Nvidia, certainly until Christmas. For "Nvidia has the best silicon for the consumer market today," according to John Moseley, Creative UK marketing manager. And no, his company is not publishing the review, he says. Moseley was speaking yesterday at a product briefing held in central London for UK computer hacks. His Nvidia announcement means that Creative's PR agency will no longer have to field calls by the dozen concerning the firm's graphics card strategy. For the time being. The hacks' questions were of course prompted by Creative's relatively new status as the owner of 3DLabs. 3DLabs is a specialist in graphics chips for workstations - by default. It couldn't stand the heat in the consumer kitchen and fled. But it is returning to the consumer space this year with the P10. The design effort exhausted the company, which survives today thanks to white knight Creative, already a major shareholder and owner of the company since May this year. The P10 was supposed to start shipping in Q3, but that deadline ended yesterday. So there's little surprise here that Creative is sticking with tried-and-tested Nvidia. Next question: so when will Creative move to 3DLabs for consumer boards? The company likes its new sub's roadmaps, and Moseley points out that a couple of years is a long time in graphics chips. But move, assuredly, Creative will. It does not like being one of 100 or more retail board outlets for Nvidia graphics chips; it does not like the fact that Nvidia's major OEMs take precedence, in its opinion, over the major retail board makers. By introducing Creative-branded 3DLabs boards into the retail line-up (sometime next year is a reasonable inference) it will at the very least command more attention from Nvidia. Oh, and some readers have pointed out that Creative is not quite 100 per cent Nvidia today, pointing to the Creative 3D Blaster 5 RX9700 Pro. This ATI board is on sale only in Asia. ® Related story 3DLabs claims 'breakthrough' graphics chip
Drew Cullen, 01 Oct 2002

AOL, Freeserve, Tiscali stick with unmetered Net access

The UK's leading ISPs all say they have no plans to follow BTopenworld's lead and cap their 24/7 flat-rate dial-up services. Yesterday's move by BTopenworld to introduce a 150-hour-a-month cap on its unmetered AnyTime service is a major change for ISPs to consider. No doubt the industry will watch with interest to see if there is any fall-out from BTopenworld's decision to introduce an ever-decreasing limit for Net users. But right now the UK's major ISPs are happy to stand by and watch. Representatives of AOL UK, Freeserve and Tiscali all said their outfits had no plans to introduce such strict limits. And doubts were expressed over the ability of BTopenworld to improve its financial standing through the introduction of user caps. After all, that is one of the major reasons why the ISP - affectionately called BTopenwound because it leaks so much money - introduced this cash-saving initiative. The early response from BTopenworld customers is that they feel they've been ripped off. One told The Register: "It's only a matter of time before they introduce more restrictions." ® Related Stories BTo confirms plan to ditch unmetered Anytime service BTo to ditch 24/7 unmetered service - sources
Tim Richardson, 01 Oct 2002

Danger Inc snags all-you-can-eat deal for Hiptop debut

T-Mobile's Sidekick communicator rolls into US stores today, and Danger Inc has ensured the device will make a splash by snaring the first all-you-can-eat data deal on this side of the Atlantic. Tariffs for the new generation of always-on, packet data digital services from cellphone companies, described as 2.5G (or, if you're Sprint, "3G"), have carried a penalty for users. The first megabyte of data is typically free, but additional use is charged by the byte. The Sidekick - which is how T-Mobile will market the HipTop - will carry a $39.95 a month flat fee for data usage. This model is GPRS, but Danger has a 1x CDMA version in the pipeline - and will launch a color model into the European market in the near future. Danger Inc was founded by Andy Rubin, formerly of WebTV, and drew much attention when Steve Wozniak joined the board late last year. Although it's a tiny start-up in comparison to Nokia and Microsoft, it employs a galaxy of talent from across the Valley, most notably former Be Inc engineers. It has chosen a thin client model - your data lives in the cloud, and the applications are Java - and it resolutely targets consumers in the yoof market, rather than enterprises, with its $199 HipTop. Er, Sidekick. And on this count, Danger has achieved a minor miracle. The Sidekick is fast, fun and ridiculously easy to use. It does exactly what it says on the tin - providing a phone, AOL Instant Messenger (but no SMS) and a surprisingly rich email client with the usual bundle of games and PIM apps. The browser is a bonus - there's no JavaScript support, but pages render cleanly and quickly on the monochrome screen. Most of all, however, it sets a standard for usability from which rivals could learn much. We'll provide a fuller review when we've spent more time with the device. Our first impressions of the Sidekick/HipTop are that it's a joy to use, and could be the breakthrough product for creating a market that's already proven in the rest of the world, but has yet to materialize in the US (thanks to the oxen self-interest of Qualcomm and its carriers, and the regulatory agencies, in failing to ensure there are universal, interoperable standards): the teen texters. If you're impatient, Henry Norr has an excellent and fair summary here of his HipTop experience, and Henry concludes - as you might too - that for a consumer device its sheer utility, and the quality of the execution, make it highly desirable even if you're outside the target demographic. Because there really isn't a comparable device in the price range. The only real competitor is the Handspring Treo, but it's more expensive and commensurately more flexible - it's an open platform - than the HipTop. And open vs closed is something we'll hear more of as the HipTop succeeds. Writing HipTop apps shouldn't be difficult, as they're pure Java. Rubin describes his role as providing a menu to carriers, who can pick and choose which apps they include in the bundle. If there's a weakness in Danger's model, this is it - and not because of anything Danger has done, or could do, but because of its customers. Danger's customers are the carriers - who are at best conservative, and at worst congenitally stupid, and have often shown the same scalping mentality and cronyism as you'd find with small town gangsters. We're already chafing at not having a "BOFH" option from T-Mobile, one that could give us SSH and a C shell, and maybe Arkanoid too.
(Remember, it was the inclusion of Telnet in the first 1996-vintage Nokia 9000 that saved the platform, and sysadmins created a niche market for the product that was substantial enough to persuade Nokia to continue its development). Choice is good, but do the carriers know this? For now, though, with its focus on utility and ease of use, Danger is shining a light on a miserable Stateside tech economy (that's just about to get a whole lot worse, we fear, once the shooting starts), and that's enough cause for celebration. ® Related Stories Steve Wozniak's smartphone adventure Buy? Sell! - Alsop sticks it to Danger. We stick it to Alsop
Andrew Orlowski, 01 Oct 2002

Freeserve network probs in Midlands and South

Freeserve is busy trying to restore service to customers in the Midlands and the South of England after customers reported that accessing Web sites was proving impossible. The problem, which affects users of its AnyTime dial-up service, means that many Web pages load slowly - if at all. Customers have told The Register that the problems began on Saturday morning. Others say it is more widespread than Freeserve is admitting, with users in Manchester and Leeds also reporting problems accessing the Net. An announcement on Freeserve's Web site admits that the UK's biggest ISP is "currently experiencing data transfer problems with the AnyTime package" and "is working to resolve this issue". A spokeswoman for Freeserve told us: "We are aware of some isolated incidences of connectivity problems in the Midlands and South East and our network suppliers are working to rectify the situation for affected users." ®
Tim Richardson, 01 Oct 2002

Mobile phone Java risks ‘minimal’

Is wireless Java at risk from malicious code attack? The answer appears to be no - for vanilla Java 2 Micro Edition (Java 2 ME). But vendors' proprietary extensions are more problematic, according to Markus Schmall of T-Mobile. He recently conducted a study of the security of Java 2 ME, using tests on a Siemens SL45 phone. Java 2 ME is defined so that class loader functions are limited, maths functions are restricted and no file access is possible. This greatly limits the scope and number of attacks possible on mobile devices running Java 2 ME. Schmall considered a number of actions which malicious code might take: accessing storage media, accessing internal memory, initiating Web connections and interfering with installed applications. Although it is possible to freeze mobile phones with maliciously-constructed SMS messages and the like, it's not possible for malicious code to replicate. Header manipulation vulnerabilities that lead to 'freezer' SMS exploits are due to problems with mobile phone firmware - not Java, Schmall concludes. The few Java viruses extant (Strange Brew and Bean Hive) do not pose any risk for Java phones, he found. Strange Brew fails to replicate consistently, even on a PC, and Bean Hive relies on class loader functionality not used in Java 2 ME. Schmall's tests apply only to Siemens phones; separate work is needed to verify that other mobile phones are secure. "J2ME on its own is relatively secure but you have to take care of proprietary extensions," he says. So there's no particular cause for concern just yet, but Schmall said that the evolution of Java to MIDP 2.0 (which brings access to file systems and phone books to mobile Java) poses possible risks for the security of Blackberry devices. ®
John Leyden, 01 Oct 2002

Porn diallers and Trojans – the new face of malicious code

The profile of malicious code on the Internet is changing, with porn diallers and Trojan horses becoming more serious problems. A study of the malicious code blocked last year by managed services firm MessageLabs finds the spread of Trojan horses is becoming more organised. Where it used to record Trojans only sporadically, MessageLabs is now intercepting 40-50 Trojans at a time. These are systematic attempts to infect victims' machines, it says. Sex diallers are another growing problem. These change the number used by dial-up connections to expensive premium rate lines. Such programs are regularly modified by their creators and pose legal problems for AV firms: by adding detection for the programs (which don't load without user interaction) the vendors could be accused of restraining trade. One man's malware is another's useful utility. MessageLabs gets around this by filtering (quarantining) such traffic rather than deleting it. During 2001, the vast majority of viruses blocked by MessageLabs were Word macro viruses, but few achieved any prominence. Built by s'kiddies and using the same infection techniques time and again, such nasties are easily detected by most AV tools with heuristic functions, said Alex Shipp, a senior anti-virus technician at MessageLabs. Word macros also pose a lesser risk because Office 2000 requires macro code to be signed, so the spread of such viruses is becoming more constrained, he added. Shipp is slightly more concerned about the risks posed by script viruses which, as the Kournikova worm proved, can spread rapidly if rates of infection reach a critical mass. Again, virus writers are failing to exercise much "imagination", and this is keeping the problem under control, MessageLabs believes. Windows executable viruses form the main danger going forward, as there are so many ways to write them, Shipp said in a presentation at the Virus Bulletin conference in New Orleans last week. Add in the increased use by virus writers of the exploits favoured by spammers, and the war between VXers and AV vendors is moving to a new battleground, he says. ®
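The quarantine-rather-than-delete distinction described above can be sketched in a few lines; the categories, signature lists and file names below are purely hypothetical illustrations of the policy, not MessageLabs' actual rules or detection data.

```python
# Minimal sketch of a quarantine-rather-than-delete policy for borderline
# programs such as premium-rate diallers. Signature lists and names are
# hypothetical illustrations, not MessageLabs' actual rules.

from enum import Enum

class Verdict(Enum):
    CLEAN = "deliver"
    MALWARE = "delete"          # self-replicating code: never delivered
    UNWANTED = "quarantine"     # diallers etc.: held, recipient decides

# hypothetical signature lists for the example
KNOWN_MALWARE = {"kournikova.vbs.worm"}
KNOWN_DIALLERS = {"premium-dialler.exe"}

def classify(attachment_name: str) -> Verdict:
    name = attachment_name.lower()
    if name in KNOWN_MALWARE:
        return Verdict.MALWARE
    if name in KNOWN_DIALLERS:
        # not deleted outright: the program only runs with user interaction,
        # so the recipient can release it from quarantine if genuinely wanted
        return Verdict.UNWANTED
    return Verdict.CLEAN

for f in ["report.doc", "premium-dialler.exe", "kournikova.vbs.worm"]:
    print(f, "->", classify(f).value)
```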
John Leyden, 01 Oct 2002

America's National Cybersecurity Strategy

Yesterday the White House released its long-awaited "National Strategy To Secure Cyberspace." This high-level blueprint document (black/white or color), in development for over a year by Richard Clarke's Cybersecurity team, is the latest US government plan to address the many issues associated with the Information Age. The Strategy was released by the President's Critical Infrastructure Protection Board (PCIPB), an Oval Office entity that brings together various Agency and Department heads to discuss critical infrastructure protection. Within the PCIPB is the National Security Telecommunications Advisory Council (NSTAC), a Presidentially-sponsored coffee klatch of CEOs that provides industry-based analysis and recommendations on policy and technical issues related to information technologies. There is also the National Infrastructure Advisory Council (NIAC) - another Presidentially-sponsored klatch - allegedly consisting of private-sector 'experts' on computer security, but in reality consisting of nothing more than additional corporate leaders, few if any of whom could be considered an 'expert' on computer security matters. Thus, a good portion of this Presidential Board chartered to provide security advice to the President consists of nothing more than executives and civic leaders likely picked for their Presidential loyalty and/or visibility in the marketplace, not their ability to understand technology in anything other than a purely business sense. Stacking the deck with friendly faces (and thus receiving anything but objective advice) is not new to the President, who recently stacked his Scientific Advisory Council with those supporting his policy agendas. Factor in Richard Clarke's team - many of whom, including Clarke, are not technologists but career politicians and thinktank analysts - and you've got the government's best effort at providing advice to the President on information security, such as it is. (One well-known security expert I spoke with raised the question of whether a conflict of interest is created when people who sell to the government, or stand to gain materially from policy decisions, act in advisory roles - something that occurred during the Bush Administration's secret energy meetings.) Now that you know where the Strategy comes from, and where the real interests of its creators lie, let's examine some of its more noteworthy components. Although the Administration heralds this as the first "National Strategy" for cyberspace security, we need only look back to the Clinton Administration's "National Plan for Information Systems Protection" from 2000, and the President's Commission on Critical Infrastructure Protection Report from 1996. Like its predecessors - and despite the publicity push from the Administration - nearly all of what's in this Strategy isn't new, either in what it says or in what it fails to say. In keeping with tradition, the Strategy "addresses" various security "issues" instead of directing the "resolution" of security "problems" - tiptoeing around the problems instead of dealing with them head-on and demanding results. At times, the Strategy reads like the fear-mongering propaganda published by assorted industry groups and security product vendors. It claims that 70% of cyber-attacks on corporations are caused by insiders, yet provides no source for this statistic.
Further, during its discussion of the threats and vulnerabilities, there's an eye-catching sidebar with a hypothetical worst-case cyberterrorism scenario conjured up by "50 scientists, computer experts, and former intelligence officers" - and throughout the report are statements that the Administration consulted with experts across the country in a variety of industries. Yet there's no reference listing who these 'experts' are, or what their credentials are to enable them to make such prophecies and participate in the preparation of this Strategy, something that undermines the credibility of these statistics and statements. For all we know, these 'experts' are career politicians, academics, or clueless CEOs - many of whom have probably never served in an operational IT capacity - and thus don't understand the reality of today's information environment. To its credit, the Strategy provides (yet another) list of suggested 'best practices' and proposals to improve technology security in a variety of venues, from homes and small business to government and large enterprises. It uses simple, easy-to-read language and presents its contents in vibrant color with lots of white space, eye-catching sidebars and high-tech graphic motifs, very much like a vendor's Powerpoint presentation for prospective customers. In the area of corporate security improvements, the Strategy indeed shines, as it recommends Board-level accountability for information security, proper security administration, and better integration and alignment of information security with senior management and business goals. This is perhaps the best component of the Strategy, and it provides innovative guidance that corporations could implement fairly easily. The Strategy makes it clear that it is to serve not as a "Federal government prescription" but as a "participatory process" to develop America's national information security environment with the private sector, and it holds that a hands-off policy is the correct way to work with industry. Indeed, for technology's private sector, this is a good thing, given the speed at which government operates. Unfortunately, for the federal government, what is currently needed is not a prescription but a mandate on what must be done (and by when) to improve federal information security, not another list of things that "should" be done but most likely won't. In this regard, the Strategy is no different from other government cyber-strategy documents (mentioned earlier) and audit reports (from GAO or OMB) published over the years espousing the need for better systems security and what "should" be done to improve it. For the private sector to take the government seriously in this area, government needs to police itself first before coordinating the efforts of industry. As expected, the Strategy gives a tiny nod to developing a separate government-only network, otherwise known as GovNET. While it sounds good on paper - and has been Clarke's vision for years - leading security professionals question the logic of such a network. Given that the Internet is redundant, with multiple - if not infinite - pathways between nodes, one wonders why Clarke & Co are considering moving large chunks of the government to a network with a finite series of nodes and multiple single points of failure or attack - thus consolidating all their eggs into one basket just waiting to be dropped?
(Earlier this year, Clarke acknowledged that GovNET would still have its share of viruses, trojans, and worms, so one has to wonder further about this proposal, since it's apparently not going to be any more secure or robust than what he's got now.) According to the Strategy, vendors and possibly security consultants may be required to obtain government or industry-based certifications to prove their competency. Again, this sounds good on paper, but some argue this requirement could be skewed to favor large, established companies (or products) and thus alienate small firms, consultants, or alternative technologies from the 'certified' mainstream security or technology industry. Further, the Administration fails to note that a certification (or a college degree in cyber-security, another of its proposals) does not make a person any more competent a professional; rather, it takes years of applied experience to be considered an 'expert' and 'competent' in one's field. Contrary to the profiteering interests of certification and testing organizations, we forget that nearly anyone can pass a test; what matters is how they perform in the workplace, not in the classroom. Regarding technology products, the Strategy discusses employing programmers who understand security to code better products, yet makes no mention of the executives in marketing and corporate leadership who want to bundle features together to make a product 'convenient' for marketing purposes, and thus likely more exploitable. Certainly, we need programmers to understand software and system-level security, but programmers are only one small part of the problem (a very small one in the grand scheme of the software industry) and act at the direction of the higher-ups in the company. Executives must realize the dangers of - and work to reduce or eliminate - 'feature-creep' in their products that leads to exploitation. Just consider how much 'more secure' your information would be, and how much less spam you'd receive, had Microsoft not integrated Internet Explorer and Visual Basic Scripting into Windows. The Strategy notes that "systems often become overloaded or fail because a component has gone bad" and proposes that "trustworthy computing" be part of a national priority. Not surprisingly, this is the same term used by Microsoft to describe its multi-faceted approach to securing future versions of Windows. Conspiracy theories about this will abound, particularly given the close ties Redmond has with the White House. Industry analysts will also watch to see how quickly Hollywood's cartels leap to position their copy control initiatives as part of "trustworthy computing" to ensure their profit streams, and link their revenue protection to computer security features. It's interesting that - perhaps as a result of industry lobbying (or the Administration's ignorance) - the Strategy shows no concern over the current 'monoculture' environment for operating systems, choosing instead to support the development of new security products, technologies, and services to be built around (or over) the current (and heavily-flawed) 'foundation' of most of America's critical systems. The Strategy seems to consider such preventable (but recurring) problems as the price of doing business in the Information Age, something that many believe is foolhardy and complacent thinking. Then again, effectively securing the foundation of our systems - the operating systems - would mean fewer security products and services need to be purchased from third parties ...
perhaps this oversight in the Strategy is a tribute to the lobbying efforts of security vendors trying to preserve their revenue streams? A national strategy is certainly necessary to deal effectively with the many problems of computer security. While there are indeed well-conceived portions of the Strategy that will lead to procedural improvements in America's information security posture if implemented, the Strategy falls far short of what the Administration heralded it as, for the reasons that are the subject of this article. The release of the National Strategy To Secure Cyberspace is yet another Oval Office attempt to gain consensus in dealing with the many problems associated with effective information security in the United States. Unfortunately, in the areas most responsible for the dismal current state of information security, the Strategy fails to recognize and deal with them at all. If the Administration spent one-tenth the time or money on actual security implementation and education (thus leading to long-term solutions) that it does on convening boards of advisors, councils and town hall meetings, and issuing vaguely-worded, broadly-encompassing, slickly-packaged "feel good" reports like this one, there wouldn't be such a large computer security problem needing to be remedied in the first place. Maybe I should start my own Coffee Klatch. © InfoWarrior.org, all rights reserved.
Richard Forno, 01 Oct 2002

BTo to cap ADSL?

The Register has received a number of emails from worried customers of BTopenworld asking if the ISP's decision to cap its unmetered dial-up service could have an impact on its broadband service. It seems they're concerned that if BTopenworld caps its dial-up service, what is stopping it from capping its broadband service? The answer, of course, is nothing. Let's look at the state of play. BT Broadband - the new no-frills service launched by BT Retail - has a daily download limit of 1GB, and the monster telco says it may introduce "further charges in the future for customers whose use exceeds this figure". Earlier this month BTopenworld sent out letters to its broadband satellite users warning them that it would impose restrictions if people used the service too much. And now BTopenworld has capped the use of its supposedly unmetered dial-up service. Insiders have already told The Register that BTopenworld intends to start restricting its ADSL service for its heaviest users. Anyone spot a pattern developing here? ® Related Stories BTo confirms plan to ditch unmetered Anytime service BTo gets tough with Sat bandwidth hogs
Tim Richardson, 01 Oct 2002

Crucial starts flogging graphics cards in Europe

Crucial Technology is to start selling two ATI graphics cards through its European web site with immediate effect (Crucial US has been selling graphics cards for a wee while, of course). Two versions are available. First up is a performance gaming card - the Crucial Radeon 9700 Pro, retailing at 237.59 of our "British pounds", as the company quaintly dubs our currency. Next in line is the Crucial Radeon 8500LE, pitched at the old PC upgrade market. This retails for £71.09. ATI has an exclusive retail relationship in Europe with Guillemot, but we guess this is bricks and mortar only, seeing as how Crucial is the online retail arm of memory maker Micron. The only surprise with today's announcement is just how long Crucial has taken to branch out into selling different components. ®
Drew Cullen, 01 Oct 2002

BTo emails service cap sob story

BTopenworld has begun sending out bleeding heart emails to its punters informing them that it is capping its flat-fee AnyTime package. It seems some naughty people use the service too much and this isn't "fair", says BTo. "We're making some changes to your service in order to maintain the quality and reliability of Internet connection for all our users," it says. "We're changing our Terms & Conditions and introducing a monthly usage quota of 150 hours' Internet access. This works out at an average of five hours a day, every day. Based on your recent usage this is a lot more than you're likely to need," it reasons. The reason it's doing this, explains BTopenworld, is because there are some customers who keep their computers connected to the Internet for long periods of time. "This affects the level of service everyone else receives across your network, and isn't fair on you and others. "Levelling the playing field with a fair and reasonable usage level for all will help us give everyone a reliable quality of service." Bless. ® Related Story BTo confirms plan to ditch unmetered Anytime service
Tim Richardson, 01 Oct 2002

On AMD Athlon XP 2800+ 333 FSB, nForce2 and VIA

Nvidia has confirmed that the nForce 2 graphics chipset is in production, timing the announcement to coincide with today's big CPU launch, the AMD Athlon XP 2800+ with the 333MHz frontside bus (FSB). A data throughput improvement of "up to 25 per cent" is promised by AMD for its new CPU. nForce 2-powered mobos should be heading into retail by the end of this month. VIA today also issued a press release pointing out that it already has two chipsets, the VIA Apollo KT400 and VIA Apollo KT333, which support AMD's 333MHz FSB. So it can provide an upgrade path for gamers who want to install a new 333MHz FSB Athlon - so long as they have the right board, of course. But today's chipset PR thunder belongs to Nvidia. After all, VIA makes so many chipsets and manages to launch most of them when it says it will - unlike Nvidia, which is several months late with nForce 2. nForce 2 mobos will mesh performance GeForce graphics chips with performance Athlon XPs, to produce the AMD gaming experience du jour. Nvidia is keen to see nForce 2-based mobos paired up with fast DDR memory. In-house, the company is running compatibility tests with DDR400 memory, and it has also teamed up with Advanced Validation Labs and Computer Memory Test Labs to run nForce 2 compatibility tests for the "widest possible selection of DDR memories, including DDR333". Taiwan's mobo makers and US system builders are queuing up to flog nForce 2. You want a list of partners with nForce samples? Here it is: Abit, ABS, Alienware, Aopen, ASUSTeK, Atlas Micro, Chaintech, Epox, Falcon Northwest, HyperSonic, Leadtek, MSI, Shuttle Computer, Soltech, Totally Awesome, and Vicious PC. And just how fast is the nForce 2/2800+ 333MHz FSB combination? Here's a friendly quote from Kelt Reeves, president of Falcon Northwest: "Based on our analysis, it's delivering better performance with DDR333 than equivalent 1066 RDRAM based systems - and at a considerable cost savings." We'll wait for the reviews of production systems. These will be out late November time, when the first Athlon XP 2800+ and 2700+ 333MHz FSB PCs roll off the assembly lines. AMD is taking an interesting approach with the launch, opting for "limited edition desktop systems featuring the AMD Athlon XP processor 2800+ targeted at PC enthusiasts and gamers ... exclusively from premier enthusiast PC manufacturers ABS, Alienware, Falcon Northwest, MicronPC and Voodoo PC." In other words, top dollar for some faithful North American system builders while silicon is in short supply. The AMD Athlon XP 2800+ and 2700+ processors will cost $397 and $349 respectively in 1,000-unit quantities. ® A gaggle of press releases nForce goes into production And it's really groovy with the new Athlon XPs VIA's groovy too AMD gamer/enthusiast launch AMD XP 2800+ Reviews/previews Tom's Hardware Ace's Hardware Extremetech HardOCP Anandtech - nForce 2 v. VIA Anandtech - AMD XP 2800+
Drew Cullen, 01 Oct 2002