Sun Microsystems Inc has released its latest Java 2 Standard Edition (J2SE) specification, finally introducing full support for MIT's Kerberos to the Java framework for secure single sign-on to multiple applications and web services, writes Gavin Clarke. Palo Alto, California-based Sun has also introduced architectural changes designed to tackle lingering scalability and performance issues that dogged J2SE 1.3. Sun told Computerwire J2SE 1.4 will "break records" but was unable to provide figures. Senior Java product manager Sherman Dickman said: "We are going to break some performance records, and provide much more complete security for the Java platform." Dickman promised performance figures with J2SE's formal launch next month - the company this week published its final release candidate for review. Dickman distanced Sun from rival Microsoft's own controversial implementation of Kerberos in Windows 2000, which critics attacked for its use of proprietary extensions. He said J2SE will use the "industry standard" implementation of the Massachusetts Institute of Technology's Kerberos strong authentication protocol. Kerberos has come into the spotlight in recent months thanks to the growth of web services. Redmond, Washington-based Microsoft last year promised its Passport online identification system would use Kerberos for secure access to .NET web services. J2SE 1.3 provides some Kerberos support, but Sun says Kerberos will be fully supported in the core 1.4 specification. Dickman cited increased demand for Kerberos from developers, who are making greater use of J2SE and Java 2 Enterprise Edition (J2EE) - which sits on top of J2SE - in web services. Other security enhancements include the Java Secure Socket Extension, the Java Cryptography Extension, the Java Authentication and Authorization Service (JAAS) and the Certificate Path API. Dickman also revealed changes to the J2SE architecture designed to attract enterprise users, which he claimed made J2SE 1.4 twice as fast as J2SE 1.3.
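Mechanically, the JAAS piece of this works by wrapping an authenticated identity in a Subject that downstream code can query. The Java sketch below is our own illustration, not Sun's code: it builds the Subject directly around a hypothetical principal (alice@EXAMPLE.COM) so it needs no KDC, whereas a real deployment would obtain the Subject from a LoginContext backed by the Krb5LoginModule.

```java
import java.util.Collections;
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosPrincipal;

public class KerberosSubjectSketch {
    // Wrap a Kerberos identity in a JAAS Subject. In production the Subject
    // would come from new LoginContext("SomeApp").login() backed by the
    // Krb5LoginModule; here we construct it directly so no KDC is needed.
    static Subject subjectFor(String principalName) {
        return new Subject(true, // read-only: the principal set can't be tampered with later
                Collections.singleton(new KerberosPrincipal(principalName)),
                Collections.emptySet(), Collections.emptySet());
    }

    // Pull the realm back out of the Subject, as authorization code might.
    static String realmOf(Subject subject) {
        return subject.getPrincipals(KerberosPrincipal.class)
                      .iterator().next().getRealm();
    }

    public static void main(String[] args) {
        Subject alice = subjectFor("alice@EXAMPLE.COM");
        System.out.println(realmOf(alice)); // prints EXAMPLE.COM
    }
}
```

Marking the Subject read-only prevents later code from quietly swapping the principal behind the application's back.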
These include a new I/O package that speeds up server applications by letting a single thread multiplex many simultaneous connections, instead of dedicating a thread to each connection. This means a server can support a greater number of clients with fewer threads, increasing performance and J2SE scalability. Other changes include memory-mapped files to let Java access native memory - a technique used by applications written in C and C++. J2SE has also been engineered to work on 64-bit Solaris on the Sparc platform. Support for 64-bit addressing increases the amount of data that can be held in memory and avoids the need to save to hard disk - speeding up data retrieval time. © Computerwire.com. All rights reserved.
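To make the I/O change concrete, here is a minimal sketch in the style of the java.nio package that arrived in 1.4 (written against a current JDK, so it uses a few later conveniences such as ServerSocketChannel.bind): one Selector, one server thread, and a single echo round trip over loopback. It illustrates the multiplexing model; it is not Sun's benchmark code.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;

public class NioEchoSketch {
    // One thread, one Selector: accept a connection and echo one message,
    // instead of parking a dedicated thread on each socket.
    public static String echoOnce(String msg) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // Client side: a plain blocking channel, for brevity.
        SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1",
                ((InetSocketAddress) server.getLocalAddress()).getPort()));
        client.write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8)));

        ByteBuffer buf = ByteBuffer.allocate(256);
        String reply = null;
        while (reply == null) {
            selector.select(); // block until some channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel conn = server.accept();
                    conn.configureBlocking(false);
                    conn.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel conn = (SocketChannel) key.channel();
                    buf.clear();
                    int n = conn.read(buf);
                    buf.flip();
                    conn.write(buf); // echo the bytes straight back
                    reply = readReply(client, n);
                }
            }
            selector.selectedKeys().clear();
        }
        client.close(); server.close(); selector.close();
        return reply;
    }

    private static String readReply(SocketChannel client, int n) throws IOException {
        ByteBuffer in = ByteBuffer.allocate(n);
        while (in.hasRemaining()) client.read(in); // blocking client read
        in.flip();
        return StandardCharsets.UTF_8.decode(in).toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(echoOnce("hello, nio"));
    }
}
```

A production server would keep the select loop running indefinitely and attach per-connection state to each SelectionKey; the point here is simply that one thread services every channel.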
IBM Corp wants to catch the wave of server consolidation for print, file, and Web serving by providing Linux-only configurations for its zSeries mainframes and iSeries midrange servers that carry a lower price tag than plain vanilla zSeries or iSeries machines, Timothy Prickett Morgan writes. IBM will today provide a preview of two upcoming server announcements it will make at the LinuxWorld show in New York next week and at its annual PartnerWorld show in San Francisco next month. Calling the new machines Linux-only is a bit of a stretch, of course, since the zSeries "Raptor" mainframes and the iSeries Model 820 servers will have z/VM and OS/400 installed on them (respectively) to act as partition managers. According to Peter McCaffrey, director of zSeries product marketing at IBM, the new machines are based on feedback and the market success IBM had last year in promoting the mainframe as a platform on which enterprise customers can consolidate their Unix, Linux, and Windows workloads. "Customers have really embraced Linux on the mainframe to help drive down costs," he says. The new Raptor server, which will reportedly have from one to four processors based on a tweaked version of IBM's G7 mainframe engine, will debut initially as a Linux solution; after the machine is shipping sometime around the middle of March, IBM will announce Raptor servers that will support its z/OS operating system (and maybe others). But the Linux editions will officially run only Linux and will not be enabled to run other operating systems. The reason is that the Linux-based Raptors will have lower prices. McCaffrey says that a uniprocessor Raptor server equipped with a reasonable amount of memory and disk, the z/VM license (only good as a Linux partition manager) plus three years of maintenance and software services will sell for around $400,000. A uniprocessor "Freeway" zSeries 900 mainframe with the same features sells for around $750,000.
The Raptor will be able to support hundreds of virtual Linux servers, he says, but the economics start to make sense once a customer puts 20 virtual Linux servers on a uniprocessor Raptor. He also adds that with the workload manager built into z/VM, companies should be able to get anywhere from 80% to 95% sustained CPU utilization on the Raptor running Linux, compared to a 10% to 20% CPU utilization on individual Intel-based Linux servers. Any economic analysis has to take this into account. The iSeries Model 820 configured for Linux comes in single-, dual- and quad-processor versions, and these machines support three, seven or fifteen Linux partitions. (OS/400, unlike z/VM, does not allow hundreds of logical partitions on a single physical machine, but rather four partitions per processor. One partition on each server has to be dedicated to running OS/400, which eats one-fourth, one-eighth, and one-sixteenth of the maximum number of partitions on a single Model 820 server, respectively.) Sources at IBM say that pricing for these iSeries for Linux servers has not been finalized, but that customers should expect a 15% to 20% discount compared to the cost of buying a regular Model 820 server using the same hardware, plus whatever normal discounts they can negotiate off list price. Both the zSeries and iSeries Linux servers can be configured with commercial Linux distributions from SuSE and Turbolinux; Red Hat is expected to have support for these two boxes, and indeed the entire eServer line, in short order, perhaps by the time these two Linux-only servers ship sometime in March. IBM says that 11% of the mainframe processing power that was shipped in the fourth quarter of 2001 was dedicated to supporting Linux workloads. The impression that one gets from IBM is that if Linux had not been available, mainframe revenues would have declined.
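The Model 820 partition arithmetic is simple enough to check: four logical partitions per processor, less the one partition that must run OS/400. The Java fragment below merely restates the article's figures as a sanity check.

```java
public class Model820Partitions {
    // Per the article: OS/400 allows four logical partitions per processor,
    // and one partition on each server must be dedicated to OS/400 itself.
    static int linuxPartitions(int processors) {
        return 4 * processors - 1;
    }

    public static void main(String[] args) {
        for (int p : new int[] {1, 2, 4}) // single-, dual- and quad-processor models
            System.out.println(p + " cpu(s): " + linuxPartitions(p) + " Linux partitions");
    }
}
```

This reproduces the three, seven and fifteen Linux partitions quoted for the single-, dual- and quad-processor versions.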
IBM clearly believes that the trend to consolidate servers and to use Linux for print, file, Web or email serving could unite on less-costly, Linux-enabled zSeries and iSeries servers and thereby increase their sales. What customers will believe could be, of course, another story. A lot will depend on particular situations. Customers with existing zSeries and iSeries servers supporting their core ERP applications will be inclined to listen to IBM's sales pitch and maybe even run some numbers to see if they could save some money by consolidating Unix, Linux, or Windows applications onto Linux partitions on these zSeries or iSeries servers. It is hard to imagine IBM having the same success with die-hard Unix shops or Wintel shops, however. They have no experience with mainframe or OS/400 servers, and because people costs are such a big component of data center costs, probably have no desire to start building up that experience. © Computerwire.com. All rights reserved.
Western Digital, the hard drive maker, has withdrawn plans to raise more money through the issue of new shares, because it thinks it can get by without it. The company points to 'improved fundamentals' in its business; in other words, it is generating enough cash to fund its needs - it's made a profit five quarters in a row now from its hard drive business - and it reckons that this will continue. WD could have raised up to $167.7m from registered but unsold shares. By ensuring these don't enter circulation, WD ensures that a) existing shareholders don't see their stakes diluted, and b) there is no nasty surprise to the share price at a difficult time for tech stocks. WD yesterday reported Q2 net income of $9.2m before non-recurring investment gains, and $12m when these are included. The HDD business generated $15.3m in operating income for the quarter ended 28 December 2001. Revenue was $575m on unit shipments of 7.7m (Q2 2001: $562m on unit shipments of 6.1m, and a net loss before non-recurring items of $7m). WD has completed the purchase of Fujitsu's HDD manufacturing facility in Thailand, bought presumably for a knockdown price. The exit of Fujitsu from the commodity hard drive business will see WD's market share increase, and should translate into less pressure on industry margins.
Updated The W3C has published a revised Patent Policy document which it hopes will help avert a schism over web standards. But it's not a new policy draft. That's under discussion, and recent minutes on the progress of this can be found here Under pressure from IBM, the W3C suggested that royalty-bearing patents should in future be blessed as web standards, under a 'RAND' license. To date all W3C standards have been royalty-free, which, it's universally agreed, has helped accelerate the adoption of web technologies. It's also allowed small developers to work without fear of expensive intellectual property litigation, and allowed free software authors - forbidden by the GPL from linking to patent-encumbered code - to develop open implementations. "The policy of licensing patents under RAND terms and conditions has allowed our best technical individuals to work together without becoming burdened by patent issues," IBMer Gerald Lane wrote last year, in support of royalties. The RAND addition was stalled after the issue blew up at the end of September, with open source developers advocating the formation of an alternative to the W3C if royalty-bearing licenses became an option. Now, with the input of Bruce Perens and Eben Moglen, a compromise has been reached. This doesn't rule out RAND licenses, however. Instead the compromise suggests kicking them into the long grass for 90 days, while a PAG (Patent Advisory Group) is formed. The proposal then needs approval by the working group Advisory Board and the Director. The W3C at this stage notes that it would really rather not have to deal with it, and could RAND-proposers please look elsewhere: "Note that there is neither clear support amongst the Membership for producing RAND specifications nor a process for doing so. Therefore if a PAG makes a recommendation to proceed on RAND terms, Advisory Committee review and Director's decision will be required.
It is also possible that the PAG could recommend that the work be taken to another organization." Bruce Perens told us via email last night: "Obviously the policy is very pro-RF, so anyone who was trying to lobby for RAND did not succeed. But that doesn't mean they can't go to any other standards body - for example ECMA, with their RAND standards." Perens said he couldn't comment on individual positions adopted during the formulation of the new policy by any Big or Blue members. Head of communications for the W3C, Janet Daly, added: "It's not a revision of policy - it's like a backgrounder. There's now a window into seeing how W3C work is done in relation to issues concerning patents ... you no longer need to be an attorney to understand how the W3C is making efforts to be absolutely clear about a preference for RF work and how to resolve questions when RF is not explicitly stated. We're moving from implicit to explicit modes, and we're moving to explicit in full public view." So in advance of the new Policy Draft, RAND remains on the table, for bidders wishing to risk the flak, which won't please GPL developers one bit. On the other hand, even if a RAND specification reaches the end of the procedural assault course, and gets the director's blessing as a W3C standard, it may be so poisonous that no one (apart from IBM) would want to touch it. The revised PPWG Policy draft, which is RF, is still under discussion. But that will bring some welcome clarification. ® Related Link W3C: Current Patent Practice [the new draft] Related Stories IBM risks billion dollar Linux strategy with W3C RAND demands W3C defends RAND license The free Web's over, as W3C blesses Net patent taxes Tempers cool in Pay to Play Web row
Gateway is to implement deep cost cuts in its US heartland, announcing 2,250 job cuts and the closure of 19 retail stores. The PC maker is taking a charge of $75-$100m and expects to save $100m annually from the cuts, which represent a 16 per cent reduction in headcount. This follows last year's staff reduction of 25 per cent when overseas operations were shut down. Four out of the company's 10 admin sites are to be chopped. Gateway will still operate 277 stores in the US, after the cuts. The company had a woeful Christmas, losing market share and failing to meet forecasts. But cost cutting has ensured that it made a - skinny - profit in the December quarter. The company yesterday declared net income of $5.1m before non-recurring items, on sales of $1.1bn (it had forecast sales of $1.25bn). In the same period last year, sales were $2.4bn. ®
London new media agency Oyster Partnership has refused to comment on allegations that it is to shed around 20 per cent of its workforce. According to insiders, staff were told of the job cuts - thought to number around 30 staff - earlier today. However, the company, which boasts such high-profile clients as the BBC, BT and Orange, declined to answer our questions concerning the matter. But in a cryptic statement issued this afternoon the company said: "Oyster has undergone a review of 'the way we work', which was initiated following the appointment of Paul Kingsley as CEO, in November 2001. As a result of this review it has been decided to place a greater emphasis on role and competency. "We will continue to invest and grow our core skill sets, such as business and experience architecture consultancy, and consolidate key support functions, whilst delivering value cost effectively." Although Oyster has remained tight-lipped on this issue, it is keen to point out that it is rated seventeenth in the Sunday Times Virgin Atlantic Fast Track 100 listing, which ranks the UK's hundred fastest-growing unquoted companies by sales growth. According to its entry in the list: "Its sales have increased 136 per cent a year from £798,000 in 1997 to £10.5m in 2000, when it had 137 staff. However, with the downturn in the digital media sector, Oyster has had to cut its staff from a peak of 220 to 140 this year." ®
Energis continued to slide this morning following yesterday's revenue warning that led to a freefall in its share price. In a trading statement issued yesterday afternoon the alternative telco said that its turnover and earnings before interest etc (EBITDA) for the financial year are "unlikely to meet consensus expectations". Shares fell 50 per cent on the news. Energis expects turnover for the full year to be down by around 5 per cent on anticipated revenues of £1.014 billion. EBITDA is expected to be around 10 per cent below the current consensus forecast of £155 million. The shock announcement caused panic among investors as its share price plummeted 57 per cent to end the day at 23p. By mid morning today shares had fallen a further 21 per cent (5p) to 18p. The company blames lower than expected revenue growth and increased pressure on its margins for the profit warning. In a bid to resolve the situation Energis plans to cut costs by a further £30 million a year and reduce its capital expenditure. No one from Energis was available to comment on whether any jobs would be lost as a result of this latest belt-tightening exercise. In November Energis said it was axing up to 350 jobs in a bid to cut £20 million a year from its overheads. ® Related Story Energis axes 350 jobs
A hacker's backdoor affects an estimated one million Vaio notebook computers, manufacturer Sony warns. The security flaw could let crackers manipulate or delete data on hard disks and is found in proprietary software installed on Vaios sold in Asia, the Middle East and South Africa since May 2001. Vaio models bought in Europe or America are believed to be immune to the problem, which relates to Manual CyberSupport for Vaio Version 3.0 and Version 3.1 that is pre-installed in some models of Vaio computers and the Recovery CD. In a statement, Sony said because of the vulnerability "there is a danger that a third party may find a way to bypass the software's security and access the Vaio through a homepage on the Internet or by email without authorization." Either an "Internet homepage, email containing HTML or a HTML file attached to an Email" containing malicious code might be used to exploit the glitch. "If the Vaio is attacked in this way, it may result in stored data being overwritten, erased or copied. In this instance, running regular anti-virus software will not protect the affected models of Vaio", Sony warns. Sony has provided a software patch on its site and is notifying Japanese customers, where the majority of affected laptops were purchased, by email. A call centre to field calls on the problem and offer affected customers the software fix on a CD has also been set up. Sony estimates the software bug will cost it up to $1.49 million to fix. ®
Boffins at Hewlett-Packard are working with academics to develop chips so small they could fit on the head of a pin. HP and UCLA yesterday announced they have received a US patent for technology that could make it possible to build very complex logic chips - simply and inexpensively - at the molecular scale. The collaboration is pursuing molecular electronics as an entirely new technology that could augment silicon-based integrated circuits within the decade and eventually replace them. Silicon technology will reach its physical and economic limits by about 2012, HP believes. The latest patent involves a process for dividing a minute chip into discrete zones, making it possible to build more complex circuits without running into problems from interference. The scientists had previously worked out how to fabricate a grid of nanowires just a few molecules wide. Here's how HP explains the technology: "Today's chip manufacturing process involves multiple, expensive precision steps to create the complex patterns of wires that define the computer circuit. The HP and UCLA invention proposes the use of a simple grid of wires - each wire just a few atoms wide - connected by electronic switches a single molecule thick. "Previously, HP demonstrated in the laboratory how some rare earth metals naturally form themselves into nanoscopic parallel wires when they react chemically with a silicon substrate. Two sets of facing parallel wires, oriented roughly perpendicular to each other, could then be made into a grid, like a map of Manhattan in New York City with streets running east-west and avenues north-south." "In a related experiment, researchers from the collaboration crossed wires the size of those used in today's computer chips and sandwiched them around a one-molecule thick layer of electrically switchable molecules called rotaxanes. Simple logic gates were then created electronically by downloading signals to molecules trapped between the crosswires." 
While simple logic circuits have been formed in previous experiments, until the most recent patent, interference remained a barrier to creating more complex chips. The solution proposed by the patented invention is to cut the wires into smaller lengths by turning some "intersections" into insulators. Insulators are created by "cutter wires," which are chemically distinct from the others. A voltage difference between the cutter wire and the target wire creates the insulator. The latest patent, issued to Philip Kuekes and R. Stanley Williams of HP Labs and James R. Heath of UCLA, builds on previous patents and scientific work, including a patent for a memory chip based on molecular switches granted to the company and university in 2000. The work is being funded by a four-year, $12.5 million grant from the U.S. Defense Advanced Research Projects Agency and a $13.2 million investment from HP. ® External links HP, UCLA collaboration receives key molecular electronics patent Related Stories HP moves towards molecular-scale computing Scientists tune in to molecule-sized transistors Chip biz challenged to develop molecular CPUs
British ISP Pipex is to invest £2 million in subsidising the installation of ADSL services. Aimed primarily at residential users, the move is designed to kick-start the UK's fledgling broadband market. The money will be used to cover the installation costs for 40,000 new customers who sign up to a DIY self-install ADSL product. The monthly subscription cost of the single user product, PIPEX Xtreme Solo 512Kbps, has also been priced competitively at £29.95 (excluding VAT). David Rickards, MD of PIPEX, told The Register that 2002 is going to be a big year for broadband in the UK and that the money from its Broadband Development Fund is an investment to attract new customers. However, the competitively-priced monthly subscription means that margins are tight. But if Pipex - which has some 1,400 broadband customers at the moment - can swell its customer base over the next six months or so as expected then today's initiative will be looked at closely by other ISPs. Pipex is also spending £3 million on a marketing campaign to get people to sign up to its deal. E-envoy Andrew Pinder has backed Pipex's bold move claiming that this initiative will "continue to bring broadband benefits to more businesses and consumers". To find out more visit www.xtreme.pipex.net. ®
Shares in high-speed network outfit Fibernet collapsed by more than a third this morning after it reported that first half revenues are expected to be down on last year. In a statement the company said that it has "not been immune to the current economic environment and has, in the first quarter, seen sales cycles in its UK business lengthen considerably. "As a result, revenues in the first half-year may be below those achieved in the same period last year," it said. However, it expects full year revenues to grow, although at a rate that is "significantly below market expectation". And it warned that a fall in revenues was likely to have a "negative effect" on the company's operating performance. This grim news sent investors scurrying to their brokers ordering them to sell. By late morning Fibernet's shares were down 137.5p (37.67 per cent) at 227.5p. Despite the slump Fibernet maintains that its balance sheet remains strong and that the business is fully funded. And it maintains that demand for its wholesale DSL products 'augurs well'. However, this was not enough to convince investors to hold their nerve, and the statement sparked a period of panic selling that mirrored the run on Energis' shares yesterday. Fibernet is one of the few remaining telcos to be actively involved in local loop unbundling in the UK. So far only around 150 lines in total have been unbundled. ® Related Stories Energis shares freefall after revenue warning Fibernet offers unbundled DSL from next week
The Windows XP user interface has been described cruelly, and frequently, as the "Teletubbies" UI, but as with much of the rest of WinXP it has a few rough edges which Microsoft will no doubt polish up a tad in some forthcoming service pack. Why wait, though? ISP and design agency Digital Ink has thoughtfully knocked up a groovy piece of wallpaper specifically for those WinXP users who like the Teletubbies UI, but who think it's not anything like full-on enough. We're not suggesting that just changing your wallpaper is going to sort out the whole UI of course, but it may fire you with sufficient enthusiasm to craft a few more appropriate icons to go with it. This should whet your appetite. Having cast our eyes over the wallpaper, we couldn't help noticing that there might just be a small chance of a company, a person or two maybe thinking they might own some of the bits. So although we thought it was a great piece of wallpaper we bravely suggested Digital Ink host the download themselves. Note the 'please don't shoot' message down at the bottom. ®
You get a better class of divot in the helpdesk business, according to an ICM Research survey carried out recently for support specialist outfit Touchpaper. Despite constituting less than 5 per cent of the workforce, board level users apparently chalk up a whole 25 per cent of calls to company help desks concerning mobile applications and devices. Touchpaper very decently suggests that this is simply because they haven't been given adequate levels of training, which we suppose is true enough as far as it goes. However, writes a Register board level user, they're also to a great extent pig-ignorant, self-important gits who claim they're too busy - but who're really too scared of being made to look like prats - to go on training courses. It's also worth noting that board level users are far more likely to acquire expensive mobile computing toys without having any real mission-critical need for them, and hence without the motivation to master them properly - motivation the field force is likely to have. According to the survey, 70 per cent of help desk and support managers feel inadequate training has been given to those using mobile technologies, while another 15 per cent said the majority of calls to support teams concerned using the devices and software remotely. Touchpaper worldwide sales and marketing director Lee Chadwick agrees with us on the self-important bit, although perhaps a tad less vehemently: "Due to the nature of their role and level of seniority, board level users expect immediate support. This inevitably causes a considerable strain on help desk resourcing, and knocks the priority of other support incidents out of kilter. "Mobile working technologies are an asset to any company, however, to get the most out of them, practical training has to be given to all staff, no matter how junior or senior." ®
A remotely exploitable buffer overflow glitch poses a risk for AOL ICQ users who have failed to apply a security fix, CERT warned yesterday. It says attackers who are able to exploit the vulnerability may be able to execute arbitrary code with the privileges of the victim user. An exploit is known to exist, but it is not believed to be widely distributed. Nor is there any evidence of crackers scanning the Internet in search of vulnerable machines. Since ICQ is used by an estimated 122 million users, the vulnerability is still a concern. The buffer overflow, which affects AOL Mirabilis ICQ Versions 2001A and prior, occurs during the processing of a Voice, Video & Games feature request message. As with the AOL Instant Messenger (AIM) vulnerability discovered earlier this month, AOL has modified the ICQ server infrastructure to filter malicious messages that attempt to exploit this vulnerability. However exploiting the vulnerability through other means (man-in-the-middle attacks, third-party ICQ servers, DNS spoofing, network sniffing, etc.) may still be possible. AOL Time Warner is recommending all users of vulnerable versions of ICQ upgrade to 2001B Beta v5.18 Build #3659. ® External links CERT Advisory: buffer overflow in AOL ICQ Related Stories Google calls time on AIMSearch prank AOL bungs buddy-list security hole AIM gives up control of Windows machines AOL buddy-hole fix has backdoor AOL/Netscape sues MS AOL shadows Microsoft on instant alerts
Updated Online auction site GunBroker.com is recovering after its "worst two days" ever were spent repairing the damage when its EMC disk array, which is supposed to guarantee 100 per cent uptime, failed. A message on the site details the travails GunBroker.com went through when its arrays went titsup last weekend. "It took EMC 24 hours to get it back online, and when they got it back online they corrupted our database," the message states. "Although we have tape backup the tape runs at regular intervals and the crash occurred at the worst possible time. Everyone here worked 48 hours straight to restore the damaged data as fully as possible." Having invested heavily in its infrastructure, GunBroker.com is keen to discover why its EMC disk arrays failed and what it needs to do in order to avoid any repetition of the problem. GunBroker.com has apologised to its customers about its outage and advised them to check recent auctions to make sure their listings or bids are still there. It has extended auctions and waived the final value fee on items listed between Monday and Wednesday this week for its business users. ® Update EMC disputes this version of events and says its systems were not responsible for the outage. The storage giant said that Gunbroker.com does not own an EMC array or any other EMC system. “Gunbroker.com outsources most of its IT operations to a service provider, an EMC customer, which experienced the January 18 data centre outage,” EMC PR manager Greg Eden said in a prepared statement. “While EMC customer service was involved in diagnosing the problems experienced by the service provider, EMC storage systems were not the cause of the outage.” Related Stories EMC: Eating Shark Fin Soup EMC spreads software wings IBM+EMC = win-win or dog's dinner? EMC slashes 2,400 jobs EMC reveals price cuts, almost MemoWatch Get touchy-feely with EMC and Lucent
Telstra has admitted rigging a poll in which consumers were asked whether the Australian telco charged too much for Internet access. Two hours after ZDNet Australia posted the question "Does Telstra's BigPond Internet service provide value for money?" 25 people said "no" with just one saying "yes". Amazingly, half an hour later the "yes" vote had soared to a whopping 287. According to ZDNet, the massive swing was traced back to Telstra. The Aussie telco is now looking into the matter but insists this was not a "Telstra endorsed initiative". Earlier this week The Register contacted Telstra for comment on news from Australian broadband community site, Whirlpool, that Telstra is planning to raise broadband pricing sharply for residential broadband Internet users. Curiously, we've yet to receive a reply. No doubt Telstra's PR team is far too busy handling this cock-up to respond. ® Related Link Telstra busted rigging user surveys
Letters When we first started to learn about SMP systems, our first thought was - gosh, don't they go on about caches a lot? But it isn't just in parallel processing where the hairy business of cache coherency is a problem, as we've seen with the AMD Linux bug blame game. This has affected uniprocessor systems, and I was stumped. But Reg readers have provided a wealth of detail, and what follows will take you from a bird's eye view to the low-level nasties. Jud Leonard provided the best summary: There's no way for the OS to know when it should do that cache flush. The flush would have to be done after the speculative write, but before the AGP attempted to write the same word. And the software doesn't even think that the code which did the speculative write ever got executed. It was something the processor did to get ready for instructions it thought the program was about to execute, but then the program branched off somewhere else. I think one can make a case that this is a design error in the Athlon, though it is arguable either way. Similar problems have come up in the Alpha Ev6, and, I would assume, most out-of-order processors. The page size option matters because if you're using 4k pages, the processor doesn't have valid mappings for the pages that are being used by the AGP, so those speculative writes to bad pages don't get performed, and the coherence problem doesn't arise. Or as John Riddoch summarizes: My understanding is that the CPU will happily page blocks of 4MB into cache if the pages are set to be cacheable. Unfortunately, this 4MB can include some 4k pages that the GART is using, and isn't cache aware. So, the OS/CPU caches a 4MB block of data which includes one or more 4k blocks used by the GART. Before the CPU/OS pages this back to main RAM, the GART changes one or more of these blocks but this change is merrily overwritten with "stale" data.
If you switch to 4k blocks, the OS will never cache any of these, as any GART data will use up a whole 4k block (at least, the block will be marked as used). Need more detail? Read on. This note from Lawrence D'Oliveiro from New Zealand details why the 4MB page triggered the problem: The problem lies in conflicting accesses to a block of memory by both the AGP processor and the CPU. The problem is more likely to occur with a 4MB page size, I assume because the large page size makes it more likely for the CPU's memory mappings to collide with the AGP processor's ones. A simple cache flush doesn't solve the problem, because all a cache flush does is explicitly force synchronization between the cache and main memory (synchronization which will normally happen at some point anyway). Because the memory block was marked for write access when it was loaded into the cache in the first place, this synchronization takes place by doing a write back to memory. Unfortunately, this clobbers data which was already written to the same memory by the AGP processor. Hence the problem. Though it does seem a bit dumb that an AMD CPU has to write back bits to memory even when they haven't changed... Richard Urich adds: With a 4M page, the OS may wind up assigning memory address X to AGP. However, some data may end at address X-1. This means if a loop is writing data from X-100 to X-1, the processor will likely mispredict when you are done and by accident think it is also writing data to address X. It will of course realize its mistake and not write to X, but the data will already be loaded into cache. Then when the Athlon finally figures out it's not going to write to X, it will put its cached value back to X, leaving you open to problems. The problem occurs when a 4M page is being used by more than one thing, some of which are cacheable and some of which are not. With 4K pages, though, only one thing should be using any given page.
As for the flushing, I would think you could invalidate, but I'm not sure how easy it would be to tell when you need to, and you would need a pretty big guarantee nothing useful was on that cache line. It's probably better to just move AGP to a page set to non-cacheable. Erich Boleyn offers even more detail: When the following three conditions occur: 1. memory is marked by a page table mapping as "cacheable". 2. the mapping is actively in the data TLB (i.e. not a TLB miss), and doesn't need to be fetched. 3. an instruction speculatively writes to data in that page (note that this might be an indirect memory reference using a predicted, but incorrect, address - so it could be that the instruction wasn't really intended to write there... I'm not exactly sure what their boundary conditions to issue a cache-line fetch here are). ...then the Athlon series of processors marks the cacheline being loaded from the bus as dirty, even though it may never get new data written into it.
All dirty cache-lines must be written back at some point. When this happens for a memory region which was supposed to be uncacheable (and doesn't participate in the cache-coherency protocol, like the AGP controller), then it may overwrite something else that was placed into that region, or if it was uncacheable memory representing, say, a memory-mapped I/O area for a device, then who knows what the consequences would be. The reason the AGP GART mapping in the chipset has such a problem with 4MB page mappings in the CPU is that in the standard usage model for 4MB page mappings, you just map ALL of RAM with them to reduce the number of TLB misses, and then the subset of RAM which gets used by the AGP GART overlaps them. 4KB mappings marked as cacheable would still be a problem, but both: a) the likelihood of them being in the TLB at the time a speculative instruction comes along that wants to write to that area is small; and b) the likelihood of that particular page being currently mapped in the kernel/userspace as cacheable is much smaller. To my knowledge, no other processor (certainly no Intel x86 processor), even non-x86, has this "feature", and therefore would not have this problem. Technically, according to the specs for how cacheability and page mappings are described, the OSes/software in question should be written such that any uncacheable/incoherent area is NEVER marked by page mappings as cacheable, but because of the way Intel implemented theirs (and earlier AMD/other vendor x86-compatibles), people were sloppy and got away with it. Finally, after a similarly exhaustive account of the problem, regular Tom Walsh concludes: "It stretches the limits of my imagination to call this an OS problem." Yes, but all that trouble with AGP on a single processor system. What's it going to be like on a 2-way? Thanks to all who wrote in. ® Related stories The Linux-AMD AGP bug - who's to blame? AMD chip bug snares Linux users
A fine scoop by the San Jose Mercury apparently confirms the existence of Intel's 64bit Plan B, codenamed Yamhill. According to the Merc, Yamhill adds 64bit instructions to the existing x86 architecture, and may appear in the Prescott chips, "with an option to turn the features on or off." The emphasis is on 'may', as according to the former Yamhill engineer, no decision has been taken to proceed with Plan B. It's not clear whether this is a new core, or simply new instructions bolted onto the P7 core. But it makes Itanic a harder sell than it already is: and Compaq and HP, who have pledged to end-of-life their own 64bit architectures in favour of IA-64, may well be wondering if they pulled the trigger too soon. Itanium faces performance comparisons not just from the RISC rivals - with Sun and IBM investing heavily in their 64bit processors - but from Intel's own x86 line. This is often overlooked - but one of the consequences of the size and complexity of IA-64 is that it's the last to benefit from process improvements: McKinley will debut at 0.18 micron, when P4 is tooled to 0.13. With IA-32 getting SMT, Intel's own x86 line would continue to provide stiff competition for its 64bit big brother. And of course AMD will bring the Hammer - a 64bit, backward-compatible x86 chip - to market. Although it's reasonable to assume that Yamhill features would make it harder for Intel to justify ongoing investment in IA-64, much of the hard work has been done in the ISV community. We'd be surprised if a parallel skunkworks to create a more economic, and thus more marketable, Itanic isn't already underway. For example, an Itanium without the complexity of 32bit compatibility. Sun (with MAJC), Transmeta (with Crusoe) and PACT (with XPP) all use explicitly parallel VLIW architectures. Even if Intel decided that IA-64 wasn't ready for prime time, it has built up plenty of expertise in EPIC processor and compiler design, and would be loath to throw this out, too.
® In the absence of official word on Yamhill, we instead provide you this opportunity to look at the Yamhill Juvenile Offenders Detention Facility. ® Related Story Do not feed, poke or disturb the Itanic user