Intel today relaunched its IA-64 architecture, moving the processor once codenamed Merced but now officially branded Itanium much, much closer to the mainstream desktop PC market than Chipzilla has previously suggested.

At last year's Microprocessor Forum, Intel spokesfolks pooh-poohed claims that Merced was anything more than a high-end architecture aimed at big league, 64-bit applications, specifically databases and operating systems. Not anymore, it isn't. This time round, Intel's principal engineer and IA-64 microarchitecture manager, Harsh Sharangpani, clearly positioned Itanium very much in the mainstream "commercial" server and workstation markets, stressing the benefits of the chip's Epic (Explicitly Parallel Instruction Computing) architecture for everything from digital content creation to encryption and security roles for e-commerce and other "Internet applications" (Web browsing?).

In essence, then, Itanium will push right down into the space currently occupied by Intel's Xeon line, possibly even to the extent of supplanting it. And for anyone who thinks that's as far as it goes, Itanium will offer full MMX and Streaming SIMD Extensions (SSE) support in addition to full IA-32 compatibility.

That was always part of the Merced gameplan, but Intel does appear to have upgraded its x86 support to something more than mere instruction set emulation. The Pentium family has long supported x86 only as a kind of abstraction, decoding x86 instructions at runtime into something more akin to Risc. Itanium will take the same approach, but unlike the original Merced spec., IA-32 code will be run by the IA-64 execution core itself. That, Sharangpani promised, will ensure "full Itanium performance on IA-32 system functions".

Much of that performance -- at least at the 64-bit level -- will come not from the chip per se, but from highly complex compilers turning source code into object code structured to play to Itanium's architectural strengths.
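That compiler-driven approach -- working out up front which operations can safely run side by side, rather than leaving it to the hardware -- can be caricatured with a toy scheduler. This is purely our sketch of the general idea, not Intel's actual bundling rules (real IA-64 packs instructions into fixed three-slot bundles with template and stop-bit constraints):

```python
# Toy EPIC-style scheduler: group independent instructions into issue
# groups at "compile time", so the CPU needn't discover the parallelism
# itself. Each instruction is (dest_register, source_registers).

def bundle(instructions):
    """Greedily pack instructions into parallel issue groups.

    An instruction may join the current group only if none of its
    sources is written by an instruction already in that group.
    """
    bundles = []
    current, written = [], set()
    for dest, sources in instructions:
        if any(s in written for s in sources):
            bundles.append(current)       # dependency found: close group
            current, written = [], set()
        current.append(dest)
        written.add(dest)
    if current:
        bundles.append(current)
    return bundles

# a and b depend only on pre-existing values, so they can issue
# together; c reads both results and must wait for the next group.
prog = [("a", ("x", "y")), ("b", ("x",)), ("c", ("a", "b"))]
print(bundle(prog))  # [['a', 'b'], ['c']]
```

Because the groups arrive pre-packaged, the hardware can dispense with much of the dynamic dependency-checking logic a conventional out-of-order core carries around.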
Enhanced compiling was always part of the Merced strategy, but it's clear from Sharangpani's comments that to get the full benefit of the new CPU, software developers will need very highly tailored compilers indeed to keep the chip's streamlined pipeline fed with instructions and data. It certainly sounds as if Intel has stripped away much of the machinery a modern processor uses to run as many operations as it can in parallel and to make sure it follows the right branches in program code in the process. Instead, compilers will figure most of this out before the code ever gets to see a processor that can run it.

Not that the chip doesn't perform some dynamic optimisation at runtime -- there's some powerful branch prediction work going on within the Itanium core and plenty of code pre-fetching -- but it's clear the focus is very much on fine-tuning software before it's run: the compiler is expected to build hints into the code telling the processor which chunks of instructions to pre-fetch.

Intel has followed the now familiar path of placing the L2 cache on the die and supporting off-chip L3 cache (up to 4MB) on the processor daughtercard. How fast the L3 cache or, for that matter, the Itanium itself will operate, Sharangpani refused to say. In fact, for a Forum presentation rather longer than is usually allowed, Sharangpani gave away very little solid data, sticking solely to the more technical elements of the chip's operation.

Itanium will go into production "mid-2000", but with testing and tuning taking place throughout the first half of next year, whether that means volume production remains to be seen. Sharangpani made much of what he called "strong progress on initial silicon", but making progress (no matter how good) isn't the same thing as finishing it. ®
AMD followed yesterday's launch of the 700MHz Athlon with the announcement today that it wants to push the chip into the more lucrative server and workstation markets. And in a bid to take Intel's own new server and workstation processor, Itanium (aka Merced), head on, the Chipzilla wannabe launched its own move into 64-bit computing: x86-64, a 64-bit extension to AMD's existing 32-bit instruction set architecture (ISA).

The move into the server and workstation worlds is clearly part of AMD's attempt to be perceived as something more than an x86 knock-off merchant. The success it has had with its 3D Now! technology and Athlon itself have helped here, but getting into more 'serious' markets should, the company hopes, prove once and for all that it's not just about chasing marketshare in the low-end PC market. More pragmatically, AMD will continue to suffer badly from Intel's aggressive pricing policies while it stays in the mainstream desktop market -- moving up beyond even the performance PC sector would go a long way to stemming all that red ink AMD is haemorrhaging.

Its approach essentially boils down to an 'Athlon Xeon' part. The modified Athlon will offer better support for multi-processor systems, expanded L2 cache support -- up to 8MB of it -- and a faster, 266MHz front-side bus. How much faster than the current Athlon the new chip will be remains to be seen. The faster FSB will clearly help, but most of the chip's anticipated (by AMD) near doubling of the current 700MHz part's performance depends entirely on "projected" compiler improvements, so we'll have to wait and see.

Central to Athlon's extended multi-processor role will be what AMD is calling the Lightning Data Transport (LDT), which was designed to provide a single, unified connection mechanism linking processor and North Bridge to multiple bus technologies -- PCI, System I/O (the combination of the NG I/O and Future I/O initiatives), etc.
-- but neatly also serves as a multi-processor communication channel. AMD will offer the new Athlon in a dual-CPU module which connects both processors to the North Bridge chipset (via an Alpha EV6 bus) and thence to the AGP graphics card and the system DRAM. Plug four of these together and -- bingo -- you have an eight-way MP system. LDT is a point-to-point interconnect providing a throughput of up to 6.4GBps each way and a channel width of 8, 16 or 32 bits. The 'Athlon Xeon' and LDT should ship sometime in 2000, the company said.

Later, AMD will introduce its x86-64 ISA, which will involve a further modification of the processor -- assuming it's not being held back for K8, of course -- that is essentially a 64-bit version of its current x86 implementation but allows the processor to behave as a 32-bit chip when it's running existing 32-bit apps. x86-64 will also add some "specialised operations" to the x86 set, plus what AMD calls "technical floating point instructions", which appear to extend 3D Now! to something closer to Motorola's AltiVec technology than Intel's SSE.

With the 64-bit chip, AMD is clearly anticipating Intel pushing Itanium closer to the mainstream market than Intel is currently claiming. AMD's pitch is that x86-64 will offer all the benefits of 64-bit computing without having to work with a completely new ISA or limiting the performance of legacy applications. AMD VP of engineering at the company's Computation Products Group, Fred Weber, reckons solid IA-32 support will be key since almost all applications other than operating systems and databases will never be 64-bit. True, but since Intel too now seems to realise this and has upgraded its Itanium IA-32 support accordingly, AMD's lead here may not prove so solid. However, AMD is also promising multiple x86-64 cores on a single chip, which is something it could possibly do well before Intel gets any multi-core Itanium processors out of the door. ®
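A footnote on those LDT numbers: peak one-way throughput is simply link width times transfer rate, so the quoted figures are easy to sanity-check. The 1.6 giga-transfers-per-second rate below is our inference from the quoted 6.4GBps and 32-bit width, not a figure AMD has published:

```python
# Back-of-the-envelope check on AMD's LDT throughput claims.
# Peak one-way throughput = (link width in bytes) x (transfers/second).

def ldt_throughput_gbps(width_bits, transfers_per_sec_g):
    """Peak one-way throughput in GB/s for a point-to-point link."""
    return (width_bits / 8) * transfers_per_sec_g

# A 32-bit link needs 1.6GT/s to hit the quoted 6.4GBps each way;
# the narrower 16- and 8-bit variants scale down proportionally.
print(ldt_throughput_gbps(32, 1.6))  # 6.4
print(ldt_throughput_gbps(16, 1.6))  # 3.2
```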
IBM did indeed put some flesh on the bones of its Power4 processor, codename Gigaprocessor, but the revelations focused solely on the upcoming chip's architecture rather than less technical but more prosaic information like, well, how fast the damn thing will be.

Speaking at Microprocessor Forum, Big Blue's Power4 design chief, Jim Kahle, did reveal that the chip will contain two processor cores plus a shared on-die L2 cache and control circuitry for an external L3 cache. Bandwidth is clearly all in IBM's eyes, and Kahle reeled off a stack of statistics: the die-to-L3 link has a data throughput rate of over 40GBps, while the core-to-L2 link can handle over 100GBps. Each Power4 contains a chip-to-chip communications module to enhance multi-processing systems, and these modules operate at over 35GBps.

The point here is that the chip and its architecture have been designed from the ground up with server roles in mind -- as Power development head Charles Moore said when he introduced the chip at last year's Microprocessor Forum -- and servers are primarily about moving information from one place to another. That, in turn, is primarily about bandwidth, especially in Internet roles where usage tends to fluctuate rapidly, with frequent high bandwidth demand peaks followed by periods of relatively low usage. As Kahle put it: "Our design philosophy has been to get the right data to the right place at the right time."

Kahle didn't state how fast the Power4 will actually run, beyond a broad 'greater than 1GHz', something everyone could pretty much guess from the chip's codename. However, he did confirm that the beast contains over 170 million transistors -- many of them devoted to the on-die L2 cache -- to the accompaniment of awed whistles from the Microprocessor Forum audience. According to Kahle, Power4 will sample early next year, with systems based on the processor shipping Q1 2001.
IBM will offer Power4 on four-chip modules, effectively providing eight-way multi-processing through a single approximately 10cm x 10cm unit. ®
Microsoft is spending more on online advertising in the UK than any other business, according to a study by Internet AdWatch and Fletcher Research. Not that the sums involved are vast. In June (the latest figures available, apparently), Microsoft spent £141,000, pipping IBM's £124,000. Intel was a more distant third, with a spend of just £74,000 - but we doubt whether pushing the Intel brand name to the punters today gives any significant increase in sales. Media groups like Bertelsmann (4th) and Pearson (6th) are also quite big spenders, along with BT (5th). HP spent £55,000 to capture the 8th spot. Next was Amazon, spending £52,000. In tenth place was the first non-IT industry advertiser: BMW, weighing in at £48,000 - not a sum that would buy many of its motor cars. ®
A string of top US firms have filed hundreds of millions of dollars worth of millennium bug claims in court against their insurers in a move that threatens to expose the entire industry to billions of dollars in claims. Research published by boutique investment bank Fox-Pitt, Kelton says that three corporates have filed $598 million of contested claims against a range of insurers. The firms are copier giant Xerox, telco GTE and computer firm Unisys.

The claims, revealed in the London Evening Standard, come as a surprise. In the run up to the millennium, insurance firms had repeatedly claimed that the millennium bug was in principle uninsurable as it was a foreseeable event (although some of the consequences of a bug failure were covered). To aid their cause, insurers devised specific millennium bug related exemptions.

Other firms understood to have made claims include a subsidiary of investment bank JP Morgan, and Microsoft. There is no evidence yet of any bug claims being made in the UK. Official bug buster Action 2000 said it "was not aware of any claims being made in the UK". Fox-Pitt, Kelton, which is owned by re-insurer Swiss Re, estimates insurers could be exposed to as much as $77 billion in claims. ®
The trouble with IT service companies -- and one of the reasons we write so rarely about them -- is that you can rarely be sure what they actually do. (Except charge a lot of money.) Take Xteam Ltd, for example -- a British business assurance consultancy that Compaq announced it bought yesterday. What is business assurance, and why do you need consultants for it? We've read the press release and we've read it again, but we're still none the wiser -- except that we now know that Xteam is to be called Compaq Assure, and that its purchase ensures that Compaq takes the lead in something called Non Stop Computing. Compaq Assure delivers "business solution programme management. Its services will range from specification, testing and operations proving to platform configuration, availability, and IT operations. Compaq Assure will include specialist services for organisations, such as utilities, that are required to demonstrate compliance with regulatory requirements". Come on, guys, give us a break. Speak proper English. We've got too many stories to write to spend time on translation.
Software escrow, essentially a way of preserving source code by depositing it with a third party, is becoming increasingly fashionable as small developers seek to license software to large organisations. An op/ed in the current issue of Dr Dobb's Journal by Andrew Moore makes a case for software escrow agreements. Moore, of course, works for a firm that undertakes this kind of work.

There are several situations in which this could be a useful thing to do. Moore quotes a case where Amoco had a project that would cost $10 million to deploy and involve 15,000 people. The best software for the project was from a small developer, so Amoco went ahead, but with an escrow agreement in place that provided a means for the company to acquire the source code if the vendor was taken over or went out of business. A clause also gave Amoco the right to decide when a release condition had been reached, with arbitration or litigation safeguards. The tale ends with Amoco deciding it had to exercise its rights, and obtaining the source code in a couple of weeks.

Escrow agreements can be two-party or three-party. In the first case, multiple end-users can be covered by an agreement between the vendor and an independent party, with the end-users being beneficiaries in defined circumstances. Three-party agreements tend to be used where customisation of the escrow agreement is necessary.

Deposited copies of software versions can also be used to prove when particular code was distributed, either to protect against claims or to prove ownership of the intellectual property at a particular date in defence against a claim. Another use may be in the authentication of software if a software vendor needs to show ownership in merger or acquisition talks. Aside from issues about whether copyright is an appropriate way to protect software, anything would be better than the foolish and arbitrary decision of the US Copyright Office to require only the first and last 25 pages of a work to be put on file.
Maybe software escrow will remain as rare as fully-documented software, but in these litigious times, it could be quite a growth industry. ®
Leading UK building society turned bank Halifax is taking the lead in Internet banking. It is investing £100 million to establish a standalone Internet and phone-based banking operation, which will see it become the largest UK banking name to set up on the Net. Others from the UK financial sector to embrace the Internet as a direct banking channel include the Prudential with its Egg division and the Co-op Bank with its bizarrely named Smile -- Smile being the last thing most people feel like doing when their bank statement arrives. The Halifax has yet to release details of the new division, which is thought to have the codename Greenfield. The bank expects direct operations to generate around 10 per cent of group revenues within three years. ®

For more bang for your buck, visit Cash Register and get the latest Net finance news.
Six months on from its last major assault on Linux, Microsoft has returned to the fray with a "Linux Myths" page, here. The content isn't exactly original, but it makes it clear first, that Microsoft sees Linux as serious competition, and second, that it's targeting areas where it thinks it can score PR and marketing points against the upstart. The "myths" are as follows: Linux performs better than NT; Linux is more reliable than Windows; Linux is free; Linux is more secure than NT; and Linux can replace Windows on the desktop.

The benchmark battles earlier this year made it clear that performance was one of the issues Microsoft thought it could win on, so it's no surprise, with a couple of wins under its belt, that the company's pushing this one some more. The truth as regards performance really depends on where you're standing - from some angles Microsoft has a case, but considering how NT scales against Unix Risc boxes, this really is a case of the pot calling the kettle black.

The other categories are obviously new ones Microsoft has decided to work to establish, and in at least three cases there's a certain heroism involved in the bid. Security has been pretty much a marketing debit for Microsoft in recent months, so in trying to establish NT as clearly more secure than Linux, Microsoft is doing a bit of fire-fighting as part of the pitch. In reality Microsoft ought to have a fair bit going for it here, as NT was designed to be secure. But if security holes keep popping up in various Microsoft products, the public will have trouble swallowing the security message.

What's this about it being a myth that Linux is free, then? How do we make that out? Ah, Microsoft is playing the TCO card. Studies prove, MS tells us, that Windows NT has a 37 per cent lower cost of ownership than Unix. And Linux is really Unix. But other studies "prove" the reverse, so we can just call this a marketing spend war, and pass on to the desktop.
Here the pitch is lack of hardware support, plug and play, complexity, clunky GUIs and lack of applications, i.e. the usual stuff. Some of it at least has some historical justification, but as Linus Torvalds himself has made clear many times, the desktop is somewhere Linux is going, rather than a destination it's arrived at. The significance is that Microsoft clearly fears Linux as a future desktop competitor, and so is trying to establish its defence strategies. ®
Thanks to the reader who sent us Nortel Networks' Come Together TV ad schedule, and no thanks to the reader who asks if 30,000ft sex shame Mandy who got Randy on Brandy has lost her visiting rights to Nortel's Maidenhead office. No sniggering at the back!

Especially:

Thursday 7th October (Carlton) 20.00 - During We Can Work It Out
Sunday 17th October (LWT) 20.27 - During You've Been Framed
Thursday 21st October (Carlton) 00.00 - During Pulling Power

-----Original Message-----
From: Lejeune, Chantelle [MOP:6900:EXCH]
Sent: 06 October 1999 11:53
To: All Maidenhead, Harlow, New Southgate, London employees
Subject: "Come Together" TV advertisement

Audience: All Maidenhead, Harlow, New Southgate, London employees (bulletin to 4798 recipients)

Nortel Networks' "Come Together" TV advertisement will be running again in Europe in the U.K. on ITV Carlton (London region) station and LWT. The ad will appear at the following times (all spots are subject to movement and pre-emption):

Thursday 7th October (Carlton)
20.00 - During We Can Work It Out
00.27 - During Videotech

Saturday 9th October (LWT)
18.44 - During Rugby World Cup

Monday 11th October (Carlton)
22.40 - During Real Life

Tuesday 12th October (Carlton)
18.45 - During ITV Evening News
00.27 - During Carlton Sport

Thursday 14th October (Carlton)
13.45 - During Rugby World Cup
20.20 - During The Bill

Sunday 17th October (LWT)
20.27 - During You've Been Framed/Heartbeat

Tuesday 19th October (Carlton)
23.00 - During The Big Match

Wednesday 20th October (Carlton)
00.27 - During Rugby World Cup Highlights

Thursday 21st October (Carlton)
21.20 - During Taggart
00.00 - During Pulling Power

Saturday 23rd October (LWT)
15.45 - During Rugby World Cup

Tuesday 26th October (Carlton)
19.59 - During National TV Awards

Wednesday 27th October (Carlton)
22.40 - Tonight Trevor MacDonald
23.50 - The Big Match

Sunday 31st October (LWT)
20.50 - Heartbeat
Microsoft may be a lot closer to carving itself a share of the mobile phone market than it looks. Today British Telecom announced that the two companies would be starting a trial of wireless Internet services with a view to kicking off a live service early next year. The service is being pitched at corporate customers initially, and four - the BBC, Credit Suisse First Boston, KPMG and Nortel Networks - are taking part in the trials. These will "test the ability to send and receive email as well as access their Microsoft Exchange-based calendaring, address list, personalised web content and online information services from their mobile phones, over established radio interfaces." The system will clearly support Microsoft's strategy of getting BackOffice and its server products established as the standard in the cellular industry, but despite the support of BT, it's still a bit of a punt. The handset equipment for the trials is coming from two companies, Samsung and France's Sagem. Samsung has designs on the Nokia Communicator-type market, although the designs it's shown have tended to be somewhat trailing edge compared to the more svelte Nokia ones. Samsung also has the problem of a serious CDMA commitment (like Korea as a whole), but it does make GSM products too. Sagem is a GSM manufacturer, with deals with France Telecom and a recent one with Vodafone, but like Samsung it's by no means a top tier mobile phone outfit. We can't help noticing a recent innovation, a Sagem combo GSM phone and FM radio, but more seriously, in May the company signed a deal with Microsoft to develop CE-based GSM Internet access handsets. These were to be GSM 900, DCS 1800 and PCS 1900, so we can presume it's more or less ready to roll with worldphone type devices based on the three GSM variants. To be fair Sagem's mobile phone activities are also only the tip of the iceberg. 
The company produces all sorts of terminals, defence and automotive equipment, so we could maybe see the MS JV coming with added appliance capabilities. Presuming Sagem does the GSM end of the deal, that probably leaves Samsung with the fuzzy end of the lollipop, CDMA. Still, it's big in Korea.

Just successfully completing the trials and getting the service on the market, however, won't mean automatic success. As we've noted, neither of the manufacturers involved is top tier, so we can anticipate a curious parallel universe (i.e., the mobile phone market) where Microsoft's Windows everywhere strategy is turned against it. The client will be driving, and in this case the big-selling clients will be from Nokia, Ericsson and Motorola, all of whom support Symbian, not CE. And these companies have their own ideas about what goes at the server side.

Curiously, although Wirelessknowledge, the MS-Qualcomm joint venture company, is supposed to be working in pretty much the area covered by the current trials, it's not involved. Even more curiously, we note that Qualcomm last month launched a family of ARM-based semiconductor products for smart phones and communicators which supports both CE and Symbian. Redmond may take a dim view of that. ®
UK airline British Midland has set up an online auction for its empty flight seats - with bids starting at £45. Fast and furious it may be, but don't expect to leave for at least a week, because your tickets won't have arrived before then.

So far, British Midland has had eight three-hour auctions and plans to run one every two to three weeks, with 100 to 300 flights available each time. One lucky punter recently got two tickets to Budapest for £55 each. However, British Midland's hi-tech credentials were called into question when The Register checked out the site. Yes, you can bid in near real time, but if successful, things slow down considerably. There is a one-week minimum gap between bid and flight to allow the tickets to arrive by post. Quizzed as to why it hasn't adopted the popular e-ticket approach, the company told us it wanted to avoid causing congestion at ticket desks, with winning bidders getting in the way of full-price customers.

The time and date of the next auction will be emailed to registered users a week before it takes place, although we hope it's more efficient than the on-screen version, which told us the next one would be yesterday. Bidding works in much the same way as other online auctions, with a continually updated status report displaying the names of bidders. British Midland also offers an automated bid service which will continue raising your bid up to a maximum that you set.

The seat auction certainly looks like a good idea. Get on it before it becomes popular and grab a bargain. ®
Mesh Computers has won the Personal Computer World magazine award for service and reliability. It beat off rivals such as Dan and Evesham, and even pipped Dell to the post. Paul Kinsler, general manager of Mesh, was delighted at the result: "It is rewarding to see that our investment in customer service systems and personnel is genuinely delivering improved service in the eyes of our customers." The award, which takes into account quality, reliability, cost, customer service and technical support, was set up to signpost good companies to otherwise unaware consumers. It was decided by a poll of PCW's readers, with Mesh coming out top thanks to readers citing trustworthy advice and well-trained sales staff as major factors. ®
StorageTek has added Storm to its list of UK distributors as part of its plan to attract resellers to its Shared Virtual Array (SVA) high-end product. The company said its SVA storage device would be available to the UK channel from today.

It follows the break-up of StorageTek's deal with IBM in April, in which Big Blue was rebadging SVA as its own OEM disk array product, RVA (Ramac Virtual Array). The joint venture had around 6,000 customers worldwide. IBM is allowed to sell RVA until the end of this year. It will then promote a rival version using its own technology, codenamed Shark.

Since the split with IBM, StorageTek has been searching for a distributor for SVA, said StorageTek indirect sales manager, Peter Yarwood. It not only plans to compete against IBM, but will also battle with direct storage seller EMC. "We are actively looking for channel resellers," said Yarwood. "It is never easy to take on such a dominant partner as EMC. But we can recruit channel partners to sell StorageTek product that can't sell EMC product. The way to attack EMC is through the channel."

Storm will take on the entire range of StorageTek products, and hopes to add around 15 new resellers to StorageTek's existing 35 channel partners. The storage distributor currently also sells product from Sun, IBM and Compaq. It will continue to sell IBM's Shark product.

Bryn Sage, Storm sales director, said: "Storm see this as a StorageTek relationship which underpins our storage-only focus and allows us to offer the widest range of products and services for the corporate reseller and their customers. You could say it's the final piece in the jigsaw. We now have the most complete storage offering in the UK."

StorageTek sells its open system products in the UK through distributors Ideal Hardware and Transformation. Its major resellers are MTI, ICL and Computacenter. ®
Bouts of amnesia are not uncommon in the fast-moving world of IT, but when Compaq announced it was launching what it called its first thin-client products, there was a definite niggling in the back of our minds at The Register. Is this the same Compaq that commissioned a rain-forest's worth of research into why the network computing model was doomed to fail, and held a series of press conferences to tell the world's press how wrong NC advocates everywhere were? Apparently so.

The lengthy delay in producing its first thin-client has been put down to fears that it would cut into PC sales, but after sitting back and watching the success of its competitors, many of whom were scoring deals off the back of Compaq server deals to major accounts, Compaq has finally taken the plunge. Yesterday it announced the T1000, which supports NT and Citrix, and the T1500, which runs on Linux.

The company should have listened to Craig Barrett, Intel's CEO. Last September, he claimed it was only Oracle's head, Larry Ellison, who hadn't heard of the NC's death. And talking of U-turns, isn't it odd that just last month Oracle's COO Ray Lane spoke of his desire to drop NT and stated that the company had no plans to use Linux as a platform. Bandwagon, anyone? ®
Sun has unveiled details of its new Java-based MAJC microprocessor, which will be able to deal with several different channels at the same time - ideal for multimedia applications where sound, video and graphics are used simultaneously. MAJC first stuck its head over the parapet back in August.

It's not the first time that a chip has been developed with just multimedia in mind - Intel's MMX was a notable failure and was quickly superseded by the Pentium II - but Sun believes expanding Internet use and an increase in its capabilities will make the new 500MHz chip a business and consumer success. According to Sun, the MAJC 5200 (Microprocessor Architecture for Java Computing) chip can decode two different real-time video streams while also downloading an audio track. One expected use of this added capability is in video conferencing, where several participants would be visible at the same time.

The chip is expected to volume-ship in the second half of next year and the first MAJC PCs are likely to be produced by Sun itself. This double-channel processing derives from a new chip design, Sun said, which puts two VLIW (very long instruction word) processor cores onto a single piece of silicon. ®
Pay attention all you Trekkies out there, Time Computers has roped in Spock to star in its latest advertising campaign. Leonard Nimoy will be making his first appearance for Time tonight at 7:45, while Coronation Street's on. Using the tagline that its PCs are Time machines (and after a fashion, that's exactly what they are) the company has moved on from last year's ad campaign, which featured happy looking customers buying PCs from helpful looking sales staff. In a release sent to The Register's offices this afternoon, Time said: "Nimoy acknowledges that time travel is not yet possible, but explains the future is available through a 'Time Machine' PC." The advert, we are told, was shot on location in Los Angeles, at the Mount Wilson Observatory – as featured in the films Deep Impact and Armageddon. The campaign will run until Christmas and promotes the £35 million software give-away Time is running in conjunction with the Times newspaper. ®
Intel took the wraps off its Coppermine "next generation... with performance optimisations" Pentium III chip at Microprocessor Forum today. Chipzilla project architecture manager Jim Wilson would only say that Coppermine will become available "later this month" at 700MHz or greater, but as The Register has already reported, the chip is set to ship on 24 October at 733MHz. Wilson said the chip will be made available in standard desktop, Mobile and Xeon server/workstation versions simultaneously.

Coppermine will feature 256K of on-board L2 cache and, despite retaining the same P6 core that Intel has been using for the last five-odd years, operate around 25 per cent faster than the current, Deschutes Pentium III running on the same 133MHz front-side bus that Coppermine uses. According to Wilson, the improvement is due to the speed gains of bringing the L2 cache onto the die and upping the cache bandwidth, and to increasing the chip's buffers to accelerate the flow of data through the processor.

Coppermine's release was brought forward primarily to tackle AMD's 700MHz Athlon. Wilson claimed the 0.18 micron chip was also highly scalable, with the processor easily capable of increasing to 800MHz and beyond, allowing Intel to keep up with whatever AMD comes up with in the near future. ®
Motorola's pitch at Microprocessor Forum centred not on the recently roadmapped chip it's now calling the G5 but on a new, intermediate version of the PowerPC 7400 (aka G4) designed to help both Motorola and Apple play catch-up with the Wintel world's ever-higher clock speeds. Shortly after last autumn's Forum, which hosted the first public announcement of the G4, it emerged that Motorola's next chip would contain multiple G4 cores. However, that chip, codenamed V'Ger, has now become the G5, and the next processor Motorola will offer will instead be a new G4 part that's more about improving raw performance through higher clock speeds than about clever parallel processing techniques. PowerPC processors have traditionally been faster than IA-32 chips of the same clock speed, but such has been Intel's push for higher and higher clock speeds that the PowerPC has become very clearly outpaced. And with so many buyers focusing on clock speed as the be-all and end-all of PC performance, that's left Motorola and Apple with a real marketing problem on their hands. Intel's latest Pentium III, codenamed Coppermine, will ship later this month at 700MHz, allowing it to catch up with AMD's Athlon -- and in the same timeframe as a 500MHz G4. To date, the PowerPC's clock speed has been limited by the depth of its processing pipeline. The current G4 has a four-stage pipeline -- the path of an instruction through the chip -- and that's not enough to keep a 700MHz CPU fed with instructions and data. Motorola could simply up the frequency, but the processor would have so much idle time that the speed advantage would be lost. The second-generation G4 increases the pipeline to seven stages, and to counter the reduction in the number of instructions a processor can handle per cycle inherent in a longer pipeline, the company has increased the number of instruction processing units in the chip.
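The trade-off can be sketched with a toy model (our own back-of-envelope illustration, with made-up branch and misprediction rates -- none of these figures come from Motorola): a deeper pipeline buys clock speed, but every mispredicted branch forces a longer pipeline refill.

```python
# Crude throughput model (illustrative only): assume one instruction per
# cycle in the best case, plus a refill penalty of (stages - 1) cycles on
# each mispredicted branch. Branch and misprediction rates are invented.

def effective_mips(clock_mhz, pipeline_stages, branch_rate=0.15,
                   mispredict_rate=0.1):
    """Estimate millions of instructions per second for a given clock
    and pipeline depth, accounting only for branch-misprediction stalls."""
    penalty_per_instr = branch_rate * mispredict_rate * (pipeline_stages - 1)
    cpi = 1.0 + penalty_per_instr  # average cycles per instruction
    return clock_mhz / cpi

g4_now = effective_mips(500, 4)    # current G4: four stages at 500MHz
g4_next = effective_mips(700, 7)   # second-gen G4: seven stages at 700MHz
```

Even with these crude numbers, the seven-stage 700MHz part comes out well ahead of the four-stage 500MHz one despite the bigger per-branch penalty -- which is presumably the bet Motorola is making, with the extra execution units there to claw back per-cycle throughput.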
According to Naras Iyengar, one of the chip's design team leaders, the new G4 will feature two extra integer units, taking the total to four, in addition to the existing floating-point unit and four AltiVec units. The AltiVec system has been enhanced to handle two instructions simultaneously, each being automatically passed to the relevant unit according to the type of data involved. Following a clear industry trend, the new G4 brings the L2 cache into the chip itself to allow it to operate at the same speed as the core. The L1 caches remain the same size -- 32K instruction, 32K data -- while the on-die L2 will be 256K, connected to the L1 via a fast 256-bit wide datapath (up from the 7400's 64-bit path). Like AMD's K6-III chip, the new G4 will also support a third layer of cache between the CPU and the main memory bank, in backside configuration. It will support up to 2MB of this L3 cache. The architecture will support up to 64GB of main memory thanks to a new 36-bit addressing mode. Full details of production chips based on the new architecture will be revealed next year -- right now, Motorola's staying silent on when the second-generation G4 will ship, but given the example of the original G4, we would expect to see product this time next year. Whenever they ship, the chips will be fabbed on a 0.13 micron process and run at 1.5V for a typical power consumption of 10W. So how much faster will the 'G4-II' be? Iyengar puts the chip at 700MHz and up, with "significant headroom for where we want to go". Room for improvement will be essential -- with Intel at 700MHz now, Motorola is going to have to come up with something higher to compete with whatever Chipzilla has on offer when the new G4 finally ships -- particularly if it's going to satisfy Apple. ®
Motherboard supply will return to normal in December, a leading vendor forecasts. That's how long it will take for vendors to overcome the dislocations caused by the Taiwan earthquake and The Great BX Chipset Shortage of August and September. The industry had already been under supply constraints prior to last month's earthquake, notes Don Clegg, VP of sales and marketing at Californian mobo maker Tyan Computer. "In August and September, Intel under-estimated BX product demand," he says. "It was late with a number of product launches, and already a little late with Whitney product. That left no alternative but the BX chipset." But market confusion over Rambus and Camino has proved beneficial for VIA with its PC133 rival, according to Clegg. Intel "underestimated the task ahead of it in introducing Rambus into the market. The supply channel to the end user is not in place," he says. Intel also underestimated how easy it would be (for VIA) to introduce PC133. "With the exception of Rambus, PC133 has everything that Camino has got -- ATA/66, 4X AGP, faster memory, 133MHz FSB -- and it's more readily available." Tyan is sampling its S1854 VIA mobo with European OEMs, and anticipates the first round of product shipping here before the end of Q4. In the States, Micron has been shipping kit with 1854 boards since September. Tyan is also prepping the launch of its first Athlon mobo for Comdex Fall in November. The privately-held company subs out manufacture to OSE and Mitac, two Taiwanese firms. Production post-earthquake was affected -- but only lightly -- with OSE, at the southern tip of the island, emerging unscathed and the Mitac facility escaping structural damage. "We were more fortunate than most (motherboard makers)," Clegg says. Tyan currently pumps out 60,000 motherboards per month, and is targeting 80,000-100,000 units per month by mid-2000. ®
Red Hat is planning a dramatic shift in the focus of its business by -- effectively -- betting it on the Web. Speaking to The Register earlier today, company COO Tim Buckley said: "Our goal is to become the definitive site for Open Source software." Over three to five years, says Buckley, Red Hat intends its business to split as follows: 30 per cent product, 35 per cent services and 35 per cent portal (i.e., the Web bit). Those numbers may be more than a little understated: confronted with them, chief marketing officer Tom Butta chirruped "80 per cent portal" before being gently countermanded by Buckley. But it's clear that Red Hat is seriously keen on switching the company focus to the Web. You can get some perspective on this by noting that currently Red Hat's business is 80 per cent product. The company has, says Buckley, been running the plan past analysts over the past few months, and the reasons for the move are pretty convincing. The product itself is free, so there's an imperative to derive revenues from support and services (which is the open source model anyway). But as bandwidth availability increases -- "you can download in seconds rather than 36 hours" -- the product becomes even less of an issue, because you can get it instantly, and the mechanism whereby you get it, and what you do along the way, becomes far more important. So Red Hat is developing a combined e-commerce and content site intended to provide both that mechanism and an 'everything you want to know about open source' service. It's being worked on by Atomic Vision, the San Francisco Web design outfit Red Hat bought earlier this year, and the content side is being masterminded by an ex-Wired luminary. Funnily enough, the acquisition of "certain assets" of Atomic Vision was casually mentioned in Red Hat's Q2 report, and nowhere else. The report laconically says this has resulted in improvements to Red Hat's Web site, which would appear to have been something of an understatement.
Former Atomic Vision president Matt Butterick has been Red Hat's director of Internet business since the acquisition in May. There are obvious gotchas to the scheme. Red Hat professes not to see other Linux distributions as competition right now, because they're all ramping as fast as they can go anyway, but "the definitive site for open source" has to be seen not to favour one company, and that doesn't sit well with Buckley's view that the Red Hat brand will be important to the portal. On the other hand, you might say something similar about VA's association with a Linux site with similar ambitions. Still, finessing the difference between promoting Red Hat as the only true open source Linux (Buckley's words: "Everything we do is open source - that is not the case with our competitors") and Red Hat as the proprietor of the source of all open source knowledge and info will be tricky. The Register, which knows a thing or two about content, Web operations and their funding, suggested that Red Hat must surely have thought about spinning the portal operation off and taking it to an IPO, but Buckley insists that this hasn't been considered. ®
Web entrepreneurship is all about eyeballs, and the Florida Supreme Court seems to have struck the mother-lode by, er, frying them. The court's Web site has been bombed for most of today after it published pictures of Allen Lee Davis, who'd been 'prepared' in the State electric chair a little earlier. The site (don't click, you sicko) is still unobtainable as we write, thank goodness. Learning from the experience (or not), a court spokesman said it had generated lots of emails from all over the world favouring the death penalty. "Thanks for putting together the beginning, middle and end to the process," said one. The end of the process, as we understand it, showed Davis' corpse with blood dripping from his nose onto his white shirt. Nice. Future Florida Web stars scheduled for the chair later this month are said to be protesting. The court itself, noting the effect the pictures have had on its site, is proposing corrective action. It reckons it should upgrade its hardware and software. ®
Head of Sony's PlayStation operation Ken Kutaragi today pledged to drive the technology behind the company's Emotion Engine processor line -- the heart of the upcoming PlayStation 2 -- way beyond that of Intel's Pentium family within the next six years. And he hinted at the rapid evolution of future versions of the PlayStation and its chip, pushing its upgrade cycle into something more akin to that of the desktop PC. Right now, said Kutaragi, speaking at this autumn's Microprocessor Forum in San Jose, California, the latest Pentium III chip contains around 10 million transistors -- the same as the first Emotion Engine. Both are constructed using 0.18 micron processes, but as process technology pushes to 0.13 micron and beyond, Kutaragi claimed, the next two generations of Emotion Engine will eclipse Pentium's transistor count. Emotion Engine 2 is slated to appear in 2002 and will contain some 50 million transistors. Its successor, known by the ambiguous moniker Emotion Engine 3, will sport half a billion transistors shoehorned onto the die by a 0.1 micron process. Of course, the projected arrival of both chips suggests Sony is planning to update the PlayStation 2 rather more quickly than that machine will supplant the original PlayStation. The gap between PSX 1 and 2 is five years (1994 to 1999), but according to Kutaragi's presentation, the PlayStation 3 could appear in three years' time -- just two years after the debut of the PlayStation 2. The next generation of Sony's Linux-based PlayStation development system is due in 2002, and it too appears geared to tie in with the release of the second version of the Emotion Engine and the PlayStation 3. Sony is describing the system not only as a video game development system but also as the basis for creating real-time digital entertainment content.
Curiously, a second slide dates the PlayStation 2 launch to 1999, perhaps confirming the claim that Sony's original release date for the 128-bit console was the end of this year, and that the company was indeed forced to push the launch back three months to March 2000. As appealing as Sony's projections for the rapid evolution of the PlayStation are, getting a third generation out in 2002 seems optimistic. While the chip and hardware development programmes appear eminently achievable, it takes time to build up a user base for the current version, and that usually hinders the rapid release of new versions -- few people are willing to buy a platform that will not be supported in just a couple of years' time. Of course, Sony has an advantage here -- the PlayStation 2 will be the first games console to offer backwards compatibility. If future versions of the machine continue to offer that, users can ditch their hardware but retain their software investment, and that could easily persuade them to upgrade -- or at least to do so with less reluctance. Kutaragi reiterated Sony's broad plan to put the PlayStation 2 at the heart of digital home entertainment systems, ultimately as the medium through which digital music and movies are bought, downloaded and played, so if Sony can persuade buyers that the PlayStation 2 is more of a consumer electronics device than a computer, again that will make upgrades seem more attractive to the public. ®
US 3D graphics specialist GigaPixel this week issued a challenge to the likes of 3dfx, Nvidia, S3 and ATI -- the company claims its GP-1 chip, based on its Giga3D architecture, has rival products well and truly licked on both image quality and performance. What makes GP-1 interesting is its use of a tile-based rendering scheme instead of the traditional polygon approach used by every other mainstream graphics accelerator.
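For the curious, the core idea can be sketched like this (our own toy illustration, assuming nothing about GigaPixel's actual implementation): a tile-based renderer first sorts triangles into the screen tiles they touch, then renders one tile at a time out of fast on-chip memory, rather than streaming every triangle straight through to an external framebuffer.

```python
# Toy tile-binning pass (illustrative only, not GigaPixel's design).
# Each triangle's bounding box is mapped to the set of screen tiles it
# overlaps; a real chip would then rasterise each tile independently.

TILE = 32  # tile width/height in pixels, chosen arbitrarily for this sketch

def bin_triangles(triangles, width, height):
    """Map each triangle (list of (x, y) vertices) to the tiles it overlaps."""
    cols = (width + TILE - 1) // TILE
    rows = (height + TILE - 1) // TILE
    bins = {(c, r): [] for c in range(cols) for r in range(rows)}
    for tri in triangles:
        xs = [x for x, y in tri]
        ys = [y for x, y in tri]
        # Conservative test: use the triangle's axis-aligned bounding box.
        for c in range(max(0, min(xs) // TILE), min(cols - 1, max(xs) // TILE) + 1):
            for r in range(max(0, min(ys) // TILE), min(rows - 1, max(ys) // TILE) + 1):
                bins[(c, r)].append(tri)
    return bins

# A single triangle spanning two tiles on a 64x32 "screen":
tris = [[(10, 5), (40, 5), (25, 20)]]
binned = bin_triangles(tris, 64, 32)
```

The win is locality: all the depth and colour traffic for a tile stays on-chip until the tile is finished, which is where the image-quality and bandwidth claims for such architectures usually come from.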