Spotting the ‘mainframe killer’ spin in Windows DataCenter

Semioticians, take note...

In contrast to last year's WinHEC conference, which bulged with details of Microsoft's DataCenter version of Windows 2000, Microsoft nowadays seems relieved to get through a tricky showpiece conference without too much attention being drawn to its erstwhile Unix-killer. And it has followed this low-key strategy to the letter with its muted presence at this week's N+I show, too.

Readers with long memories will remember how the all-singing, all-dancing, all-clustering DataCenter edition would appear "within 90 days" of the main Windows 2000 release. But readers with even longer memories – stretching back to, say, 1996 – will recall the Wolfpack initiative, in which Compaq, Intel, Microsoft and chums agreed to produce high-availability failover clusters by 1998. This architecture, a shared-nothing design which was supposed to be easier to implement and easier to scale than the shared-everything DEC VAXcluster approach, would give us eight-way failover by 1998. And hey, we've got the PowerPoint presentation to prove it.

Alas, 1998 came and went, and when Wolfpack appeared, it just about managed to fail over SQL Server across two nodes given around 40 minutes, and Exchange not at all. Which is hardly the stuff of mission-critical computing. Since then Microsoft has given us all manner of technology demonstrations featuring cluster-like features – load balancing and node-to-node failover – on all kinds of non-mission-critical applications... like serving up Web pages.

But Web clustering isn't quite data centre clustering, however you spell it, as any DEC or Tandem reader will confirm. Go steady on Microsoft, though – it's one of the industry's big lies, and one happily shared by almost any Linux "cluster" offering on the market at the moment too. Clustering as DEC defined it, and as high-availability clusters such as IBM's HACMP define it too, guarantees some kinda transactional integrity. In other words, transactions don't get lost.
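For readers who want the "transactions don't get lost" property pinned down, here's a minimal sketch of what a shared-nothing failover has to provide: the surviving node replays the failed node's write-ahead log, so anything committed before the crash survives it. All names here are illustrative – this is the textbook idea, not any real VAXcluster, HACMP or Wolfpack API.

```python
# Sketch: failover that preserves transactional integrity via a
# write-ahead log (WAL). Names are illustrative, not a real cluster API.

class Node:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.wal = []     # write-ahead log: appended to before applying
        self.state = {}   # committed key -> value

    def commit(self, key, value):
        # Log first, then apply: a crash after logging is recoverable.
        self.wal.append((key, value))
        self.state[key] = value

    def crash(self):
        self.alive = False


def failover(failed, survivor):
    # Replay the failed node's WAL on the survivor, in order.
    # No committed transaction is lost -- the guarantee that
    # web-page "clustering" conspicuously does not make.
    for key, value in failed.wal:
        survivor.commit(key, value)


primary, standby = Node("A"), Node("B")
primary.commit("acct:42", 100)
primary.commit("acct:42", 95)   # e.g. a debit of 5
primary.crash()
failover(primary, standby)
assert standby.state["acct:42"] == 95   # the debit survived the crash
```

The point of the log-before-apply ordering is precisely what separates this from bouncing HTTP requests between boxes: the committed state is durable across a node death, not merely the service's reachability.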
Conveniently, today's fault-tolerant Web clustering packages quite shiftily redefine a transaction as any Web page request, rather than a valuable Web page request which might be carrying a monetary transaction. Prod them to guarantee the latter, and the 'guarantees' start to get a bit equivocal.

And this ambiguity provided a get-out clause, of sorts, for Redmond. Microsoft rapidly reoriented its server strategy around what IDC calls functional servers, performing not-quite mission-critical jobs such as file sharing, print sharing and hang-on-call-BOFH email. Redmond has also worked pretty hard, and done a pretty good job we think – given the slow progress in providing commodity hardware – of inching up the SMP scalability path. Although on that count, it's still knocking at the back door of the kind of really nice, linear SMP scalability curves that HP, DEC and SGI have been producing for some years.

So here's what to expect. For performance figures, look out for a few big showpiece TPC-C benchmarks. TPC-C is no longer considered a reliable benchmark by the Transaction Processing Performance Council itself, so expect a lot of use and misuse RSN. Microsoft has already demonstrated COM+ and print queues miraculously "failing over" recently, so we can confidently predict more of the same. This doesn't exactly defy the laws of physics in any sense, but it sure gives good demo.

But for further enlightenment look no further than the great Jim Gray's pre-Xmas paper for a redefined taxonomy of Microsoft clusters. Uncle Jim, who shares a bathroom with none other than the Department of Justice at his office in Microsoft's Bay Area Research Centre, should need no introduction: he helped create the first RDBMS at IBM and shared-nothing clustering at Tandem, devised the TPC benchmarks, and has been proselytising NT since he joined Microsoft in 1995. Tasked with making some kinda purse out of the Wolfpack sow's arse, Jim's come up with a new taxonomy of clustering, here or here.
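The page-request-versus-money distinction above is really about idempotency, and it can be shown in a few lines. A minimal sketch, with hypothetical names throughout: a retried GET after failover is harmless, while a retried payment double-charges unless the server deduplicates on a transaction id.

```python
# Sketch: why "a web page request" is a cheaper sort of transaction
# than a monetary one. Hypothetical names; no real framework implied.

balance = {"alice": 100}
seen_txns = set()   # transaction ids we have already applied


def serve_page(path):
    # Idempotent: a client retrying this after a failover changes nothing.
    return f"<html>{path}</html>"


def charge_naive(customer, amount):
    # NOT idempotent: if the client retries after a failover because it
    # never saw the reply, the customer is charged twice.
    balance[customer] -= amount


def charge_once(customer, amount, txn_id):
    # Exactly-once *effect* via deduplication on a transaction id --
    # the kind of guarantee real data-centre clustering is about.
    if txn_id in seen_txns:
        return          # duplicate delivery after failover: ignore
    seen_txns.add(txn_id)
    balance[customer] -= amount


# A failover makes the client resend both operations:
serve_page("/")                      # harmless either time
serve_page("/")
charge_once("alice", 20, "txn-1")
charge_once("alice", 20, "txn-1")    # duplicate ignored
assert balance["alice"] == 80
```

Guaranteeing `serve_page` is trivial; guaranteeing `charge_once` across a node failure means durable, replicated state – which is why redefining "transaction" as "page request" makes the failover demos look so much better than they are.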
Still, Microsoft scored a minor victory of sorts yesterday, albeit in PR terms, in that it managed to get Windows 2000 DataCenter Edition described as a 'mainframe killer'. It wasn't too long ago that Microsoft was presenting NT as a Unix killer – only without the scalability, reliability or security. Now it's a mainframe – only without the logical partitioning, I/O, or fault-tolerance. But the spinning's working – some outfits who should know better are already parroting this nonsense. ®
