Windows Server 2012: Smarter, stronger, frustrating
Perfect upgrade for punters with a passion for the obscure
Review Microsoft has released Windows Server 2012, based on the same core code as Windows 8. Yes, it has the same Start screen in place of the Start menu, but that is of little importance, particularly since Microsoft is pushing the idea of installing the Server Core edition – which has no graphical user interface (GUI). If you do install a GUI, Server 2012 even boots into the desktop by default.
This is a big release. The server team had no need to reimagine Windows, giving them a clear run to focus on product features, not least the attempt to catch up with VMware in the virtualisation stakes with a greatly updated Hyper-V. The list of what’s new is long and tedious, but what is most significant is the way Windows Server is evolving away from its origins as a server variant of a monolithic GUI operating system.
Two key features that underpin Server 2012 are modularity and automation. Neither is yet perfect, but this release is where they start to look convincing. Evidence of modularity is that you can now move between Server Core, which has only a command prompt, and the full GUI edition by adding and removing features, whereas before you would have to reinstall.
There are still some odd dependencies. If you add the Application Server role to a Core installation, it requires the GUI management tools to be installed, for example. Still, improved modularity is important since it means installing only what you need, which is good for both performance and security.
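As a rough sketch of how that flexibility looks in practice – assuming the standard Server-Gui-Shell and Server-Gui-Mgmt-Infra feature names, so treat this as illustrative rather than gospel – converting between the full GUI and Server Core is a couple of PowerShell commands rather than a reinstall:

# Strip the graphical shell and management infrastructure to drop back to Server Core
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

# Put the full GUI back later by reinstalling the same features
Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart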
Progress in automation is even more noticeable. It may be significant that the lead architect for Windows Server is Jeffrey Snover, who is also the inventor of PowerShell, Microsoft’s scripting platform for Windows administration based on the .NET Framework. PowerShell has hundreds of new Cmdlets (installable PowerShell commands), is designed to run remotely, and has a new workflow engine. There is now a full set of Cmdlets for Hyper-V.
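To give a flavour of the Hyper-V coverage – the VM name, disk path and switch name below are invented for illustration, not prescribed by Microsoft – creating and starting a virtual machine is a one-liner apiece:

# Create a new VM with a fresh 40GB virtual disk, attached to an existing virtual switch
New-VM -Name "TestVM" -MemoryStartupBytes 1GB -NewVHDPath "D:\VMs\TestVM.vhdx" -NewVHDSizeBytes 40GB -SwitchName "External"

# Power it on
Start-VM -Name "TestVM"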
PowerShell History shows you scripts generated by actions in the GUI
The new Server Manager is in many cases a wrapper for PowerShell, something that will be familiar to Exchange 2010 administrators. Better still, the Active Directory Administrative Center has a PowerShell History pane that shows you the script generated by your actions in the GUI, so that you can copy and modify for future actions. The PowerShell editor, the Integrated Scripting Environment, now supports collapsible regions and IntelliSense code completion.
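For example, creating a user through the GUI surfaces something along these lines in the History pane, ready to copy and adapt – the account details here are made up for illustration, not lifted from the tool:

# The sort of command the Active Directory Administrative Center records when you add a user
New-ADUser -Name "Jane Doe" -SamAccountName "jdoe" -Path "OU=Staff,DC=example,DC=local" -Enabled $false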
Server Manager itself is completely redone in this release. It is now a tool for managing multiple servers, and you can view your server infrastructure by role as well as by server. The idea of the Metro-inspired dashboard is that green means good, while red demands attention. From Server Manager you can easily view the event logs and performance data for each server, as well as reach all the management and configuration tools: adding and removing features, services, Device Manager, storage management, a PowerShell prompt and, if you need it, Remote Desktop.
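The same multi-server reach is available from the command line. As a hedged sketch – the server names here are placeholders – you can pull recent errors or installed features from several machines at once:

# Grab the last five System log errors from two servers in one go
Invoke-Command -ComputerName SRV01, SRV02 -ScriptBlock { Get-EventLog -LogName System -EntryType Error -Newest 5 }

# List which roles and features are installed on a remote server
Get-WindowsFeature -ComputerName SRV01 | Where-Object { $_.Installed }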
Green is good, red means trouble: the Server Manager
This is great stuff, but in practice old Windows enemies can still haunt the administration experience. I set up three instances of Server 2012 in a domain for testing: one physical and two virtual. One of these servers gives an error when added to Server Manager, filling it with red blotches. The error is “Cannot get event data,” and I wasted some time trying to find the reason for the problem. It is related to a DCOM (Distributed COM) error 2147944122. The detail of this is supremely unimportant; the point is that Windows administrators spend too much time investigating obscurities like this when they would rather be using lovely GUI management tools.
That said, most of the operations I tried with the RTM (Release to Manufacturing) build of Server 2012 have worked exactly as advertised.
Storage Spaces is a new way to manage hard drives, aimed at smaller organisations that lack the luxury of a Storage Area Network (SAN). The feature lets you define a storage pool across several physical drives, and then create virtual disks within the pool. A virtual disk can be resilient, supporting either mirroring – where data is duplicated across two or more drives – or parity striping, which uses space more efficiently but requires three or more drives.
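In PowerShell terms, a pool plus a mirrored virtual disk can be carved out roughly as follows – the friendly names are placeholders and this is a sketch rather than a production recipe:

# Find the physical disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create a pool across them, identified by the Storage Spaces subsystem
$subsystem = Get-StorageSubSystem | Where-Object { $_.FriendlyName -like "*Storage Spaces*" }
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName $subsystem.FriendlyName -PhysicalDisks $disks

# Carve a mirrored virtual disk out of the pool, using all available space
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data" -ResiliencySettingName Mirror -UseMaximumSize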
"Only really old UNIX legacy based things like Linux are monolithic these days."
You mean those old unix legacy things that MS has been desperately playing catch-up with ever since it released NT? When - *gasp* - Windows went 32-bit protected mode & multi-user. Not simultaneous multi-user, mind, that had to wait. Along with proper remote login. And networked graphics. And then after years of being told the GUI is all you need they finally catch the clue train and come up with PowerShell. An oxymoron when you compare it with the unix shells, but better than nothing. Now we have TA DAA! - Server Core! - Wow! An OS that can be run without a GUI - I assume - remotely. Now where have I seen that before... Naturally it won't be via ssh - that would be too easy and standard for MS. No doubt it will be some overcomplicated roll-your-own solution, probably involving some GUI-for-idiots on the client.
Monolithic kernel isn't necessarily bad
Just as much as a microkernel isn't necessarily good. Even dear "AST" would admit this today, despite having told Mr Torvalds that he wouldn't receive many marks for a monolithic kernel submitted as an assignment. :-)
They're different ways of tackling the same problem. There are advantages in both. Performance is one disadvantage of the microkernel model it took Microsoft quite some time to get their "layered" kernel right. The earlier versions of Windows NT weren't exactly high performers, Windows NT 4 lumbered along a bit... Windows 2000 was better. Then they started piling on the rubbish in Windows XP and Vista. I observe some of this rubbish is noticable by its absence in Windows 8.
Portability is one of the strengths of a microkernel. It's therefore ironic that Windows NT, being largely microkernel-based, runs on so few platforms, compared to Linux which is, as you rightly point out, monolithic. Windows NT did run on more, but I suppose they decided it wasn't worth pursuing the others. Does make you wonder what it'd look like had they decided to keep an ARM port of Windows NT going, though.
Where Windows is considered "monolithic" is more to do with the fact that the userland and front end seem to be joined at the hip with the back-end kernel. I can take an Ubuntu Linux desktop and completely strip away the GUI environment, leaving only the command line. Indeed, I did this very act today.
Try that with Windows XP, or 7, or Server 2008. No dice, the GUI and kernel are inseparably linked. Same with MacOS X, although MacOS X without its GUI is essentially Darwin, so probably doable, just not obvious. Windows has been that way since NT was first released. Consumer Windows has been like this since Windows 95.
That Microsoft are recognising this as a limitation of their platform, however, and are now taking steps to remedy it, is a good thing. Now clean up the POSIX layer a bit, and we might even have a decent VMS-like Unix clone that will make running applications designed for Linux a lot easier.
NT started off as a nice microkernel, but that all got tossed out of the pram when they integrated the GUI stuff into the kernel. It has taken them eons to fix that one. Look at QNX for a microkernel and OS that would do Tanenbaum proud.