
Doomsday Weekend 2: Trevor Pott and the Domain of Fire

A series of disorderly events

Sysadmin Blog On Doomsday Weekend we completely replaced our Windows domain. It was a miserable experience. It’s hard to describe how much work is involved in replacing a mature domain; certainly more than I had anticipated. It's even harder to explain the hell to non-sysadmins.

On the surface, the transition from the old network to the new looks like a series of straightforward and orderly events. The environment in question: there are five domain controllers in four cities. Additionally there are a Microsoft Exchange server, an Office Communications Server, a Windows Server Update Services (WSUS) server, a BlackBerry Enterprise Server, a TeamViewer Manager server, five Microsoft SQL Servers, three Pervasive SQL servers, a dozen print servers, a dozen file servers and more than two dozen additional application servers running remarkably resource-hungry applications.

On the Linux side there are three file servers, and we’re working on two dozen web-facing servers: web sites, DNS, VPN, email filtering and so on. On top of all of this sit 60 virtualised desktops, 13 Windows XP workstations, 16 Windows 7 workstations, 45 Wyse clients, eight BlackBerry handhelds and half a dozen assorted laptops. Supporting it all, 20 ESXi servers.

It’s not a big network: by your standards, it may be positively quaint. But every one of the systems, physical and virtual, was modified in some way on Doomsday weekend. They were completely replaced, rebuilt, or disjoined from the old domain and then joined to the new one - accompanied by a massive review of their application loadout. Every user in the organisation was recreated with a new profile and a clean set of folder redirections. Of course, since the usernames changed, we had to pull the mail out of the old domain’s Exchange server one PST at a time, and port the home-folder files from the old domain to the new one. There were 60 million files - 8TB - to move from one file server to the other, and all the services had to be kept fully operational during the move.
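Keeping the shares in service through a move that size generally means seeding the bulk copy early, then re-running delta passes until only a small remainder has to move during the actual cut-over window. The sketch below is purely illustrative of that seed-and-resync idea: the share paths are invented, and a real pass over 60 million files is a job for robocopy /MIR or rsync, not a hand-rolled Python loop.

import os
import shutil

SOURCE = r"\\oldserver\homefolders"   # hypothetical old-domain share
DEST = r"\\newserver\homefolders"     # hypothetical new-domain share

def sync_tree(src: str, dst: str) -> int:
    """Copy files that are missing or newer on the source; return the count copied."""
    copied = 0
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            src_file = os.path.join(root, name)
            dst_file = os.path.join(target_dir, name)
            # Copy only if the destination is absent or older than the source,
            # so each successive pass moves an ever-smaller delta.
            if (not os.path.exists(dst_file)
                    or os.path.getmtime(src_file) > os.path.getmtime(dst_file)):
                shutil.copy2(src_file, dst_file)
                copied += 1
    return copied

if __name__ == "__main__":
    # Pass one is the big seed copy, run while the old shares are still live.
    # Later passes pick up whatever users changed in the meantime; the final
    # pass runs during the short outage window before cut-over.
    print(f"Copied {sync_tree(SOURCE, DEST)} files this pass")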

We started this misadventure with two sysadmins, a bench tech and two and a half days to pull it all off. Had everything gone to plan, this probably would have been doable in the time allotted, with a little breathing room to spare.

Of course, nothing went anywhere near to plan. First there was a series of ridiculously poorly timed hardware failures. An ESXi box bought it at one of the remote sites; it didn’t fully lock up (most of the VMs hosted on it were running just fine), but it did freeze while the new domain controller for that site was down for a reboot. This left us with access to all of the VMs hosted on that server - except the one we really needed - and no way to reach the host itself via the vSphere Client.

Xerox didn't deliver our new WorkCentre printers in time, so we didn’t have the units to test the new drivers against. We were installing blind, crossing our fingers and hoping. Monday morning we were talking people at the remote sites into reading us MAC addresses for DHCP reservations for the new printers, and working out bugs in the driver configuration, such as wretched default banner sheets.

We also have large industry-specific pieces of manufacturing equipment powered by Windows 2000 Pro workstations, tied together with some interesting widgetry - the most important part of which is a timed shutdown and startup. We thought we had disabled the timers. Sure enough, late Friday night, the machines all turned themselves off - leaving us scrambling on Monday morning, at a remote location, to convince them to join the new network before staff arrived.

I managed 82 hours uptime before I was forced to sleep.

Those were the failures beyond our control that made life interesting. Next: the problems we really should have avoided, but didn’t.
