Doomsday Weekend 2: Trevor Pott and the Domain of Fire

A series of disorderly events

Sysadmin blog: On Doomsday Weekend we completely replaced our Windows domain. It was a miserable experience. It's hard to describe how much work is involved in replacing a mature domain; certainly more than I had anticipated. It's even harder to explain the hell to non-sysadmins.

On the surface, the transition from the old network to the new looks like a series of straightforward and orderly events. The environment in question: five domain controllers in four cities. Add to that a Microsoft Exchange server, an Office Communications Server, a Windows Server Update Services (WSUS) server, a BlackBerry Enterprise Server, a TeamViewer Manager server, five Microsoft SQL Servers, three Pervasive SQL servers, a dozen print servers, a dozen file servers and more than two dozen additional application servers running remarkably resource-hungry applications.

On the Linux side there are three file servers, and we're working on two dozen web-facing servers: websites, DNS, VPN, email filtering and so on. On top of all of this sit 60 virtualised desktops, 13 Windows XP workstations, 16 Windows 7 workstations, 45 Wyse thin clients, eight BlackBerry handhelds and half a dozen assorted laptops. Supporting it all: 20 ESXi servers.

It's not a big network: by your standards it may be positively quaint. But every one of the systems, physical and virtual, was modified in some way on Doomsday Weekend. They were completely replaced, rebuilt, or disjoined from the old domain and then joined to the new one - accompanied by a massive review of their application loadout. Every user in the organisation was recreated with a new profile and a clean set of folder redirections. Of course, since the usernames changed, we had to pull the mail out of the old domain's Exchange server one PST at a time, and port the home-folder files from the old domain to the new one. There were 60 million files - 8TB - to move from one file server to the other, and all the services had to be kept fully operational during the move.
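
For the non-sysadmins: the only realistic way to keep services live while moving that much data is to pre-seed the new server well ahead of time, then copy only what has changed during the cutover window. In practice a proven tool such as robocopy does that job; the rough sketch below (the share names are invented for illustration, not taken from our environment) just shows the incremental-mirror idea.

    # Rough sketch of a "pre-seed, then sync the deltas" file mirror.
    # The UNC paths are hypothetical; a production move would use a proven
    # tool (robocopy, rsync) rather than a homegrown script like this.
    import os
    import shutil

    SRC = r"\\oldserver\home$"   # hypothetical source share
    DST = r"\\newserver\home$"   # hypothetical destination share

    def needs_copy(src, dst):
        # Copy only files that are missing, resized, or newer on the source.
        if not os.path.exists(dst):
            return True
        s, d = os.stat(src), os.stat(dst)
        return s.st_size != d.st_size or int(s.st_mtime) > int(d.st_mtime)

    def mirror(src_root, dst_root):
        copied = 0
        for dirpath, _dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, src_root)
            target_dir = os.path.join(dst_root, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in filenames:
                src = os.path.join(dirpath, name)
                dst = os.path.join(target_dir, name)
                if needs_copy(src, dst):
                    shutil.copy2(src, dst)  # keeps timestamps, so the next pass skips it
                    copied += 1
        return copied

    if __name__ == "__main__":
        # Run repeatedly while the old server is still live: each pass moves
        # less data, so the final pass during the cutover window is short.
        print(mirror(SRC, DST), "files copied this pass")

Even a no-op pass over 60 million files takes real time just to walk the tree, which is why the bulk copying has to start well before the cutover weekend.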

We started this misadventure with two sysadmins, a bench tech and two and a half days to pull it all off. Had everything gone to plan, this probably would have been doable in the time allotted, with a little breathing room to spare.

Of course, it didn't go anywhere near to plan. First there was a series of ridiculously poorly timed hardware failures. An ESXi box bought it at one of the remote sites; it didn't fully lock up - most of the VMs hosted on it were running just fine - but it froze while the new domain controller for that site was down for a reboot. This had the effect of allowing us access to all of the VMs hosted on that server - except the one we really needed - while being unable to reach the host via the vSphere Client.

Xerox didn't deliver our new WorkCentre printers, so we didn't have the units to test the new drivers against. We were installing blind, crossing our fingers and hoping. Monday morning we were talking people at the remote sites into giving us MAC addresses for DHCP reservations for the new printers, and working out bugs in the driver configuration, such as wretched default banner sheets.

We also have large, industry-specific pieces of manufacturing equipment powered by Windows 2000 Pro workstations, tied together with some interesting widgetry - the most important part of which is a timed shutdown and startup. We thought we had disabled them. Sure enough, late Friday night, they all turned themselves off - leaving us scrambling on Monday morning, in a remote location, to convince those systems to join the new network before staff arrived.

I managed 82 hours uptime before I was forced to sleep.

Those were the errors beyond our control that made life interesting. Next: the problems we really should have avoided, but didn't.
