Dumping gear in the public cloud: It's about ease of use, stupid
Look at the numbers - co-location might work out cheaper
How I look at the world
I need a point of reference to design a solution, so I am going with the numbers my heretofore unnamed debate partner supplied. They have about 100 VMs and 50TB of data across 8 nodes with 256GB of active RAM in their production environment. Let's say that I am intent on reproducing the full capacity offsite.
Racks, racks everywhere!
I can build a Supermicro 6047R-E1R36N server with 384GB of RAM, 136TB of raw storage and a pair of 8 core CPUs for less than $40k. I typically include a pair of Intel S3700 SSDs and an LSI controller for that price. This gives me lots of horsepower to play with.
Supermicro servers support "advanced" or "mirrored" ECC memory, allowing me to functionally RAID 1 the RAM. This gives me a storage server with 192GB of usable RAM that is absolutely rock solid. I get roughly 100TB of RAID 6 or 63TB of RAID 10 storage out of that configuration. Throw the Intel SSDs at the LSI controller and you have "hybrid" storage that caches writes to the SSDs whenever more IOPS are needed than the underlying disks can deliver.
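The raw-versus-usable arithmetic can be sketched with the standard RAID formulas. A minimal sketch, assuming 4TB drives with two of the 36 bays taken by the SSDs (my assumptions, not figures from the article); the article's slightly lower usable numbers presumably account for hot spares and formatting overhead:

```python
# Usable-capacity arithmetic for a 36-bay storage server.
# Assumed: 4TB spinning disks, two bays used by the caching SSDs.
drives = 34
drive_tb = 4.0

# RAID 6 loses two drives' worth of capacity to parity.
raid6_usable = (drives - 2) * drive_tb
# RAID 10 mirrors every drive, halving usable capacity.
raid10_usable = (drives // 2) * drive_tb

print(f"RAID 6 : ~{raid6_usable:.0f}TB usable")   # ~128TB before spares/overhead
print(f"RAID 10: ~{raid10_usable:.0f}TB usable")  # ~68TB before spares/overhead
```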
If you need more compute capacity you can get it cheaply using Supermicro's Twin series servers. A 2U Twin will get you 4 nodes in 2U; a FatTwin will get you 8 nodes in 4U. Diskless, and with about 256GB of RAM each, I get these systems for about $5,000 per node.
We need 8 nodes, so that will run us $40k. If you don't need fancy networking capabilities you can buy a Netgear 24-port 10GbE switch for about $5000, or pick up a Supermicro SSE-X24S for about $7500 if you need a few more nerd knobs. Buy two for redundancy.
If you're hyper-paranoid about your data storage and you absolutely require that your DR site be protected by more than just RAID, you can duplicate the storage server and toss on $15k for Starwind's HA SAN software.
To recap, that's 8 nodes of compute with 256GB of RAM each, running on a virtually bulletproof 100TB usable RAID 6 + RAIN 1 storage setup, all lashed together with 10GbE, for around $150k.
It's only a little over $100k if you're cool with the storage using only RAID (instead of RAID + RAIN) for redundancy, and $40k if you just need a great big box of offsite storage and don't need the compute capacity.
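The tallies above can be reproduced with back-of-the-envelope arithmetic. The prices are the article's ballpark figures; the grouping into line items is mine, and the "little over $100k" figure presumably reflects the pricier switches and incidentals:

```python
# Rough bill-of-materials tally for the three DR configurations described.
storage_server = 40_000        # Supermicro 6047R-E1R36N, fully built out
compute_nodes = 8 * 5_000      # Twin/FatTwin nodes, diskless, 256GB RAM each
switches = 2 * 5_000           # two Netgear 24-port 10GbE switches
starwind_ha = 15_000           # StarWind HA SAN software

# Full RAID + RAIN build duplicates the storage server.
raid_plus_rain = storage_server * 2 + starwind_ha + compute_nodes + switches
raid_only = storage_server + compute_nodes + switches
storage_only = storage_server

print(f"RAID + RAIN DR site: ${raid_plus_rain:,}")  # $145,000 - "around $150k"
print(f"RAID-only DR site:   ${raid_only:,}")       # $90,000
print(f"Storage only:        ${storage_only:,}")    # $40,000
```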
None of that covers the cost of operating systems on the units, and the reason for that is both simple and complex. It is simple in that disaster recovery licensing is miserable whether you use a cloud vendor or handle it yourself.
Each vendor has a different take. Some allow you a "free pass" for the instances you keep in the disaster recovery site, so long as those instances are for disaster recovery purposes only. Some vendors insist that you have a full suite of licences for both cases.
Microsoft licences purchased with Software Assurance, for example, provide rights for "cold" backups in such a scenario. They do not, however, cover live failover licences. Without Software Assurance, Microsoft's licence reassignment rules mean that once your workloads fail over, the licences must remain assigned to that DR environment for 90 days before they can move back.
The licences for the underlying infrastructure – the file servers, the hypervisor, etc – are equally complex. You can do the whole thing for free with KVM/OpenStack. There's also the possibility that your particular DR software and methodology can fail over some or all of the configuration and management software – and the licences with it – which may (or may not) reduce your licensing burden.
When you use the cloud for DR, all of this remains a problem; the difference lies in which licences you must pay for yourself and which are incorporated into the provider's fees. Your total corporate licensing spend also determines your volume licensing position with various vendors, which has an impact of its own.