
Data trashed? When RPO 0 isn't enough

Cast-iron storage policies

World Backup Day came and went – did you notice? It seems the only thing we've learned is that everyone wants Recovery Point Objectives (RPOs) of 0. Unfortunately, aggressive RPO targets are hard: they constrain the design of real-world environments, and are sometimes simply not achievable.

An RPO of 0 means "no data loss". With RPO 0, if a tornado touches down and obliterates your data center, not one block of data that a production workload was writing is lost. This seemingly miraculous level of data protection is possible, within certain limits.

RPO 0 is expensive. So expensive, in fact, that RPO 0 will place significant constraints on how you can design your data center. Even more frustrating is that all the modern technologies at our disposal make the design process even more ambiguous, not less.

The speed of light is stupid

The short version of why RPO 0 is expensive is that every single write you make in your primary data center also has to be made in your secondary data center. In a perfect world, whatever you're using for storage wouldn't even acknowledge the write to the operating system (and hence to the application) until the write had been committed to both devices.

This seems pretty straightforward from a design standpoint. If you have two sites then you string some fibre between them and set up a storage cluster between devices on both sites. The cluster won't acknowledge writes unless those writes have been made to all members – or to at least N-1 members, if one has been ejected for some reason. (In that case you'd probably want at least three devices.) A rough sketch of that write path follows.
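Here is a minimal sketch of the idea, assuming a hypothetical list of cluster `members` each exposing a `write()` call; it is not any vendor's API, just the logic of "don't acknowledge until enough copies exist":

```python
# Sketch only: synchronous mirroring ties the acknowledgement to the
# slowest member of the cluster. `members` and write() are assumptions.

def replicated_write(members, block, required_acks=None):
    """Acknowledge only once all members (or N-1, if one was ejected) confirm."""
    required = required_acks if required_acks is not None else len(members)
    acks = 0
    for member in members:
        try:
            member.write(block)   # blocks until that device has the data
            acks += 1
        except IOError:
            continue              # a failed/ejected member doesn't count
    if acks < required:
        raise IOError("not durable on enough members - do not acknowledge")
    return "ack"                  # only now does the application see success
```

The application's write latency is therefore never better than the round trip to the most distant member, which is exactly where physics starts to bite.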

The speed of light tends to get in the way here. The further away you physically place members of the storage cluster, the more latency you introduce into your write operations. Typically, this is where you hear the term "metro area clustering" thrown around. If you have two data centers in the same city you might be able to achieve RPO 0 without the latency becoming too much of an issue.
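The arithmetic is brutal. As a back-of-the-envelope check – assuming light travels at roughly 200,000 km/s in fibre and a hypothetical 50 km separation between sites:

```python
# Back-of-the-envelope latency for one synchronous write across a metro link.
# The 50 km distance is an assumption for illustration.

speed_in_fibre_km_per_ms = 200.0                     # ~200 km per millisecond
distance_km = 50.0                                   # primary to secondary site
round_trip_ms = 2 * distance_km / speed_in_fibre_km_per_ms

print(f"Best-case added latency per write: {round_trip_ms:.2f} ms")
# ~0.5 ms before switches, routers, or the array itself add anything.
# Tolerable for spinning rust; ruinous for media quoting microseconds.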

That has its own limits. You're not going to RPO 0 a ridiculous all-3D XPoint array built for low-microsecond latencies and locker-room genital comparisons. I have no idea what you could possibly need such a monstrosity for in the first place, but you cannae change the laws of physics, captain.

You can go that fast when you're snuggled up right next to something, but you'll run into the speed of light just going across a room with those things, let alone a city. The closest you're going to get with those is pseudo RPO 0.

This is all, of course, assuming that you can afford the fibre connection. You don't normally need to own your own second site – there are colocation providers practically everywhere these days – but you do need that big fat pipe to pull this sort of thing off. The fibre is the entry fee; if you can't afford it, you're into pseudo RPO 0 territory as well.

Pseudo RPO 0

Pseudo RPO 0 is where you have a really fast widget installed in the local data center that spools data to a secondary location. The term marketers hate is "cloud storage gateway". Sometimes that label is inaccurate – you're buffering writes to a secondary site rather than to a cloud – but increasingly it really is about writing things to the public cloud.

The gateway serves two purposes. The first is to acknowledge writes right away, so as to reduce latency in the storage system. The second is to smooth out write bursts: if you hit it with a whole bunch of writes, it simply absorbs them – today, most likely into a flash tier – and then spools them out over the WAN link to the remote site. In this manner it flattens out the traffic on the link, and you can more or less get away with sizing for average load rather than the peaks.
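A toy sketch of that behaviour, assuming an in-memory queue standing in for the gateway's flash tier and a hypothetical `remote_send` callable for the WAN hop (neither is any product's actual API):

```python
# Sketch: acknowledge locally, drain to the remote site at a steady rate.
import queue
import threading
import time

buffer = queue.Queue()            # stands in for the gateway's flash tier

def local_write(block):
    buffer.put(block)             # absorbed locally...
    return "ack"                  # ...and acknowledged immediately

def drain(remote_send, link_writes_per_sec=100):
    """Spool buffered writes over the WAN, sized for average load, not peaks."""
    while True:
        block = buffer.get()      # wait for something to ship
        remote_send(block)        # data doesn't truly exist until this succeeds
        time.sleep(1.0 / link_writes_per_sec)

threading.Thread(target=drain, args=(print,), daemon=True).start()

for i in range(5):                # a small burst gets absorbed instantly
    local_write(f"block-{i}")
time.sleep(0.1)                   # give the drain thread a moment to spool
```

The catch, as the next paragraph explains, is everything still sitting in that buffer when disaster strikes.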

There is a downside to cloud storage gateways: your data doesn't truly exist until it's written to the other side. The tornado that flattens your data center will flatten your gateway too. Any data sitting in its buffer never makes it to the other side.

This can be somewhat mitigated by something like an ioSafe gateway that can survive fire and flood (and, if it has flash inside, quite a physical pounding) without losing data.

Combine that with high-capacity SSDs and you have enough for a modest cloud storage gateway, but one with real-world limits. You're not going to absorb a 3D XPoint array's worth of writes without massacring the latency. Also: a tornado can still carry one of these things off, though the part where you can bolt it to the floor helps some.

It is for this reason that offsite backups are still probably a good idea. Unless you live somewhere free of disasters, the axiom "If your data doesn't exist in at least two places, then it doesn't exist" still applies.

Your Recovery Time Objective (RTO) matters here too. Even if your ioSafe survives the disaster, it can be days or weeks before you dig the thing out of the rubble, bring the drives online, and let it spool out its data. For some organizations that is acceptable; not losing data is all that matters. Many companies, however, would rather lose a little bit of data if it meant being back online faster.

You can't plan for everything

Metro area clusters aren't the holy grail of RPO 0. Cities aren't that big. The tornado, earthquake, tsunami, or what-have-you that obliterates the primary data center can also take out the secondary, if it's close enough. Most companies that can afford metro clusters for RPO 0 also spool that data to a third site that's farther away, to cover this possibility. The third site won't be RPO 0, but with the right gateway technologies it can still come pretty close.

What's becoming clear, however, is that we're reaching the limits of what technology can do for us in this area. We are constantly making newer, faster, and more outrageous storage technology. But we've already hit the limits on how fast (latency-wise) we can get that data out of harm's way.

We've tried building physically resilient storage, but there are limits to the punishment it can absorb as well. (Also, there's a heat dissipation problem that means cramming a 3D XPoint storage system into an ioSafe is rather unlikely.)

In the real world, this leaves businesses making hard choices about what applications need what RPOs and at what RTOs. It's why backups are hard, and why needs assessment for them is always miserable. It also means that the next frontier in storage isn't going to be "more speed", but "more intelligence".

The next great problem to be conquered is building automated systems that can make decisions about what data to back up to what devices/sites in what order: automated prioritization based on an actual understanding of the content of the writes being made. I'm sure "machine learning" and "artificial intelligence" can be worked in there as buzzwords.
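To make that concrete, here is a deliberately naive sketch of what "prioritization based on content" might look like – the datasets and scoring rules are invented for illustration, not anyone's shipping product:

```python
# Hypothetical sketch: rank pending data for replication by how painful
# it would be to lose, rather than by arrival order.

def priority(write):
    """Lower score = replicate sooner."""
    score = 10
    if write.get("dataset") == "transactions":
        score -= 5                     # financial records go first
    if write.get("regulated"):
        score -= 3                     # compliance data next
    if write.get("dataset") == "scratch":
        score += 5                     # temp data can wait for the nightly run
    return score

pending = [
    {"id": 1, "dataset": "transactions", "regulated": True},
    {"id": 2, "dataset": "scratch"},
    {"id": 3, "dataset": "home_dirs"},
]

for w in sorted(pending, key=priority):
    print(w["id"], w["dataset"])       # ships: transactions, home_dirs, scratch
```

The hard part, of course, isn't the sorting; it's getting the system to understand what the data actually is without a human tagging it.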

The amount of data under management is constantly growing. As we automate so much of the rest of our data centers, backups and disaster recovery have sadly stagnated. This market is due for "disruption." I can't wait. ®
