Original URL: http://www.theregister.co.uk/2011/05/11/local_vs_global_cloud/

What Carthage tells us about Amazon, Fukushima and the cloud

Rubbing salt in the wounds

By Tim Worstall

Posted in Cloud, 11th May 2011 11:26 GMT

Comment Sometimes, the Anglo Saxon parts of our language, rich though they are in epithets, insults and methods of swearing, simply aren't enough to allow one to express the complete and total lunacy of some people out there.

In such cases new words are required, say, "McKibben". The need for that particular one is because a certain Bill of that ilk suggested that the solution to the Japanese earthquake and tsunami was that everyone should eat local food. Our language may indeed be replete with words for crass stupidity, but none are really sufficient to condemn such nonsense.

No, I'm not talking about the light dusting of radiation that the spinach got, nothing to do with matters nuclear: rather that in the long term the greatest environmental damage from those events is almost certainly going to be from the tsunami. Not because of the way in which it roared in from the ocean, its height or because of those it killed along the way. No, it is because the giant wave was made of salt water. And yes, salt water does destroy farmland; that's why the Romans ploughed the fields of Carthage with salt, so that nothing would grow there again.

So what our McKibben has just suggested is that the solution to a natural disaster is that everyone should die: for that's what will happen when 30 feet of salt water flows across the fields you were hoping to grow next year's crop on. If you're going to be all Green and rely upon local foods that is.

Sadly, that Grauniad piece is only one of the more extreme examples of the ignorance one can find in corners of the collective environmental psyche. There's not just this generally held view that we should all only eat from our immediately surrounding area, thus being set up for starvation by any passing flood, drought, blight or plague of locusts, but a real and potentially terminal ignorance of the findings of economics over the centuries.

Famines, arbitrage and stable food-supply systems

No, I'm not talking about interest rates, banking systems or even the patriarchal tendencies of capitalism. I mean the real stuff: we can track back and see the decline in famines across Europe as transport systems improved.

We've even got cute methods of working out how much improving transport improved food security. If the price of wheat in Amsterdam moves with that in London, then we can be pretty sure that people are trading wheat between the two, in a process called arbitrage. Similarly Warsaw and Moscow (although there's not much trade between those two, actually); and as time passes we find New York, then Chicago, even, by the 1890s, Odessa, with wheat prices converging upon one global price as transport technologies advance. And yes, famine does decline from being a local but common phenomenon to something extraordinarily rare (well, until the Soviets, but that's another matter).
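
For the mechanically minded, the arbitrage process can be sketched in a few lines of code. All the numbers here are invented, nothing to do with real wheat prices: the point is simply that traders buy where wheat is cheap and sell where it's dear, so the gap between two markets shrinks until it's no larger than the cost of shipping between them.

```python
# Toy sketch of arbitrage (invented numbers): traders buy wheat where it
# is cheap and sell where it is dear, so the price gap between two markets
# shrinks until it is no larger than the cost of shipping.

def arbitrage_step(price_a, price_b, transport_cost, volume_effect=0.5):
    """Nudge both prices toward each other while the gap exceeds transport cost."""
    gap = price_a - price_b
    if abs(gap) <= transport_cost:
        return price_a, price_b  # no profitable trade left: prices have converged
    shift = (abs(gap) - transport_cost) * volume_effect
    if gap > 0:
        return price_a - shift, price_b + shift
    return price_a + shift, price_b - shift

london, amsterdam = 100.0, 80.0
for _ in range(20):
    london, amsterdam = arbitrage_step(london, amsterdam, transport_cost=5.0)

# The cheaper the shipping, the tighter the band within which prices converge.
print(abs(london - amsterdam))  # gap is now no bigger than the 5.0 transport cost
```

Cut the transport cost and the two prices pull together more tightly, which is the nineteenth-century story in miniature.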

The lesson from this is that a reliable, safe food-supply system is one with a multitude of suppliers in different geographic regions, so that any fraction of the supply hit by weather or pests can be replaced, with minimal disruption, from an area that wasn't. That is, far from Our McKibben's suggestion that we should be locavores, we should be globavores in order to ensure a sufficient amount to eat at all times: able, if bad things do happen, to avoid trying to survive on lightly glowing crops that won't grow in salt-logged land.

One of the joys of this economics stuff though is that you can abstract general rules from such specific examples. It's been a particular such joy to watch the US military realise that the magnets they put into the missiles they send to Taiwan to protect the island against the mainland actually come from the mainland ... the entire rare earths magnet industry having migrated there over the past couple of decades. Thus the rather unseemly scramble to get a production system built again outside the Middle Kingdom so as to have diverse supply, not supply that depends on the very people you might be lobbing the bombs at.

The cloud connection: diversifying supply

So also with this cloud computing thing: the idea that you might send the tough stuff off to be done by experts outside your own organisation is just another, different, economic theory in action. By dividing labour, we make it possible for the labourer to specialise, and specialists tend to be (though not necessarily) more productive at whatever the task is.

So if we slice and dice matters into those who look after data (“them”) and those who use and manipulate data (“us”) we should all be able to get more done: straight Adam Smith that is. We could even add some Ricardo to work out who should be doing which part, the playing out of comparative advantage. Perhaps the car company should concentrate on the car bit, and the computer company on the computing bit?
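The Ricardo bit really is just arithmetic. A toy example, with invented numbers: even if the computer company is absolutely better at both jobs, each firm should do the task at which its opportunity cost is lower.

```python
# Textbook comparative-advantage arithmetic, with invented numbers. Even if
# the computer company is absolutely better at both jobs, total output rises
# when each firm does the task with the lower opportunity cost.

# Hours of labour needed per unit of output.
hours = {
    "car_co":      {"cars": 4, "computing": 8},
    "computer_co": {"cars": 3, "computing": 2},
}

def opportunity_cost_of_computing(firm):
    """Cars forgone to produce one unit of computing."""
    return hours[firm]["computing"] / hours[firm]["cars"]

# The car company gives up 2 cars per unit of computing; the computer
# company gives up about 0.67. So: cars to the car company, computing to
# the computer company, even though the latter is faster at both.
print(opportunity_cost_of_computing("car_co"))
print(opportunity_cost_of_computing("computer_co"))
```

Which is why "the computer company is better at computing than you are" isn't even the strongest argument for outsourcing: comparative advantage says the division pays even when it isn't.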

Such simplistic economic thinking does in fact work, just like simplistic anything else – right up until the point that it doesn't. For as we found out a couple of weeks back, Amazon's version of cloud computing doesn't seem to do quite what was advertised. The bits that were supposedly insulated from each other weren't so insulated. Some data was lost.

We even found out that the outage originated from very much the same cause as the Chernobyl disaster: human error. In this case, instead of turning off every safety measure possible and then having a play, they simply pointed their datahose at the wrong router, one unable to deal with the volume.

Does this mean that division and specialisation are silly? That cloud computing is now a dead duck? No, most certainly not: what it does mean though is that we need to ensure a few more of those universal truths uncovered by the dismal science are added to our understanding and manipulation of the technology. In this case, diversity of supply, as advocated here.

A little redundancy never hurt anyone

True, that's what a lot of people already thought they were buying with this cloud stuff: vast numbers of servers in different locations, different power supplies, multiple points of access on and off t'internet and all those good things. And indeed, they were getting those things. But they were getting them with the bottleneck of one access point on and off that system: that bottleneck being Amazon's management of all that multitude of kit. And if you accept that Murphy's Law also applies to economics (which I most certainly would), then you would have predicted that, since that was the only place where the grand plan could go wrong, that's where it would go wrong: as indeed it did.

Which leads us to the lesson that just as we want redundancy at the level of components in the system (the memory or processor chip, the server, the router or route), so we might also want it at the level of the entire system. Multiple suppliers of cloud computing, to provide for the possibility that any one of them might fall over, perhaps?
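
At its crudest, diversity of supply at the whole-system level is just a failover loop across independent suppliers. A minimal sketch, with hypothetical provider names and stand-in fetch functions (none of this is any real cloud API):

```python
# Hypothetical sketch of diversity of supply at the whole-system level:
# try each cloud supplier in turn and fall back when one is down. The
# provider names and fetch functions are invented for illustration.

class ProviderDown(Exception):
    """Raised when a supplier's region is unavailable."""

def fetch_with_failover(key, providers):
    """Try each (name, fetch) pair in order; return the first success."""
    failures = []
    for name, fetch in providers:
        try:
            return name, fetch(key)
        except ProviderDown as exc:
            failures.append((name, str(exc)))  # note the outage, move on
    raise RuntimeError(f"all suppliers failed: {failures}")

def cloud_a(key):   # stands in for the supplier having its bad fortnight
    raise ProviderDown("region unavailable")

def cloud_b(key):   # an independent second supplier
    return f"data-for-{key}"

name, data = fetch_with_failover("orders", [("cloud_a", cloud_a), ("cloud_b", cloud_b)])
print(name, data)  # the request survives cloud_a's outage via cloud_b
```

The hard part, of course, is not the loop but keeping the data genuinely replicated across suppliers so that the fallback has something to serve.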

It would be nice to think that this has just opened up another level of integration for some bright sparks: advising people on how to manage a cloud of clouds, but of course that management itself would then become the one bottleneck that could, and therefore would, go wrong. Or are we getting to silly levels of recursion now?

Let's move the analogy to something even older than economics: agriculture. Over the millennia since we invented the idea, we've shifted where the resilience in the system lives. We once had multiple crops on the same farm, a bet that not all would be wiped out by passing troubles; we later increased food fragility at the local level through monocultures; and then, further on in history, we increased our famine resilience with geographic dispersion. The principle does seem to work well enough for food: the risk of someone starving when they're plugged into the global system is the lowest it's ever been.

Cloud computing needs to find the correct balance of risks, and will find this through the usual market-based experimentation. Will multiple components make systems sufficiently resilient? Or will multiple systems be necessary to provide that desired level of redundancy? ®