Gelsinger stuns analysts and colleagues with storage pool plan
EMC will sync all your data, worldwide
Comment EMC's Pat Gelsinger is proposing unified storage/server systems that span the globe and function as a single virtual resource pool, using YottaYotta technology.
The details, such as they are, are in an EMC webcast presented at an analyst briefing event last week. It asserts that EMC will be able to overcome limitations of latency, bandwidth and cache coherency such that many people, based in LA, London and Hong Kong - wherever - could access the same data at the same time and update it without the master data getting out of sync.
No one would have to wait too long for the first sight of the data or to access its full content. The caching system controls would ensure that the overall system never lost track of which version of which piece of block data was correct.
It's a small job - one of those things that have never been done before but which would be nice to do, like cold fusion, flying to Mars or staying underwater without breathing apparatus for an hour. Have Gelsinger and EMC lost their marbles?
When a person in London looks at data resident on a disk drive in Sydney, Australia, it takes a perceptible fraction of a second for the data to start arriving - the global distance network latency problem. Even at the speed of light, data travels only so fast. Then the rest of the data takes time to arrive. If you want to stream a movie from Sydney to London you need a large pipe - a very large and very expensive network pipe. This is the bandwidth problem.
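The scale of the problem is easy to sketch on the back of an envelope. The figures below are illustrative assumptions, not EMC's numbers: light in optical fibre travels at roughly two-thirds of its vacuum speed, and London to Sydney is around 17,000 km over the ground.

```python
# Back-of-envelope numbers for the distance problem.
# All figures are rough illustrative assumptions.

FIBER_KM_PER_SEC = 200_000   # light in fibre: ~2/3 of c in vacuum
LONDON_SYDNEY_KM = 17_000    # approximate great-circle distance

one_way_s = LONDON_SYDNEY_KM / FIBER_KM_PER_SEC
round_trip_s = 2 * one_way_s

print(f"one-way light-speed floor: {one_way_s * 1000:.0f} ms")   # 85 ms
print(f"round-trip floor:          {round_trip_s * 1000:.0f} ms")  # 170 ms

# Bandwidth is a separate cost: a 25 Mbit/s HD movie stream needs a
# sustained 25 Mbit/s per viewer, however low the latency is.
viewers = 1000
print(f"{viewers} viewers need {viewers * 25 / 1000:.0f} Gbit/s sustained")
```

Real fetches are slower still - protocol handshakes, routing detours and congestion all sit on top of that physical floor, which no caching trick can remove, only hide.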
Finally, if one person in London is reading the file and modifying it at the same time as another person in Montreal and a third in San Francisco then their updates are first stored in their local systems and these caches have to be kept coherent. This is the cache coherency issue.
Gelsinger is EMC's President and COO for EMC Information Infrastructure Products. His pitch about this DaaD (Data at a Distance) scheme is not available online yet - it's being prepared by EMC. Chuck Hollis, EMC's master blogger on such topics, writes:
We are all well aware of the issues around latency, bandwidth and consistency - and we architect our solutions around these traditional obstacles. Indeed, it affects so much of our IT thinking that overcoming traditional perceptions will be a serious obstacle to any new enabling technology in this category.
But without overcoming distance - and I mean more than few dozen kilometers - none of this is particularly interesting as a strategic enabling technology ... We propose nothing less than breaking this barrier in a fundamental and meaningful way.
EMC's DaaD scheme relies on federating unified and virtualised server and storage systems at a global scale, first with local federations and then federating these globally. It proposes sticking a YottaYotta technology-based appliance in front of each local federation and connecting them to build the global scheme.
YottaYotta was a Canadian startup building a distributed caching technology for WAN-based storage infrastructures. NetStorage GSX boxes sat in front of local storage resources, connecting to them via InfiniBand or GigE, and were clustered across WAN links to make a collection of local block storage resources look like a single resource.
The idea was to make access to data held hundreds or thousands of miles away appear to be local. EMC invested in YottaYotta in 2006 and then bought some of its IP and hired some of its people when it collapsed in 2008.
What appears to be happening is that when a user accesses a file in this DaaD scheme it is brought to him and held locally, cached. He then makes a change to it, and the change is propagated back to the master or golden copy of the file. That means the master file has changed and everyone else simultaneously accessing the file needs to get the changed data.
That logically means that other locally cached copies have to be tagged so that accesses to the area of data holding the changed blocks trigger a refresh from the master copy.
There is a lot of data access control work going on here and a lot of cache status messages flying around the network. The system has to track which copies of which master files are where, realise when updates occur locally, time them using a global clock, collect updates back at the master copy and instantly propagate the facts of the update to all other nodes holding copies of that master file. For example: blocks 121,000 to 123,000 of file X have changed.
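The bookkeeping described above can be sketched in miniature. This is a minimal illustration of block-range invalidation under stated assumptions - the class and method names are hypothetical, not YottaYotta's or EMC's design: the master propagates only the fact that blocks changed, and each cache refetches stale blocks on its next read.

```python
# Minimal sketch of distributed block-range invalidation.
# Names and structure are illustrative assumptions, not EMC's scheme.

class CacheNode:
    """A local cache holding copies of some blocks of the master file."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}    # block number -> data
        self.stale = set()  # blocks invalidated by the master

    def invalidate(self, first, last):
        # Master says blocks first..last changed: tag them stale so the
        # next read refreshes instead of serving out-of-date data.
        self.stale.update(range(first, last + 1))

    def read(self, block, master):
        if block in self.stale or block not in self.blocks:
            self.blocks[block] = master.blocks[block]  # refetch from master
            self.stale.discard(block)
        return self.blocks[block]

class MasterCopy:
    """The golden copy; knows every node caching parts of the file."""
    def __init__(self):
        self.blocks = {}
        self.replicas = []

    def write(self, block, data):
        self.blocks[block] = data
        for node in self.replicas:         # propagate only the *fact*
            node.invalidate(block, block)  # of the change, not the data

# A writer in London updates a block; a reader in Hong Kong sees it.
master = MasterCopy()
london, hongkong = CacheNode("london"), CacheNode("hongkong")
master.replicas += [london, hongkong]

master.write(121_000, b"v1")
assert hongkong.read(121_000, master) == b"v1"
master.write(121_000, b"v2")                 # invalidates cached copies
assert hongkong.read(121_000, master) == b"v2"
```

The hard part, of course, is doing this across WAN links with the latencies sketched earlier, with a global clock to order conflicting writes - none of which this toy single-process version has to face.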
It has to do this distributed cache coherence on a global scale for possibly billions of files and millions of users with master copies held all over the world.