IT'S ALIVE! IT'S ALIVE! Google's secretive Omega tech just like LIVING thing
'Biological' signals ripple through massive cluster management monster
Exclusive One of Google's most advanced data center systems behaves more like a living thing than a tightly controlled provisioning system. This has huge implications for how large clusters of IT resources are going to be managed in the future.
"Emergent" behaviors have been appearing in prototypes of Google's Omega cluster management and application scheduling technology since its inception, and similar behaviors are regularly glimpsed in its "Borg" predecessor, sources familiar with the matter confirmed to The Register.
Emergence is a property of large distributed systems. It can lead to unforeseen behavior arising out of sufficiently large groups of basic entities.
Just as biology emerges from the laws of chemistry, ants give rise to ant colonies, and intersections and traffic lights can bring about cascading traffic jams, so too do the ricocheting complications of vast fields of computers allow data centers to take on a life of their own.
The emergent traits Google's Omega system displays mean that the placement and prioritization of some workloads are not entirely predictable, even by Googlers. And that's a good thing.
"Systems at a certain complexity start demonstrating emergent behavior, and it can be hard to know what to do with it," says Google's cloud chief Peter Magnusson. "When you build these systems you get emergent behavior."
By "emergent behavior", Magnusson means the sometimes unexpected ways in which Omega provisions compute clusters, and the curious system-wide behaviors that follow. The chaos stems from the 10,000-plus-server scale at which the system runs, and from its shared-state, optimistic-concurrency architecture.
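In the published Omega design, every scheduler works from its own snapshot of the whole cell and claims resources optimistically, retrying when it loses a race to another scheduler. Here's a minimal Python sketch of that idea – the class and function names, and the cores-only resource model, are our own simplifications, not Google's code:

```python
class CellState:
    """Shared view of a cluster 'cell': machine -> free CPU cores."""
    def __init__(self, machines):
        self.free = dict(machines)   # the authoritative copy
        self.version = 0             # bumped on every successful commit

    def snapshot(self):
        return self.version, dict(self.free)

    def commit(self, version, machine, cores):
        # Optimistic concurrency: the claim succeeds only if nobody
        # else committed since this scheduler took its snapshot.
        if version != self.version or self.free[machine] < cores:
            return False             # conflict - caller must retry
        self.free[machine] -= cores
        self.version += 1
        return True

def schedule(cell, cores_needed, retries=10):
    """One scheduler's loop: snapshot, pick a machine, try to commit."""
    for _ in range(retries):
        version, view = cell.snapshot()
        for machine, free in view.items():
            if free >= cores_needed:
                if cell.commit(version, machine, cores_needed):
                    return machine
                break                # lost the race: re-snapshot, retry
    return None

cell = CellState({"m1": 4, "m2": 8})
print(schedule(cell, 6))   # -> m2
print(schedule(cell, 6))   # -> None (only 4 + 2 cores left)
```

With many schedulers snapshotting and committing at once, which jobs land where depends on the precise interleaving of those retries – one source of the non-determinism described above.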
"Systems at a certain complexity start demonstrating emergent behavior, and it can be hard to know what to do with it"
Omega was created to help Google efficiently parcel out resources to its numerous applications. It is unclear whether it has been fully rolled out, but we know that Google is devoting resources to its development and has tested it against very large Google cluster traces to assess its performance.
Omega handles the management and scheduling of various tasks, placing apps onto the best infrastructure for their needs in the time available.
Inside a Google data center ... Where's the software to run the software?
It does this by letting Google developers select a "priority" for an application according to the needs of the job, the expected runtime, its urgency, and uptime requirements. Jobs relating to Google search and ad platforms will get high priorities while batch computing jobs may get lower ones, and so on.
Omega nets together all the computers in a cluster and exposes this sea of hardware to the application layer, where an Omega sub-system arbitrates the priorities of countless tasks and neatly places them on one, ten, a hundred, or even more worker nodes.
"You're on this unstable quicksand all the time and just have to deal with it," Google senior fellow Jeff Dean told The Reg. "Things are changing out from under you fairly readily as the scheduler decides to schedule things or some other guy's job decides to do some more work."
Some of these jobs will have latency requirements, and others could be scattered over larger collections of computers. Given the thousands of tasks Google's systems can run, and the interconnected nature of each individual application, this intricacy breeds a degree of unexpectedness.
"There's a lot of complexity involved, and one of the things that distinguishes companies like Google is the degree to which these kinds of issues are handled," said John Wilkes, who is one of the people at Google tasked with building Omega. "Our goal is to provide predictable behaviors to our users in the face of a huge amount of complexity, changing loads, large scale, failures, and so on."
The efficiencies brought about by Omega mean Google can avoid building an entirely new data center, saving it scads and scads of money and engineering time, Wilkes told former Reg man Cade Metz earlier this year.
"Strict enforcement of [cluster-wide] behaviors can be achieved with centralized control, but it is also possible to rely on emergent behaviors to approximate the desired behavior," Google wrote in an academic paper [PDF] that evaluated the performance of Omega against other systems.
By handing off job scheduling and management to Omega and Borg, Google has figured out a way to get the best performance out of its data centers, but this comes with the cost of increased randomness at scale.
"What if the number of workers could be chosen automatically if additional resources were available, so that jobs could complete sooner?" Google wrote in the paper. "Our specialized [Omega] MapReduce scheduler does just this by opportunistically using idle cluster resources to speed up MapReduce jobs. It observes the overall resource utilization in the cluster, predicts the benefits of scaling up current and pending MapReduce jobs, and apportions some fraction of the unused resources across those jobs according to some policy."
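The policy the paper describes – observe idle capacity, predict which jobs benefit, hand out a fraction of the slack – can be caricatured in a few lines. Everything here is our own invention for illustration: the function name, the fifty-percent share, and the crude "benefit" model that weights jobs by work remaining:

```python
def apportion_idle(idle_slots, jobs, share=0.5):
    """Toy opportunistic scaler: hand a fraction of idle slots to
    running MapReduce jobs, weighted by predicted speed-up.
    jobs maps job name -> tasks still remaining."""
    budget = int(idle_slots * share)   # keep some headroom in reserve
    total = sum(jobs.values()) or 1
    # Predicted benefit here is simply proportional to work remaining.
    return {name: budget * remaining // total
            for name, remaining in jobs.items()}

extra = apportion_idle(idle_slots=100, jobs={"logs": 60, "index": 40})
print(extra)   # -> {'logs': 30, 'index': 20}
```

The real scheduler's prediction and policy are far richer, but the shape is the same: spare capacity flows toward whatever happens to be running, so a job's speed depends on its neighbors.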
This sort of fuzzy chaos represents the new normal for massive infrastructure systems. Just as with other scale-out technologies – such as Hadoop, NoSQL databases, and large machine-learning applications – Google is leading the way in coming up against these problems and having to deal with them.
First in the firing line
Omega matters because soon after Google runs into problems, they trickle down to Facebook, Twitter, eBay, Amazon, and others, and then into general businesses. Google's design approaches tend to crop up in subsequent systems, either through direct influence or independent development.
"You can get very unstable behavior. It's very strange – it behaves like biological systems from time to time"
Omega's predecessor also behaved strangely, Sam Schillace, VP of engineering at Box and former Googler, recalled.
"Borg had its sharp edges but was a very nice service," he told us. "You run a job in Borg at a certain priority level. There's a low band [where] anybody can use as much as they want," he explained, then said there's a production band which has a higher workload priority.
"Too much production band stuff will just fight with each other. You can get very unstable behavior. It's very strange – it behaves like biological systems from time to time," he says. "We'll probably wind up moving in some of those directions – as you get larger you need to get into it."
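A rough sketch of how such priority bands can play out. The band values, the ten-core machines, and the eviction rule are assumptions of ours for illustration – not Borg's actual mechanics – but they capture Schillace's point: higher bands preempt lower ones, while same-band jobs just fight:

```python
PRODUCTION, BATCH, BEST_EFFORT = 2, 1, 0   # illustrative bands only

def place(job, machines, capacity=10):
    """Place (name, band, cores) on a machine, preempting strictly
    lower bands if that frees enough room. Toy model."""
    name, band, cores = job
    for machine, running in machines.items():
        used = sum(c for _, _, c in running)
        if capacity - used >= cores:           # fits without eviction
            running.append(job)
            return machine, []
        evictable = [t for t in running if t[1] < band]
        freed = sum(c for _, _, c in evictable)
        if capacity - used + freed >= cores:   # fits after preemption
            for t in evictable:
                running.remove(t)              # kick out lower-band work
            running.append(job)
            return machine, [t[0] for t in evictable]
    return None, []   # same-band jobs contend: nobody gets evicted

machines = {"m1": [("crawl", BEST_EFFORT, 8)]}
print(place(("search", PRODUCTION, 6), machines))
# -> ('m1', ['crawl'])
```

Run many placements like this across thousands of machines and the fate of any one low-band job – evicted, rescheduled, evicted again – quickly becomes hard to predict.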
Though Omega is obscured from end users of Google's myriad services, the company does have plans to use some of its capabilities to deliver new types of cloud services, Magnusson confirmed. The company could use the system as the foundation of spot markets for virtual machines in its Compute Engine cloud, he said.
"Spot markets for VMs is a flavor of trying to adopt that," he said. "To adopt that moving forward [we might] use SLA bin packing. If you have some compute jobs that you don't really care exactly what is done – don't care about losing one percent of the results – that's a fundamentally different compute job. This translates into very different operational requirements and stacks."
Google wants to "move forward in a way so you can represent that to the developer," he said, without giving a date.
Omega's unpredictability is a strength when it comes to portioning out workloads, and the chaos within it stems less from its specific methodology than from the fact that at scale, in all things, strange behaviors occur – something both encouraging and, in this hack's mind, humbling.
"When you have multiple constituencies attempting the same goal, you end up with unexpected behaviors," said JR Rivers, the chief of Cumulus Networks and a former Googler. "I would argue that [Omega's unpredictability is] neither the outcome of a large system nor specific to a monolithic stack, but rather the law of unintended consequences."
A mind of its own? It seems that way. Just ask open-source Mesos
Already, researchers at the University of California at Berkeley have taken tips from Google to create their own variant, Apache Mesos, an open-source clone of Google's Borg running at large web properties such as Twitter and Airbnb.
However, Mesos is also exhibiting strange behaviors.
"Depending on a combination of things like weights and priorities there's a potential reallocation of resources across and around these jobs that has a compounding effect that can exaggerate these non-determinisms," said Benjamin Hindman, VP of Apache Mesos.
"For some jobs that are good at dealing with these non-determinisms [Omega's behavior] is totally fine. For some of these jobs it can mean much decreased latency to finish."
As noted, emergence springs from scale. So while some engineers might prefer a completely deterministic system, that may soon prove impossible for sufficiently large data centers.
Instead, applications will need to be built with all the reliability features that big business needs – such as transaction guarantees, distributed locking, and coherence – while running in a sufficiently distributed manner on systems like Mesos and Borg, tolerating failures without disrupting overall reliability.
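What "programming around" non-determinism looks like at the application level can be as simple as assuming any placement may die and rerunning the work elsewhere. A minimal sketch, with ConnectionError standing in for a lost machine (the wrapper and its names are ours, not any particular framework's API):

```python
def run_with_retries(task, attempts=3):
    """Illustrative failure-tolerant wrapper: rerun a task when the
    'machine' it landed on dies mid-flight."""
    for _ in range(attempts):
        try:
            return task()
        except ConnectionError:   # stand-in for a machine failure
            continue              # reschedule elsewhere and try again
    raise RuntimeError("task failed on every attempt")

# A task that fails twice before its third placement succeeds.
outcomes = iter([ConnectionError, ConnectionError, "result"])

def flaky_task():
    outcome = next(outcomes)
    if outcome is ConnectionError:
        raise ConnectionError
    return outcome

print(run_with_retries(flaky_task))   # -> result
```

Real systems layer replication, checkpointing, and speculative re-execution on top of this idea, but the principle is the same: correctness must not depend on any single placement surviving.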
"There's two directions to go out here - one is to go out to the system and try and eliminate the non-determinism, the other is tell the software there's inherent non-determinism and program around that," Hindman said.
"While I'd love to tell someone 'your interface is a completely deterministic user interface' oftentimes the cost of doing that is so prohibitive you couldn't do it. You might be able to do something like that for a very particular type or class of apps [but] if you do it for one class of app it could have really bad effects on one other class of app."
First to feel the effects ... What Google faces now, Twitter and Facebook will hit soon enough
All applications need to be built to sustain failures, slowdowns, and obscure latency scenarios without falling over. Some companies are already doing this, such as Salesforce with its Keystone system.
"Running a million-plus jobs per day – at that point for a given job you might see variation"
The job of a system like Omega, or Borg, or Mesos, or even the revamped MapReduce structure of the YARN resource negotiator in Hadoop version 2, is to hide as much of this as possible from the developer straddling the stack. But some programmers will notice when they deploy it at sufficient scale.
"We've had a lot of experience running YARN at scale now," said Arun Murthy, the founder and architect of Hadoop specialist Hortonworks. "YARN cannot guarantee at scale. We're talking about running a million-plus jobs per day – at that point for a given job you might see variation."
This variation could be the placement of replicas for certain jobs, he said. "Today you might get resources in host 1 and tomorrow in host 82."
By exposing some level of non-determinism to the developer, YARN can give assurances it will make sensible use of compute resources at scale, but on the fringes of sufficiently large clusters, weird things will happen, he admitted.
"It's not an exact science," he says. "What you really need is at very low cost to the end user good performance in the aggregate."
How we learned to stop worrying and embrace chaos
The unpredictable behavior that systems such as Borg, Omega, Mesos, and YARN can display is a direct result of the sheer number of components within them, all jostling for attention.
"My strong belief is that these [emergent properties] manifest in interesting ways in each system as you scale up – I mean, really scale up to 5,000-plus nodes," said Arun Murthy of Hortonworks.
This element of randomness has roots in how we've built low-level components of infrastructure systems in the past.
"There's an emergent behavior that comes out," said Hindman of the Apache Mesos project. "There's all sorts of reasons for that. When it gets to large scale there's a combination of the fact that machine failures now at a large scale can change the property of the job whereas at the smaller scale there wasn't probability of machine failures as much, the second one is there's a lot of other non-determinism in and around the job."
In the past, similar behaviors have been seen in the way garbage collectors work in Java virtual machines, he said. "All of a sudden now you'll get weird things going on like things in the JVM will make those weird behaviors develop... a lot of this stuff starts to creep up at larger scale."
Hindman finds another example in the behavior of any highly concurrent parallel system with numerous cores running hundreds of threads. "You'd see a lot of interestingly similar behaviors. Just based on the Linux thread scheduler, the I/O thread scheduler these types of systems often have a lot of the same non-determinism issues but it's compounded because we have many, many layers of this."
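That effect is easy to reproduce in a few lines: which thread the OS runs first is outside the program's control, so the completion order can differ from run to run even though the final result is stable. A Python toy:

```python
import threading

def nondeterministic_order(n=4):
    """Launch n threads; the OS thread scheduler, not the program,
    decides the order in which they append to the shared list."""
    order, lock = [], threading.Lock()

    def work(i):
        with lock:
            order.append(i)

    threads = [threading.Thread(target=work, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return order

# The *set* of results is deterministic; the *order* is not.
print(sorted(nondeterministic_order()))   # -> [0, 1, 2, 3]
```

Cluster schedulers stack this same uncertainty many layers deep – kernel schedulers, I/O queues, JVM pauses, network chatter – which is Hindman's "compounded" point.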
Because systems such as YARN, Omega, Borg, Mesos, and so on, are designed to run thousands and thousands of tasks with vast amounts of network chatter, I/O events, and running apps across time periods that vary from milliseconds to months, the chance of a level of this underlying randomness becoming exposed and having a knock-on effect on high-level tasks is much, much higher.
"At scale everything breaks no matter what you do and you have to deal reasonably cleanly with that and try to hide it from the people actually using your system"
Over the long term, approaches like this will make widely deployed intricate tangles of software much more reliable, because it will force developers to design their apps to effectively deal with the shifting quicksand-like hardware pools that their code lives on top of. By programming applications to be able to deal with failures at this scale, software will become more like biological systems with the redundancy and resiliency that implies.
It reminds us of what Urs Hölzle, Google's senior director of technical infrastructure, remarked a couple of years ago: "At scale everything breaks no matter what you do and you have to deal reasonably cleanly with that and try to hide it from the people actually using your system."
With schedulers such as Borg and Omega, and community contributions from Mesos or YARN, the world is waking up to the problems of scale.
Instead of fighting these non-determinisms and rigidly dictating the behavior of distributed systems, the community has created a fleet of tools to coerce this randomness into some semblance of order. In doing so, it has turned the randomness and confusion that lurks deep within any large, sophisticated data center from a barely seen, cloud-downing beast into an asset that forces apps to be stronger, healthier, and more productive. ®