Facebook tops up Apache Project graph database with fresh code

'You know what's cooler than a billion edges? A trillion edges!'

Facebook has shoved code back into the trunk branch of Giraph, an open source graph-processing Apache project that mimics Google's advanced "Pregel" system.

The upgrades let Giraph process graphs with trillions of edges – the connections between entities in a graph – and were announced by the company in a blog post on Wednesday. In the post, engineers explained why they chose to bring Giraph into the social network's software ecosystem, and how they extended it to handle larger graphs in a less memory-intensive way.

Giraph is an open source implementation of Google's Pregel graph-processing system, which the Chocolate Factory built to let it mine its vast array of datapoints and spot valuable interconnections. The company published information on Pregel in June 2009.

Facebook uses Giraph to help it analyse its massive social network, and decided to upgrade the technology in the summer of 2012. By analyzing the data contained in the connections between its users, brands, and groups, Facebook can almost certainly develop better tools to offer its advertisers.

"Analyzing these real world graphs at the scale of hundreds of billions or even a trillion (10^12) edges with available software was impossible last year. We needed a programming framework to express a wide range of graph algorithms in a simple way and scale them to massive datasets. After the improvements described in this article, Apache Giraph provided the solution to our requirements," the engineers wrote.

The company evaluated Apache Hive, GraphLab, and Apache Giraph, and plumped for Giraph because it runs as a MapReduce job and is written in Java, so it interfaces well with Facebook's Java stack.
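In Giraph's vertex-centric model, an algorithm is written from the point of view of a single vertex, which receives messages from its neighbours, updates its own value, and sends new messages out, superstep by superstep. As a flavour of how graph algorithms are expressed in that style, here is a minimal PageRank sketch modelled on Giraph's own bundled example – the BasicComputation API shown is from recent Giraph releases, so treat the exact class and method names as assumptions for other versions:

    import java.io.IOException;

    import org.apache.giraph.graph.BasicComputation;
    import org.apache.giraph.graph.Vertex;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.FloatWritable;
    import org.apache.hadoop.io.LongWritable;

    // PageRank written vertex-by-vertex: each superstep a vertex sums the
    // messages from its neighbours, updates its rank, and pushes its new
    // rank share along its outgoing edges.
    public class PageRankComputation extends
        BasicComputation<LongWritable, DoubleWritable, FloatWritable, DoubleWritable> {

      private static final int MAX_SUPERSTEPS = 30;

      @Override
      public void compute(Vertex<LongWritable, DoubleWritable, FloatWritable> vertex,
          Iterable<DoubleWritable> messages) throws IOException {
        if (getSuperstep() >= 1) {
          double sum = 0;
          for (DoubleWritable message : messages) {
            sum += message.get();
          }
          // Standard PageRank update with a 0.85 damping factor
          vertex.setValue(new DoubleWritable(
              0.15 / getTotalNumVertices() + 0.85 * sum));
        }
        if (getSuperstep() < MAX_SUPERSTEPS) {
          // Split this vertex's rank evenly among its neighbours
          sendMessageToAllEdges(vertex,
              new DoubleWritable(vertex.getValue().get() / vertex.getNumEdges()));
        } else {
          vertex.voteToHalt();
        }
      }
    }

The framework runs compute() once per vertex per superstep, and handles distribution, messaging, and synchronisation across workers itself.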

The main contribution Facebook made to the technology was the implementation of multithreading, which improves Giraph's performance.

"When Giraph takes all the task slots on a machine in a homogenous cluster, it can mitigate issues of different resource availabilities for different workers (slowest worker problem)," the company wrote. "For these reasons, we added multithreading to loading the graph, computation (GIRAPH-374), and storing the computed results (GIRAPH-615)."

By implementing multithreading, the company has seen linear speedup in some CPU-bound applications.
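Those thread counts surface as job configuration knobs. The sketch below is a hypothetical job setup – the property names are believed to match Giraph's GiraphConstants for the GIRAPH-374 and GIRAPH-615 work, and the thread counts are purely illustrative, so check both against your Giraph version:

    import org.apache.giraph.conf.GiraphConfiguration;

    public class TrillionEdgeJobConfig {
      public static GiraphConfiguration configure() {
        GiraphConfiguration conf = new GiraphConfiguration();
        // Property names assumed from Giraph's GiraphConstants; thread
        // counts here are illustrative, not Facebook's settings.
        conf.setInt("giraph.numInputThreads", 8);   // multithreaded graph loading
        conf.setInt("giraph.numComputeThreads", 8); // multithreaded compute (GIRAPH-374)
        conf.setInt("giraph.numOutputThreads", 8);  // multithreaded output (GIRAPH-615)
        return conf;
      }
    }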

The company has also reduced the overall memory footprint of the system, which in earlier iterations was a "memory behemoth".

It achieved this by serializing vertices into byte arrays rather than storing them as Java objects, and by serializing messages on the server. This also gave the company a predictable memory model for vertices, which let it better figure out the technology's resource consumption.

"Given that there are typically many more edges than vertices, we can roughly estimate the required memory usage for loading the graph based entirely on the edges. We simply count the number of bytes per edge, multiply by the total number of edges in the graph, and then multiply by around 1.5x to take into account memory fragmentation and inexact byte array sizes."

The company also made enhancements to the technology's aggregator architecture to remove bottlenecks that formed when processing large amounts of data.
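For context, an aggregator lets every vertex contribute to a global value each superstep – a sum, a minimum, a count – which the master then reads back. A rough sketch using Giraph's stock LongSumAggregator follows; the class and method names track the standard Giraph API, but verify them against your version:

    import org.apache.giraph.aggregators.LongSumAggregator;
    import org.apache.giraph.master.DefaultMasterCompute;
    import org.apache.hadoop.io.LongWritable;

    // The master registers a summing aggregator, then reads the combined
    // total that all vertices contributed during the previous superstep.
    public class EdgeCountMaster extends DefaultMasterCompute {
      public static final String TOTAL_EDGES = "total.edges";

      @Override
      public void initialize() throws InstantiationException, IllegalAccessException {
        registerAggregator(TOTAL_EDGES, LongSumAggregator.class);
      }

      @Override
      public void compute() {
        LongWritable total = getAggregatedValue(TOTAL_EDGES);
        // total holds the sum of every vertex's contribution last superstep
      }
    }

A vertex contributes by calling aggregate(TOTAL_EDGES, new LongWritable(vertex.getNumEdges())) inside its compute() method; funnelling all those contributions through a single point is the kind of bottleneck Facebook's aggregator rework targets.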

These improvements have dramatically improved the performance of Giraph, Facebook says, allowing it to run an iteration of PageRank on a one-trillion-edge social graph – the largest test Giraph has ever undergone.

"The largest reported real-world benchmarked problem sizes to our knowledge are the Twitter graph with 1.5 billion edges... and the Yahoo! Altavista graph with 6.6 billion edges; our report of performance and scalability on a 1 trillion edge social graph is 2 orders of magnitude beyond that scale."

Few companies have to deal with graphs with trillions (or even billions) of edges for now, but as technologies like the internet of things are deployed widely and seas of sensors start beaming data into massive data stores, the tech will become increasingly relevant to organizations other than social networks, ad slingers (Google), and ecommerce shops (Amazon). ®
