Facebook tops up Apache Project graph database with fresh code

'You know what's cooler than a billion edges? A trillion edges!'

Facebook has shoved code back into the trunk branch of Giraph, an open source graph-processing Apache project that mimics Google's advanced "Pregel" system.

The upgrades let Giraph process graphs with trillions of edges – the connections between entities in a graph database. They were announced by the company in a blog post on Wednesday, in which engineers explained why they chose to bring Giraph into the social network's software ecosystem, and how they extended it to handle larger graphs in a less memory-intensive way.

Giraph is an open source implementation of Google's Pregel graph-processing system, which the Chocolate Factory built to mine its vast array of datapoints and spot valuable interconnections. The company published information on Pregel in June 2009.

Facebook uses Giraph to help it analyse its massive social network, and decided to upgrade the technology in the summer of 2012. By analyzing the data contained in the connections between its peon users, brands, and groups, Facebook can almost certainly develop better tools to offer its advertisers.

"Analyzing these real world graphs at the scale of hundreds of billions or even a trillion (10^12) edges with available software was impossible last year. We needed a programming framework to express a wide range of graph algorithms in a simple way and scale them to massive datasets. After the improvements described in this article, Apache Giraph provided the solution to our requirements," the engineers wrote.
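The framework the engineers refer to is Pregel's "think like a vertex" model: computation proceeds in bulk-synchronous supersteps, and in each one every active vertex reads the messages sent to it, updates its value, sends messages along its out-edges, and may vote to halt. Giraph's real API is Java and far richer, but a minimal sketch of the superstep loop (names and signatures here are illustrative assumptions, not Giraph's API) looks like this:

```python
def run_supersteps(vertices, edges, compute, max_supersteps=30):
    """Run bulk-synchronous supersteps until every vertex halts.

    vertices: dict vertex_id -> value
    edges:    dict vertex_id -> list of target vertex ids
    compute:  user function (vid, value, messages, out_edges, superstep)
              -> (new_value, outgoing {target: message}, halt flag)
    """
    inbox = {v: [] for v in vertices}
    active = set(vertices)
    for superstep in range(max_supersteps):
        if not active:
            break
        outbox = {v: [] for v in vertices}
        next_active = set()
        for vid in active:
            value, outgoing, halt = compute(
                vid, vertices[vid], inbox[vid], edges.get(vid, []), superstep)
            vertices[vid] = value
            for target, msg in outgoing.items():
                outbox[target].append(msg)
            if not halt:
                next_active.add(vid)
        inbox = outbox
        # a halted vertex is woken back up by incoming messages
        active = next_active | {v for v, msgs in outbox.items() if msgs}
    return vertices

def max_compute(vid, value, messages, out_edges, superstep):
    """Toy algorithm: flood the maximum vertex value through the graph."""
    new = max([value] + messages)
    if superstep == 0 or new != value:
        return new, {t: new for t in out_edges}, False
    return new, {}, True
```

On a three-vertex cycle `{1: [2], 2: [3], 3: [1]}` with each vertex starting at its own id, `run_supersteps` converges with every vertex holding the value 3.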

The company evaluated Apache Hive, GraphLab, and Apache Giraph, but plumped for Giraph because it runs as a MapReduce job and is written in Java, so it interfaces well with Facebook's Java stack.

The main contribution Facebook made to the technology was the implementation of multithreading, which improves Giraph's performance.

"When Giraph takes all the task slots on a machine in a homogenous cluster, it can mitigate issues of different resource availabilities for different workers (slowest worker problem)," the company wrote. "For these reasons, we added multithreading to loading the graph, computation (GIRAPH-374), and storing the computed results (GIRAPH-615)."

By implementing multithreading, the company has seen linear speedups in some CPU-bound applications.
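The idea behind the input-loading side of this (GIRAPH-374) is to hand each input partition to its own worker thread rather than parsing serially. A hedged sketch of that pattern – the `parse_partition` helper and the adjacency-list format are assumptions for illustration, not Giraph's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

def parse_partition(lines):
    """Parse 'src dst1 dst2 ...' adjacency lines into an edge dict."""
    part = {}
    for line in lines:
        src, *dsts = line.split()
        part[int(src)] = [int(d) for d in dsts]
    return part

def load_graph(partitions, num_threads=4):
    """Parse each input partition on its own worker thread, then merge."""
    graph = {}
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        for part in pool.map(parse_partition, partitions):
            graph.update(part)
    return graph
```

Because each partition parses independently, the work scales with the number of task slots on the machine – which is exactly the "slowest worker" mitigation the engineers describe.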

The company has also reduced the overall memory footprint of the system, which in earlier iterations was a "memory behemoth".

It achieves this by serializing vertices into a byte array rather than storing them as Java objects, and by serializing messages on the server. By doing this the company also gained a predictable memory model for vertices, which let it better estimate the technology's resource consumption.
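The trick is the same one any language with boxed objects can use: pack the out-edges of a vertex into one flat byte buffer at a fixed width per edge, instead of keeping a collection of per-edge objects with headers and pointers. A minimal sketch in Python's `struct` terms (Giraph's actual representation is a Java byte array, and richer than this):

```python
import struct

def pack_edges(targets):
    """Pack 64-bit target ids into one contiguous buffer, 8 bytes each."""
    return struct.pack(f"<{len(targets)}q", *targets)

def unpack_edges(buf):
    """Recover the target id list from a packed edge buffer."""
    return list(struct.unpack(f"<{len(buf) // 8}q", buf))
```

The payoff is the predictable memory model mentioned above: every edge costs exactly its packed width, with no per-object overhead to guess at.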

"Given that there are typically many more edges than vertices, we can roughly estimate the required memory usage for loading the graph based entirely on the edges. We simply count the number of bytes per edge, multiply by the total number of edges in the graph, and then multiply by around 1.5x to take into account memory fragmentation and inexact byte array sizes."
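That rule of thumb is simple enough to put into one line of arithmetic. Assuming, for illustration, 8 bytes per packed edge on a trillion-edge graph:

```python
def estimate_heap_bytes(bytes_per_edge, total_edges, overhead=1.5):
    """Heap estimate per the quoted rule of thumb: edge bytes times
    edge count, padded ~1.5x for fragmentation and inexact arrays."""
    return int(bytes_per_edge * total_edges * overhead)

# 8 bytes/edge * 10^12 edges * 1.5 = 12 TB, spread across the cluster
estimate_heap_bytes(8, 10**12)
```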

The company also made enhancements to the technology's aggregator architecture to remove bottlenecks that had formed when processing large amounts of data.

These improvements have dramatically improved the performance of Giraph, Facebook says, allowing it to run an iteration of PageRank on a one-trillion-edge social graph – the largest test Giraph has ever undergone.
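A single PageRank iteration in vertex-centric terms: each vertex splits its current rank equally along its out-edges, and every vertex then combines what it received with the damping factor. A minimal standalone sketch (it ignores dangling vertices with no out-edges, which a production run must redistribute):

```python
def pagerank_iteration(ranks, edges, damping=0.85):
    """One simplified PageRank superstep over dicts of ranks and edges."""
    n = len(ranks)
    received = {v: 0.0 for v in ranks}
    for src, targets in edges.items():
        if targets:
            share = ranks[src] / len(targets)  # split rank across out-edges
            for t in targets:
                received[t] += share
    return {v: (1 - damping) / n + damping * received[v] for v in ranks}
```

On a symmetric three-vertex cycle the ranks stay at 1/3 each and still sum to 1, a quick sanity check on the arithmetic.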

"The largest reported real-world benchmarked problem sizes to our knowledge are the Twitter graph with 1.5 billion edges... and the Yahoo! Altavista graph with 6.6 billion edges; our report of performance and scalability on a 1 trillion edge social graph is 2 orders of magnitude beyond that scale."

Few companies have to deal with graphs with trillions (or even billions) of edges for now, but as technologies like the internet of things are deployed widely and seas of sensors start beaming data into massive data stores, the tech will become increasingly relevant to organizations other than social networks, ad slingers (Google), and ecommerce shops (Amazon). ®
