Facebook tops up Apache Project graph database with fresh code

'You know what's cooler than a billion edges? A trillion edges!'

Facebook has shoved code back into the trunk branch of Giraph, an open source graph-processing Apache project that mimics Google's advanced "Pregel" system.

The upgrades let Giraph process graphs with trillions of edges – the connections between entities in a graph database. They were announced by the company in a blog post on Wednesday, in which engineers also explained why they chose to bring Giraph into the social network's software ecosystem, and how they extended it to handle larger graphs in a less memory-intensive way.

Giraph is an open source implementation of Google's Pregel graph-processing system, which the Chocolate Factory built to let it mine its vast array of datapoints and spot valuable interconnections. Google published information on Pregel in June 2009.

Facebook uses Giraph to help it analyse its massive social network, and decided to upgrade the technology in the summer of 2012. By analysing the data contained in the connections between its users, brands, and groups, Facebook can almost certainly develop better tools to offer its advertisers.

"Analyzing these real world graphs at the scale of hundreds of billions or even a trillion (10^12) edges with available software was impossible last year. We needed a programming framework to express a wide range of graph algorithms in a simple way and scale them to massive datasets. After the improvements described in this article, Apache Giraph provided the solution to our requirements," the engineers wrote.

The company evaluated Apache Hive, GraphLab, and Apache Giraph, but plumped for Giraph because it runs as a MapReduce job and is written in Java, so it interfaces well with Facebook's Java stack.

The main contribution Facebook made to the technology was the implementation of multi-threading, which improves the performance of Giraph.

"When Giraph takes all the task slots on a machine in a homogenous cluster, it can mitigate issues of different resource availabilities for different workers (slowest worker problem)," the company wrote. "For these reasons, we added multithreading to loading the graph, computation (GIRAPH-374), and storing the computed results (GIRAPH-615)."

By implementing multithreading, the company has seen linear speedups in some CPU-bound applications.
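As a rough illustration of the idea – with an invented `PartitionedCompute` class, not Giraph's actual API – a worker can fan its graph partitions out across a thread pool so one Hadoop task uses every core on the machine, then barrier on the results before the next superstep:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of per-worker multithreaded computation in the
// spirit of GIRAPH-374: split the worker's partitions across a pool.
public class PartitionedCompute {
    // A trivial stand-in for a graph partition: an array of vertex values.
    static long sumPartition(long[] partition) {
        long sum = 0;
        for (long v : partition) sum += v;   // per-vertex "compute" step
        return sum;
    }

    public static long computeAll(List<long[]> partitions, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Long>> results = new ArrayList<>();
            for (long[] p : partitions) {
                // One task per partition; partitions run concurrently.
                results.add(pool.submit(() -> sumPartition(p)));
            }
            long total = 0;
            for (Future<Long> f : results) total += f.get();  // barrier
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        List<long[]> parts = List.of(new long[]{1, 2, 3}, new long[]{4, 5});
        System.out.println(computeAll(parts, 2));   // prints 15
    }
}
```

Because partitions are independent within a superstep, the speedup is close to linear in core count when the work is CPU-bound – which matches what Facebook reports.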

The company has also reduced the overall memory footprint of the system, which in earlier iterations was a "memory behemoth".

It achieves this by serializing vertices into a byte array rather than storing them as Java objects, and by serializing messages on the server. Doing so also gave the company a predictable memory model for vertices, which let it better figure out the technology's resource consumption.

"Given that there are typically many more edges than vertices, we can roughly estimate the required memory usage for loading the graph based entirely on the edges. We simply count the number of bytes per edge, multiply by the total number of edges in the graph, and then multiply by around 1.5x to take into account memory fragmentation and inexact byte array sizes."
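The arithmetic in that quote is straightforward to sketch. The class and the 12-bytes-per-edge figure below are illustrative assumptions, not numbers from Facebook; only the 1.5x fragmentation factor comes from the blog post:

```java
// Back-of-envelope version of the quoted estimate:
// bytes-per-edge * edge count * ~1.5 fragmentation factor.
public class EdgeMemoryEstimate {
    static final double FRAGMENTATION_FACTOR = 1.5; // from the blog post

    // bytesPerEdge: serialized size of one edge (target id + edge value)
    static double estimateBytes(long edgeCount, int bytesPerEdge) {
        return edgeCount * (double) bytesPerEdge * FRAGMENTATION_FACTOR;
    }

    public static void main(String[] args) {
        // Assume 12 bytes per edge (8-byte long target id + 4-byte value).
        // Over a trillion edges that comes to roughly 18 TB cluster-wide.
        double bytes = estimateBytes(1_000_000_000_000L, 12);
        System.out.printf("%.1f TB%n", bytes / 1e12);   // prints 18.0 TB
    }
}
```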

The company also made enhancements to the technology's aggregator architecture to remove bottlenecks that had formed when processing large amounts of data.

These improvements have dramatically improved the performance of Giraph, Facebook says, allowing it to run an iteration of PageRank on a one trillion-edge social graph – the largest test Giraph has ever undergone.
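In the Pregel/Giraph "think like a vertex" model, one PageRank iteration is a superstep: each vertex sums the rank shares it received as messages, updates its own rank, and sends rank/outDegree along its out-edges. The sketch below simulates one such superstep in plain Java – the `PageRankStep` class is hypothetical, not Giraph's real `Computation` API, and it assumes every vertex has at least one out-edge:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical single-superstep PageRank in the vertex-centric style.
public class PageRankStep {
    static final double DAMPING = 0.85;

    // ranks: current rank per vertex id; adj: out-edges per vertex id.
    static Map<Integer, Double> superstep(Map<Integer, Double> ranks,
                                          Map<Integer, List<Integer>> adj) {
        int n = ranks.size();
        // Message phase: each vertex sends rank/outDegree to neighbours.
        Map<Integer, Double> incoming = new HashMap<>();
        for (Map.Entry<Integer, List<Integer>> e : adj.entrySet()) {
            double share = ranks.get(e.getKey()) / e.getValue().size();
            for (int target : e.getValue()) {
                incoming.merge(target, share, Double::sum);
            }
        }
        // Compute phase: new rank from the messages received.
        Map<Integer, Double> next = new HashMap<>();
        for (int v : ranks.keySet()) {
            double sum = incoming.getOrDefault(v, 0.0);
            next.put(v, (1 - DAMPING) / n + DAMPING * sum);
        }
        return next;
    }

    public static void main(String[] args) {
        // Three-vertex cycle 0 -> 1 -> 2 -> 0: uniform ranks are a fixed point.
        Map<Integer, Double> ranks = new HashMap<>(
                Map.of(0, 1.0 / 3, 1, 1.0 / 3, 2, 1.0 / 3));
        Map<Integer, List<Integer>> adj = Map.of(
                0, List.of(1), 1, List.of(2), 2, List.of(0));
        System.out.println(superstep(ranks, adj));
    }
}
```

At Facebook's scale the same logic is sharded: the vertices live in byte-array-serialized partitions spread across workers, and the message phase goes over the network rather than a local map.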

"The largest reported real-world benchmarked problem sizes to our knowledge are the Twitter graph with 1.5 billion edges... and the Yahoo! Altavista graph with 6.6 billion edges; our report of performance and scalability on a 1 trillion edge social graph is 2 orders of magnitude beyond that scale."

Few companies have to deal with graphs with trillions (or even billions) of edges for now, but as technologies like the internet of things are deployed widely and seas of sensors start beaming data into massive data stores, the tech will become increasingly relevant to organizations other than social networks, ad slingers (Google), and ecommerce shops (Amazon). ®
