Presto: Facebook reveals exabyte-scale query engine
'Fast queries over a 250 PETABYTE data warehouse? That's nothing!'
Facebook has revealed a query engine for data warehouses that blows the doors off Hive, and plans to publish it as open source this year.
The "Presto" technology is a query execution engine built for Facebook's vast data warehouse. It was announced on Thursday at a data analytics conference hosted at Facebook's HQ in Menlo Park, California. Presto gets rid of some of the failings of Hive – the Hadoop data-warehouse tool – and highlights how the Hadoop ecosystem is maturing.
"We built Presto from the ground up to deal with FB scale," says Facebook engineer Martin Traverso. "It can handle all the 250PB of data we have in our data warehouse – thousands of machines across multiple global regions."
Presto has demonstrated a four-to-seven-fold improvement in CPU efficiency over Hadoop Hive, and returns query results eight to ten times faster.
"The problem with Hive is that it's designed for batch processing," Traverso said.
Presto does away with the job layer of Hadoop – MapReduce – and instead uses a special-purpose query engine that is ANSI-SQL compatible, with some additional features that Facebook will reveal in the next few months.
This is both to make life easier for Facebook developers and to supercharge the performance of queries over very, very big datasets.
"One of the things Presto can do that MapReduce can't – Presto can start all the stages at once and can stream all the data through the stages," Traverso says.
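The contrast Traverso describes can be sketched in a few lines of Python. This is purely illustrative – generators stand in for query stages, and none of this reflects Presto's actual executor: a MapReduce-style pipeline materializes each stage's full output before the next stage starts, while a Presto-style pipeline keeps every stage running and streams rows through them.

```python
# Toy contrast between batch (MapReduce-style) and streaming
# (Presto-style) query execution. Generators stand in for stages.

def scan():
    # Stage 1: produce rows (here, just the numbers 0-9).
    for row in range(10):
        yield row

def batch_pipeline():
    # MapReduce style: each stage must finish and materialize its
    # entire output before the next stage can begin.
    stage1 = list(scan())               # barrier: wait for every row
    stage2 = [r * 2 for r in stage1]    # another barrier
    return sum(stage2)

def streaming_pipeline():
    # Presto style: all stages start at once, and each row flows
    # scan -> transform -> aggregate with no intermediate materialization.
    return sum(r * 2 for r in scan())

print(batch_pipeline())      # 90
print(streaming_pipeline())  # 90
```

Both pipelines compute the same answer; the difference is that the streaming version never holds a whole intermediate dataset in memory (or on disk) between stages, which is where the latency win comes from on large inputs.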
Before our dear commentards point out that most of Facebook is pictures of cats, updates about bodily functions, nihilistic ramblings, and the pingings of Zynga games feeding e-stims to folk, it bears noting that none of this really matters for designing massive data systems – when you abstract away from the content, you have a set of different things that are deluging your system in data, and you need to deal with them.
And Facebook has more data than most. The company's existing data warehouse is 250PB in size, and growing rapidly: 600TB is added to the warehouse every day.
"As we project our growth, it's quite clear that at some point soon we will reach one exabyte," Ravi Murthy, a Facebook engineering manager, says. "We have to rethink a lot of different things. Not just the software pieces of it, but literally the entire stack."
Most of Facebook's data ends up being stored in the Hadoop Distributed File System, so although some may question why Facebook doesn't just use a SQL DB engine for its queries, the reason is that it needs as few layers of abstraction as possible between it and the underlying HDFS data. For that reason, creating add-ons that interface directly with HDFS, such as Presto, is better for performance than abstracting away.
Since launching at the end of last year, Presto has grown to have 850 internal users per day performing 27,000 queries and fiddling with 320TB of data. Scale aside, the adoption is impressive given Facebook's penchant for a flat organizational structure that means engineers are not forced to use any particular software package – they either do or they don't, and an app's fortunes are tied closely to its adoption. Presto seems to have been given the thumbs up.
The software should be available as open source by the end of this year. ®