Original URL: http://www.theregister.co.uk/2008/12/30/online_architectures/

Social networks talk hidden architectures

Back-stage bytes

By John K Waters

Posted in Software, 30th December 2008 17:02 GMT

Social networks are almost pervasive. Even if you're not actually on one, it's becoming impossible to avoid hearing about them, and it's often the same networks that keep popping up, such as Facebook or MySpace.

While they might be well known, though, the companies tend not to discuss the architectures that underpin their services. Are they running some clever RESTful science behind the scenes, or are they just using a vanilla combo of PHP on lots and lots of Windows or Linux servers?

Facebook, MySpace, Digg and Ning recently shared their trials and tribulations at the QCon conference in San Francisco, California.

Dan Farino, chief systems architect at MySpace.com, said his site started with a very small architecture and scaled out. He focused on monitoring and administration on a Windows network and the challenge of keeping the system running on thousands of servers.

"Yes, we run Windows!" he said. "It's actually a pretty good server platform. IIS is a pretty good web server. Tuned properly, it's going to serve pages; it's not going to crash or tip over when it gets Slashdotted with two requests. It's pretty solid. What isn't solid about Windows is the large-scale management tools."

MySpace relies on 4,500-plus Windows-based web servers. A middle-tier cache has been added, but it's still "basically a bunch of servers from an operational perspective," Farino said.

Data challenge

A key challenge for MySpace was to come up with tools for quickly collecting data when there's a problem, so that those problems could be analyzed and avoided in the future. To collect the operational data needed for proper analysis, Farino developed a custom performance-monitoring system that tracks CPU load, queued requests, requests per second, and similar information in real time across the company's server farm.
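
In outline, that kind of farm-wide collector can be as simple as polling a counters endpoint on every box and flagging outliers. The sketch below is purely illustrative - the host names, the /counters endpoint, the field names, and the thresholds are assumptions for the example, not details of MySpace's actual (Windows-based) tooling.

```python
# Illustrative sketch only: a tiny poller that aggregates per-server counters.
# Hostnames, the /counters endpoint, and field names are invented for the example.
import json
import urllib.request
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["web%04d.example.internal" % i for i in range(1, 4501)]

def fetch_counters(host, timeout=2):
    """Pull a JSON blob of counters (CPU, queued requests, req/sec) from one box."""
    try:
        with urllib.request.urlopen(f"http://{host}/counters", timeout=timeout) as resp:
            return host, json.load(resp)
    except OSError:
        return host, None  # an unreachable box is interesting data too

def poll_farm():
    """Fan out across the farm and flag anything with a deep request queue."""
    with ThreadPoolExecutor(max_workers=200) as pool:
        for host, counters in pool.map(fetch_counters, HOSTS):
            if counters is None:
                print(f"{host}: no response")
            elif counters["requests_queued"] > 100:
                print(f"{host}: {counters['requests_queued']} queued, "
                      f"{counters['requests_per_sec']} req/s, cpu {counters['cpu_pct']}%")

if __name__ == "__main__":
    poll_farm()
```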

Digg.com is the largest content aggregator on the Net, with 3.5 million registered users. Lead architect Joe Stump gave a peek behind the scenes of a system that handles about 15,000 requests per second and reports serving approximately 26 million unique visitors per month.

Stump described the system's innards as "an architecture in transition."

"I call it that because Digg started out as a harebrained idea," Stump said. "It wasn't one of those projects where you start out saying, what happens if in a year I'm doing 300 thousand diggs a day and 15 billion common diggs?"

Digg.com uses MySQL, but on top of that is a library designed to interact with the DBMS in a specialized manner, Stump said. "Scaling is specialization," he added. "You can't just take a commercial product off the shelf, throw it into production and hope it works. The way you normally scale things is to take a few different components, layer something on top of it that has your specialization stuff in it."
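
A minimal sketch of that "layer something on top" idea is a thin library that hides shard routing from the application: callers insert a digg, and the library decides which MySQL instance owns that user's rows. The shard count, hostnames, table layout, and the pymysql dependency below are assumptions for illustration, not a description of Digg's actual library.

```python
# Illustrative shard-routing wrapper, assuming a pymysql client and four shards.
import pymysql

SHARD_HOSTS = ["db-shard-0.example", "db-shard-1.example",
               "db-shard-2.example", "db-shard-3.example"]

def shard_for(user_id: int) -> str:
    """Pick a shard deterministically so a user's rows always live in one place."""
    return SHARD_HOSTS[user_id % len(SHARD_HOSTS)]

def connection_for(user_id: int):
    """Open a connection to whichever MySQL instance owns this user's data."""
    return pymysql.connect(host=shard_for(user_id), user="digg",
                           password="...", database="diggs")

def record_digg(user_id: int, story_id: int) -> None:
    """The write goes to the owning shard; callers never see the routing."""
    conn = connection_for(user_id)
    try:
        with conn.cursor() as cur:
            cur.execute("INSERT INTO diggs (user_id, story_id) VALUES (%s, %s)",
                        (user_id, story_id))
        conn.commit()
    finally:
        conn.close()
```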

Facebook has evolved into a huge social network. It has more than 120 million active users and 10 billion photos, and serves up 50 billion page views per month.

Aditya Agarwal, director of engineering, laid out the system that drives this. Many of Facebook's user-facing pages are run off the LAMP stack - Linux, Apache, MySQL, and PHP. For things that don't work well in that stack, the company writes custom services.

PHP is the company's preferred language, he said, because it's a good language for web apps, has a strong community, and suits rapid iteration. But PHP is tough to scale for large code bases, so the company had to develop customizations and optimizations, among them use of the memcache distributed memory object caching system.

"I love memcache," he said. "It's what makes our site super quick."

MySQL proved to be quick and reliable for user data - Facebook has suffered no data loss since it started using it, he said. But it made the logical migration of data almost impossible. Facebook handled load balancing by creating a large number of logical database instances and spreading them across a number of physical nodes.
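
The point of that logical/physical split is that a user's logical shard never changes, while the map from logical shards to physical machines can be rebalanced at will. A toy illustration of the idea (the counts and hostnames are invented, not Facebook's numbers):

```python
# Sketch of logical shards mapped onto physical database hosts.
N_LOGICAL = 4096  # fixed forever, so a user's logical shard never changes

# The only thing that changes when load is rebalanced or hardware is added.
logical_to_physical = {i: f"dbhost{i % 32:02d}.example" for i in range(N_LOGICAL)}

def logical_shard(user_id: int) -> int:
    return user_id % N_LOGICAL

def physical_host(user_id: int) -> str:
    return logical_to_physical[logical_shard(user_id)]

# Moving logical shard 17 to a new box is a one-entry update plus a data copy,
# not a re-hash of every user in the system.
logical_to_physical[17] = "dbhost99.example"
```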

The Ning and I

After his work on the Mosaic browser and his part in founding browser pioneer Netscape Communications, net whizz-kid Marc Andreessen went on to co-found social network Ning. The company's infrastructure today powers nearly 610,000 independent social networks, the T. Boone Pickens alternative energy site among them. Pickens drew national attention through his ads during the recent US presidential election. "It's like having 600,000 different instances of front and back end, all working together," said Jay Parikh, senior vice president of product engineering.

The site's architecture lets developers get down to the code level to modify their networks: they can write their own code and run it in the Ning back end or its execution environment. And all the features built into the social network layer are exposed as platform APIs, he said.

Ning uses Java for its back-end systems and PHP for the front end. Ruby, Python, Solaris, and Windows also have roles in the Ning system. The apps themselves call hundreds of different REST services, with a discovery layer that lets services find the dependencies they need, plus a dynamic content store. Among other security measures, Ning has developed custom tools to give each social network its own sandbox within the PHP environment.
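
The role a discovery layer plays is easiest to see in miniature: services register where they live, and callers resolve a dependency by name instead of hard-coding hosts. The toy registry below is a generic illustration of that idea, not Ning's actual system.

```python
# Generic service-registry sketch: register instances by name, resolve on demand.
class ServiceRegistry:
    def __init__(self):
        self._endpoints: dict[str, list[str]] = {}

    def register(self, name: str, endpoint: str) -> None:
        """A service instance announces itself under a well-known name."""
        self._endpoints.setdefault(name, []).append(endpoint)

    def lookup(self, name: str) -> str:
        """A caller resolves a dependency at request time (naive round-robin here)."""
        instances = self._endpoints.get(name)
        if not instances:
            raise LookupError(f"no instances registered for {name!r}")
        instances.append(instances.pop(0))  # rotate through the registered instances
        return instances[-1]

registry = ServiceRegistry()
registry.register("content-store", "http://10.0.0.5:8080")
registry.register("content-store", "http://10.0.0.6:8080")
print(registry.lookup("content-store"))
```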

Dan Pritchett, chief platform architect at e-business social network Rearden Commerce and a former technical fellow at eBay, summed up the challenge they all face in his own QCon talk. "Society has been changed dramatically by the internet," he said. "We're now able to interact with people on a scale that we never could before.

"But the concept of social networks and interconnections between individuals doesn't really scale very well. We've always treated users as individuals, rather than parts of a community. As we change that point of view, we get more and more interesting challenges."®