Web servers get 'leccy bill
SPECweb2009 fights the power
The Standard Performance Evaluation Corporation, or SPEC for short, has been providing benchmarks for PCs and servers for more than two decades, and in the past year, it has been adding power components to its benchmark suites. SPEC and server players AMD, Fujitsu, Hewlett-Packard, IBM, Intel, and Sun Microsystems have got together and created a new power-aware web serving benchmark called SPECweb2009.
This new test is a companion to the SPECpower_ssj2008 test that debuted in December 2007, and it's probably going to be the workload that is used to measure servers that get the Energy Star for servers seal of approval from the U.S. Environmental Protection Agency.
The SPECpower_ssj2008 test is meant to emulate a typical business-class Java application stack, and it exercises processors, cache, memory, and processor scalability in multiprocessor systems. Tweaks to the Java stack and the operating system can also help boost performance on the test, but this is true of all benchmarks.
The SPECweb2009 test, by contrast, is designed to emulate Web server performance, and it actually comprises three different workloads: an online banking application with SSL encryption, an e-commerce online store with a mix of encrypted and unencrypted transactions, and a tech support site with lots of downloads not using SSL encryption. This is the same set of applications used in the SPECweb2005 benchmark, but the addition of power measurements changes the nature of the test, so results are not comparable. SPECweb2009 also allows either Java or PHP to be the language used on the Web application server.
Both SPECweb2005 and SPECweb2009 run all three workloads in sequence on a box, but as is the case with the SPECpower_ssj2008 test, SPECweb2009 runs at different system loads - from the peak number of sessions (100 per cent capacity) down to idle (0 per cent, but still burning electricity just sitting there) in increments of 20 per cent of the peak sessions - and measures the power consumed and throughput at each loading. The final rating on SPECweb2009 can be either peak throughput (the average of the banking, e-commerce, and support workloads) or a power metric, calculated by dividing the sum of the session counts on the e-commerce workload across all the load bands by the sum of the watts consumed in those bands.
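Those two ratings boil down to simple arithmetic. Here is a minimal sketch in Python; the function names are illustrative, not part of any SPEC toolkit:

```python
# Sketch of the two SPECweb2009 ratings as described above.
# Names are illustrative - they are not from the SPEC harness.

def peak_rating(banking, ecommerce, support):
    """Peak metric: the average of the peak simultaneous session
    counts from the banking, e-commerce, and support workloads."""
    return (banking + ecommerce + support) / 3

def power_rating(sessions_per_band, watts_per_band):
    """Power metric: total e-commerce sessions summed across the
    load bands (100 per cent down to idle, in 20 per cent steps),
    divided by the total watts consumed across those same bands."""
    return sum(sessions_per_band) / sum(watts_per_band)
```

Note that idle contributes zero sessions but non-zero watts to the power metric, so a server that sips less electricity while doing nothing scores better.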
Let me give you an example so this makes more sense. Take the Fujitsu Primergy TX150 S6 server, which is a single-socket Intel box using a quad-core L3360 processor. Fujitsu configured this entry tower server with 8 GB of memory, six 146 GB 10K RPM drives, and Red Hat Enterprise Linux 5.3 with the ext2 file system and with Accoria Network's Rock Web Server 1.4.7. This puppy could handle a maximum of 23,100 sessions on the banking application while burning 188 watts at the system level; 32,300 sessions on the e-commerce application burning 185 watts; and 14,100 sessions on the support application burning 176 watts. (See, power consumed really is dependent on the workload.) So the official SPECweb2009_JSP_Peak rating for this box is the average of those three numbers, or 23,167 users at 183 watts.
Now, for the official SPECweb2009 power rating, you drill down into the e-commerce test. At the peak of 32,300 users, the Fujitsu server consumed 186 watts, but the machine burned just 117 watts sitting there with an idle operating system and middleware stack. At 20 per cent of peak (6,460 users), the machine burned 144 watts, and every additional 6,460 users added another 10 watts or so until it went a little wiggly above 60 per cent of load. The end result is a SPECweb2009_JSP_Power rating of 103 users per watt.
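As a sanity check on that 103 figure, the band arithmetic can be reproduced in a few lines of Python. The idle (117 W), 20 per cent (144 W), and peak readings come from the article; the middle-band wattages here are estimates extrapolated from its "another 10 watts or so" per band, so treat the result as approximate:

```python
# Back-of-the-envelope check of the Fujitsu e-commerce power rating.
# Sessions run from 0 to 100 per cent of the 32,300-user peak in
# 20 per cent steps. The 117 W idle, 144 W, and 186 W peak figures
# are from the article; 154, 164, and 174 W are estimates based on
# its "another 10 watts or so" per load band.
peak_users = 32_300
sessions = [peak_users * pct // 100 for pct in (0, 20, 40, 60, 80, 100)]
watts = [117, 144, 154, 164, 174, 186]  # middle three are estimated

rating = sum(sessions) / sum(watts)
print(round(rating))  # prints 103 with these assumed wattages
```

Summing 96,900 sessions over roughly 939 watts lands right on the published 103 users per watt.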
The only other machine tested using the new web serving benchmark so far is an HP ProLiant DL370 G6 rack server using two top-end Intel W5580 "Nehalem EP" processors (that's eight cores in total) with 96 GB of memory plus 29 15K RPM disks, all but two of them in external arrays. (The SPECweb2009 test has to measure the power used by external disk arrays, so there's no cheating there). This machine used the same software stack chosen by Fujitsu above.
While this two-socket Nehalem EP box from HP could do more work - it had a SPECweb2009_JSP_Peak of 95,634 users - it took an average of 725 watts of wall power to support that peak performance on the three workloads (this box idled at 496 watts). Still, the HP posted almost the same performance-to-power ratio on the e-commerce test, and it came out with the same SPECweb2009_JSP_Power rating of 103 users per watt. ®