Oracle claims 70X speed-up with MySQL Cluster 7.2
New cluster capable of a billion queries a minute
Oracle has released the latest GPL update to MySQL Cluster, promising huge speed boosts, better support for web users, and new NoSQL integration.
MySQL Cluster 7.2 will be able to process a billion queries per minute and 110 million updates per minute, Oracle claims, giving users a 70X increase in performance on complex queries. The company also claims 99.999 per cent availability.
For those web businesses fond of NoSQL, the new cluster offers optional integration via a new memcached API for faster reading and writing. This allows complex queries via MySQL to be combined with the speed and flexibility of NoSQL, Oracle says. Short-term data can be stored and fetched through memcached, while data that needs richer querying is handled through SQL.
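The idea is one data store reachable two ways: fast key-value reads and writes through the memcached interface, and SQL for complex queries over the same rows. A minimal Python sketch of that access pattern follows; the `DualAccessStore` class and its in-memory dict are illustrative stand-ins, not the actual NDB memcached plugin.

```python
# Illustrative sketch of the dual-access pattern MySQL Cluster 7.2 enables:
# the same data reachable both memcached-style (by key) and SQL-style (by query).
# This in-memory class is a toy stand-in, not the real NDB memcached plugin.

class DualAccessStore:
    """Toy model: one table, two access paths."""

    def __init__(self):
        self.rows = {}  # primary key -> row dict

    # --- memcached-style path: fast get/set by primary key ---
    def set(self, key, row):
        self.rows[key] = row

    def get(self, key):
        return self.rows.get(key)

    # --- SQL-style path: scans and filters over the same rows ---
    def select_where(self, predicate):
        return [row for row in self.rows.values() if predicate(row)]


store = DualAccessStore()
store.set("user:1", {"id": 1, "name": "alice", "visits": 12})
store.set("user:2", {"id": 2, "name": "bob", "visits": 3})

fast_read = store.get("user:1")                            # key-value lookup
heavy_users = store.select_where(lambda r: r["visits"] > 5)  # query-style read
```

Both paths operate on the same rows, so an update through one is immediately visible through the other, which is the point of combining the two interfaces.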
"MySQL Cluster 7.2 demonstrates Oracle's investment in further strengthening MySQL's position as the leading Web database," said Tomas Ulin, vice president of MySQL Engineering at Oracle in a statement. "The performance and flexibility enhancements in MySQL Cluster 7.2 provide users with a solid foundation for their mission-critical Web workloads, blending the best of SQL and NoSQL technologies to reduce risk, cost and complexity."
For admins, MySQL Cluster Manager version 1.1.4 has also been pushed out, giving IT managers more options in how their databases are managed and allowing greater automation of common tasks. ®
Re: Re: Re: Erm...
The 70x join improvement and the millions of transactions per second are separate gains. The 70x figure comes from executing joins in parallel, closer to the data, minimising data transfer; it was observed by running the same join queries on the same hardware and software with the new 'AQL' functionality off, then on. The '1 billion qpm' headline throughput comes from increased multithreading within the system processes. You are correct that the previous results were run on different hardware, so it is hard to say how much the software changes alone have brought to the table.
The '1 billion qpm' benchmark was executed against in-memory tables, so disk I/O was not a factor in reaching the throughput. The transactions are primary key reads, each retrieving a row of 25 integer columns, i.e. a read of ~100 bytes of actual data. No joins are occurring here. The queries are similar, but the results are not cached. So effectively this is a billion random 100-byte reads per minute, or roughly 17.6 million per second. The data is distributed across 8 different machines. The flexAsynch benchmark used is described in more detail here: http://dev.mysql.com/downloads/benchmarks.html and here: http://mikaelronstrom.blogspot.com/2012/02/105bn-qpm-using-mysql-cluster-72.html
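The per-second figure follows from simple division. A quick back-of-envelope check, taking the ~1.05 billion qpm figure from the linked benchmark post (the per-machine split and bandwidth lines are my own arithmetic, not numbers from the benchmark):

```python
# Back-of-envelope check of the flexAsynch throughput figures quoted above.
queries_per_minute = 1.05e9  # ~1.05 billion qpm, per the linked benchmark post
bytes_per_read = 100         # row of 25 integer columns, ~100 bytes
machines = 8                 # data is spread across 8 machines

queries_per_second = queries_per_minute / 60          # 17.5 million reads/s
per_machine_qps = queries_per_second / machines       # ~2.2 million reads/s each
aggregate_mb_per_s = queries_per_second * bytes_per_read / 1e6  # data volume moved
```

So the headline number works out to roughly 17.5 million random 100-byte reads per second, or about 2.2 million per second per machine, which matches the 17.6 million/s figure quoted in the comment to within rounding.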
You are correct to be suspicious of any vendor benchmark's real world applicability. The best that can be said is that if each vendor gets the maximum from their systems then those results might be comparable.
Trying to play catch up with Postgres which is, of course, threatening Oracle's main breadwinner: Oracle itself.
No - the 70x comes from a feature called Adaptive Query Localization, which pushes JOIN operations down to the distributed data nodes, where they are executed in parallel on local copies of the data and then return a single result set to the application.
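The win is that only the (small) joined result set crosses the network, instead of whole tables being shipped to a central node and joined there. A conceptual Python sketch of that pattern, with two toy "data nodes" holding co-partitioned tables; the data layout and function names are illustrative, not the NDB internals:

```python
# Conceptual sketch of join pushdown (the idea behind Adaptive Query
# Localization): each data node joins its own partitions locally and in
# parallel, and only the small joined result sets travel back to be merged.

from concurrent.futures import ThreadPoolExecutor

# Two tables, co-partitioned by user id across two toy "data nodes".
node_partitions = [
    {  # node 0
        "users":  {2: "bob"},
        "orders": [(2, "book")],
    },
    {  # node 1
        "users":  {1: "alice", 3: "carol"},
        "orders": [(1, "pen"), (1, "mug")],
    },
]

def local_join(partition):
    """Join users with orders using only this node's local data."""
    return [(partition["users"][uid], item)
            for uid, item in partition["orders"]]

# Run the per-node joins in parallel, close to the data, then merge the
# already-joined rows into a single result set for the application.
with ThreadPoolExecutor() as pool:
    results = [row for part in pool.map(local_join, node_partitions)
               for row in part]
```

Because the tables are co-partitioned on the join key, each node can complete its share of the join without talking to the others, which is what makes the parallel, data-local execution possible.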
The article posted here gives more detail, including information about the benchmarks and the Memcached integration: