Hazelcast signs Java speed king to its in-memory data-grid crew
Ex-Ehcache man takes CTO crown as VC cash pours in
In-memory data-grid specialist Hazelcast has landed the guru behind Java caching framework Ehcache as its chief technology officer.
Greg Luck is joining Hazelcast to refine its in-memory data-grid product for enterprises and to develop paid-for packages.
Luck is best known for leading Ehcache, the most widely used Java caching framework in the business, with 2.5 million deployments.
Luck sold the intellectual property rights he held in Ehcache to in-memory start-up Terracotta in 2009, which was bought by Software AG in 2011. He told The Reg that Ehcache would continue to develop under Terracotta and Software AG without him.
His recruitment by Hazelcast comes as the five-year-old company seems to be ramping up.
Hazelcast announced its $2.5m A-round of VC cash in September, and said SpringSource daddy Rod Johnson had joined its board. The company expects to take another round of venture funding by the end of this year, it told The Reg.
The company claims Global 2000 telcos, banks and tech companies are using its open-source caching framework for a variety of mission-critical apps, including mobile messaging and high-frequency trading.
Hazelcast said it’s going after Oracle’s Coherence architecture in new deployments.
Luck told The Reg his priorities at Hazelcast are caching and operational storage: working out which features the company can add to its free, open-source version, and which should come ready to install out of the box as packages enterprises are willing to pay for.
He said he wants to work on web sessions, management centres, cross-platform clients and WAN replication.
Hazelcast believes caching should be used as a platform for applications to be built on, not simply as a prop to speed up data access. The pitch is that with Hazelcast you can lash together hundreds of nodes into pools of hundreds of gigabytes of memory with nanosecond access.
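That "platform" idea rests on Hazelcast's core abstraction: a cluster-wide implementation of the standard `java.util.Map`. A minimal sketch using Hazelcast's public API follows, assuming the hazelcast jar is on the classpath; the map name `"sessions"` and key are illustrative, not from the article.

```java
// Sketch of Hazelcast's distributed map, assuming the hazelcast jar
// is available. Each newHazelcastInstance() call starts a cluster
// node; nodes discover each other and pool their heap into one
// shared, partitioned map.
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.Map;

public class GridSketch {
    public static void main(String[] args) {
        // Two nodes in one JVM for illustration; in production each
        // would run on its own machine.
        HazelcastInstance node1 = Hazelcast.newHazelcastInstance();
        HazelcastInstance node2 = Hazelcast.newHazelcastInstance();

        // An entry written through one node is readable from any other,
        // because the data lives in the cluster, not in either JVM alone.
        Map<String, String> sessions = node1.getMap("sessions");
        sessions.put("user-42", "logged-in");

        Map<String, String> sameMap = node2.getMap("sessions");
        System.out.println(sameMap.get("user-42"));

        Hazelcast.shutdownAll();
    }
}
```

Because the grid presents itself as an ordinary `Map`, application code can treat the pooled memory of the whole cluster as its primary operational store rather than as a cache in front of a database.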
Hazelcast vice president of marketing and developer relations Miko Matsumura said: “People understand that if you are holding a very large transactional memory space across different machines and processors, you gain the ability to build running transactions on top of an operational stream.
“The old way of using this is that the database is king. We are making things faster but the future is that the application will run on top of the stream and that distributed computation happens on the data in real time... that’s where we feel the industry should go.” ®