Startup Levyx is developing a storage-memory appliance to process massive Big Data sets in real time, with flash help from Toshiba and its reviving OCZ business.
Levyx was founded in 2013 by CEO Reza Sadri, previously sTec’s chief technical officer, and Tony Givargis, a computer science professor at the University of California, Irvine, the Orange County city that is also home to Levyx’s HQ. The two say they want to fundamentally disrupt the economics of Big Data processing.
Levyx says current real-time Big Data analytics systems use scale-out clusters with lots of costly RAM to process large data sets. Instead, we're told, it is better to use a combination of RAM and cheaper flash pushed into the memory hierarchy. Levyx says fewer compute nodes will be needed with its approach, and that its technology is applicable to multi-terabyte or larger data sets holding billions to hundreds of billions of data items.
To complete the back story, Western Digital bought sTec for $340m in mid-2013.
It’s not just about flash hardware, Sadri says: “Our ultra-low latency software paired with high-performance SSDs represent a better and more cost-effective alternative to traditional scale-out architectures that rely heavily on DRAM-constrained systems.”
Levyx says its technology is able to deliver in-memory performance for a fraction of the investment in DRAM and data centre infrastructure used by current approaches.
Its key:value store technology features the Helium data engine, which, it says:
- Has ultra-low latency (measured in microseconds)
- Can balance system/network resources in ways that fully optimise commodity servers
- Is designed to take advantage of the latest multi-core CPUs
- Can achieve an unprecedented combination of performance, cost and scalability
- Is an alternative to conventional in-memory implementations that can cost 15x-20x more
- Is highly scalable and able to run on any hardware and on any OS, in any Big Data environment.
In a blog post, Sadri wrote:
Current data access platforms are not built to scale well as the number of CPU cores increase, following Moore’s law. In addition, current data access methods, and even basic algorithm and data structures, are designed for accommodating the vast speed difference between the solid-state based components in the computer (CPU and memory) with the mechanical disks (i.e. HDDs). The emergence of multi-core systems and SSDs changes many of these underlying assumptions of how efficiently information can pass through the data path.
He continued: “Using a set of algorithms and data structures that are crafted to exploit the latest hardware technologies and trends, we have built a data engine that has in-memory like performance but uses SSDs.”
Levyx says its Helium API enables an application to store and retrieve data seamlessly from local storage or a storage device on a remote host. Most of its data is stored in flash, and its data structures are lock-free and can scale across many cores. The software layer abstracts the storage technology away from existing applications, on any OS and hardware platform.
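Levyx has not published the Helium API, so the following is only a toy sketch of the general idea described above: a key:value engine that keeps hot data in DRAM while persisting everything to a cheaper storage tier, behind a single put/get interface. All class and method names here are hypothetical illustrations, not Levyx's actual interface.

```python
import os
import pickle
import tempfile

class HeliumLikeStore:
    """Toy key:value store illustrating a DRAM-plus-flash split.

    Hot entries live in a RAM dict; every entry is also persisted to a
    local file standing in for an SSD volume. Hypothetical design, not
    Levyx's implementation.
    """

    def __init__(self, path):
        self.path = path
        self.cache = {}                      # DRAM tier (hot data)
        if os.path.exists(path):
            with open(path, "rb") as f:
                self.disk = pickle.load(f)   # "flash" tier, loaded lazily
        else:
            self.disk = {}

    def put(self, key, value):
        self.cache[key] = value              # write to DRAM tier
        self.disk[key] = value
        with open(self.path, "wb") as f:     # persist to the flash tier
            pickle.dump(self.disk, f)

    def get(self, key):
        if key in self.cache:                # DRAM hit: fastest path
            return self.cache[key]
        value = self.disk[key]               # fall back to flash tier
        self.cache[key] = value              # promote hot key to DRAM
        return value

# Usage: write a record, then read it back through a fresh instance,
# simulating a cold DRAM tier served entirely from the flash tier.
path = os.path.join(tempfile.mkdtemp(), "helium.db")
store = HeliumLikeStore(path)
store.put(b"user:42", {"name": "Ada"})
cold = HeliumLikeStore(path)                 # empty DRAM cache
print(cold.get(b"user:42"))                  # served from persisted tier
```

In a real engine of this kind the flash tier would of course be an optimised, lock-free on-SSD structure rather than a pickled file; the sketch only shows the application-facing shape of such an API.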
There is obviously much more information to come; this is a very early-stage startup, with only an undisclosed amount of seed funding to get it going.