
GridIron jiggles MLC flash box, penetrates million IOPS barrier

Won't blab on sordid details...

A start-up called GridIron says its MLC flash technology has broken through the million IOPS barrier, leaving behind existing products that max out at 800,000 IOPS.

Multi-level cell (MLC) flash is cheaper and slower than single-level cell (SLC) NAND. Some SLC flash array products reach the million IOPS mark: the TMS RamSan-630 and Violin Memory's 6616, for example. No MLC products do; single-enclosure MLC products range from 120,000 IOPS (GreenBytes Solidarity) to 450,000 IOPS (TMS RamSan-820) and on to 800,000 IOPS (Nimbus Data E-class).

GridIron says it has reached the million IOPS level by not treating its MLC flash as a solid state drive (SSD) replacement for hard disk drives. Its current TurboCharger product is a flash and DRAM-based SAN accelerator box.

Exactly how it is tweaking its MLC flash box to reach the million IOPS mark is not disclosed. We are told: "Rather than handling Flash media as a substitute for hard disks, GridIron is designing solutions that are built around Flash’s special characteristics and capabilities. For instance, GridIron accesses and configures the Flash media to maximise performance while minimising or completely eliminating issues such as wear, performance degradation or the processing and bandwidth limitations of storage controllers."

The beauty of a fast MLC array, from GridIron's point of view, is that it can store twice as much data as an SLC array with the same number of NAND cells – assuming GridIron is using 2-bit MLC – for less than twice the cost, which the company says is good for the big data market it targets. Kaminario's K2-F MLC flash array stores up to 100TB and is rated at 600,000 random read IOPS. We can imagine a potential big data-oriented GridIron array with similar capacity running at 1,000,000-plus IOPS.
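The cost-per-bit arithmetic behind that claim is straightforward. Here is a back-of-the-envelope Python sketch – the die prices are entirely hypothetical placeholders of our own invention, not GridIron or market figures:

```python
# Back-of-the-envelope cost-per-bit arithmetic. The die prices are
# hypothetical placeholders, not GridIron or market figures.
slc_price = 100.0                      # assumed price of an SLC die
mlc_price = 150.0                      # assumed 2-bit MLC die: under 2x SLC
bits_per_cell = {"slc": 1, "mlc": 2}   # same cell count per die

slc_cost_per_bit = slc_price / bits_per_cell["slc"]   # 100.0
mlc_cost_per_bit = mlc_price / bits_per_cell["mlc"]   # 75.0
print(mlc_cost_per_bit / slc_cost_per_bit)            # 0.75: 25% cheaper per bit
```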

GridIron says its caching algorithms, as used in the TurboCharger product, are better than those of competing products:

Most cache approaches are based on data behaviour for just the data being accessed and then only during the current access. The TurboCharger uses many days of history of the entire data set to decide how to best manage each data access.

Most caches focus on loading the most frequently accessed data. While that may be useful, GridIron uses a much more powerful concept – load data into cache that will most improve the performance of the application.

The TurboCharger determines importance by measuring how much the aggregate I/O bandwidth requested by the application increases as a result of reducing the access time. If improving the access time doesn’t increase the rate at which the application requests data, then that data doesn’t need to be in a high-performance tier. On the other hand, individual data items such as indirection pointers are often the most critical to application performance. They are bottlenecks to loading other data and thus may be very critical even though they are accessed infrequently.

The historical importance metadata is used in a feedback loop to ensure that the most important data is retained in the caches whenever it could be used.

It would be good to have some benchmark data to back up these claims.
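In the absence of disclosed detail, the concept is at least easy to sketch. The Python toy below is purely illustrative – it is not GridIron's code, and the class, block names and throughput-delta metric are all our own assumptions – but it shows cache admission driven by measured application benefit rather than by access frequency:

```python
# Purely illustrative sketch of importance-driven cache admission.
# This is NOT GridIron's algorithm, which has not been disclosed;
# all names and the throughput-delta metric are hypothetical.
from collections import defaultdict

class ImportanceCache:
    def __init__(self, capacity):
        self.capacity = capacity          # max number of blocks held
        self.cache = set()                # blocks currently cached
        self.gain = defaultdict(float)    # per-block throughput-gain history

    def record_access(self, block, throughput_delta):
        # Exponentially weighted history, echoing the "many days of
        # history" idea: old observations fade but are not forgotten.
        self.gain[block] = 0.9 * self.gain[block] + 0.1 * throughput_delta

    def admit(self, block):
        # Cache the block if its measured importance beats the weakest
        # resident block; evict that weakest block if the cache is full.
        if block in self.cache:
            return
        if len(self.cache) < self.capacity:
            self.cache.add(block)
            return
        weakest = min(self.cache, key=lambda b: self.gain[b])
        if self.gain[block] > self.gain[weakest]:
            self.cache.discard(weakest)
            self.cache.add(block)

cache = ImportanceCache(capacity=1)
# An indirection pointer read rarely, but unblocking lots of other I/O...
cache.record_access("index_pointer", throughput_delta=50.0)
# ...versus a block read often but with little knock-on effect.
cache.record_access("hot_block", throughput_delta=2.0)
cache.admit("hot_block")
cache.admit("index_pointer")
print(cache.cache)  # {'index_pointer'}
```

In this toy, a rarely read indirection pointer with a big knock-on throughput gain outranks a frequently read but low-impact data block – the behaviour GridIron describes.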

GridIron says its caching accelerator box can scale by adding so-called booster packs or by clustering several boxes together using a 10GbE cluster interconnect. Its TurboCharger data sheet (registration required) does not say what the capacity of its GT-1100 product is, coyly remarking only that it can accelerate a back-end database of up to 64TB in size.

GridIron says it plans "to announce a new type of big data acceleration product" that incorporates its million-IOPS MLC flash performance technology in the first half of this year. ®
