GridIron jiggles MLC flash box, penetrates million IOPS barrier

Won't blab on sordid details...

A start-up called GridIron says its MLC flash technology has broken through the million IOPS barrier, leaving behind existing products that max out at 800,000 IOPS.

Multi-level cell (MLC) flash is cheaper and slower than single-level cell (SLC) NAND. Some SLC flash array products reach the million IOPS mark: the TMS RamSan-630 and Violin Memory's 6616, for example. No MLC products do; single-enclosure MLC products range from 120,000 IOPS (GreenBytes Solidarity) to 450,000 (TMS RamSan-820) and on to 800,000 IOPS (Nimbus Data E-class).

GridIron says it has reached the million IOPS level by not treating its MLC flash as a solid state drive (SSD) replacement for hard disk drives. Its current TurboCharger product is a flash and DRAM-based SAN accelerator box.

Exactly how it is tweaking its MLC flash box to reach the million IOPS mark is not disclosed. We are told: "Rather than handling Flash media as a substitute for hard disks, GridIron is designing solutions that are built around Flash’s special characteristics and capabilities. For instance, GridIron accesses and configures the Flash media to maximise performance while minimising or completely eliminating issues such as wear, performance degradation or the processing and bandwidth limitations of storage controllers."

The beauty of a fast MLC array, from GridIron's point of view, is that it can store twice as much data as an SLC array with the same number of NAND cells – assuming GridIron is using 2-bit MLC – for less than twice the cost. The company says this suits the big data market, which is where it targets its products. Kaminario's K2-F MLC flash array stores up to 100TB and is rated at 600,000 random read IOPS. We can imagine a potential big data-oriented GridIron array with a similar capacity running at 1,000,000-plus IOPS.
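The capacity arithmetic behind that claim is simple enough to sketch. Assuming 2-bit MLC, the same cell count yields twice the usable capacity; the prices below are purely illustrative assumptions, not vendor figures:

```python
# Illustrative only: compare per-GB cost of SLC vs 2-bit MLC arrays built
# from the same number of NAND cells. Prices are assumed, not vendor data.

CELLS = 1_000_000_000          # identical cell count in both arrays
BITS_PER_CELL_SLC = 1
BITS_PER_CELL_MLC = 2          # 2-bit MLC stores twice as much per cell

capacity_slc = CELLS * BITS_PER_CELL_SLC / 8 / 1e9   # capacity in GB
capacity_mlc = CELLS * BITS_PER_CELL_MLC / 8 / 1e9   # capacity in GB

cost_slc = 100.0               # assumed cost units for the SLC array
cost_mlc = 160.0               # assumed: "less than twice" the SLC cost

print(capacity_mlc / capacity_slc)            # 2.0 – twice the data
print((cost_mlc / capacity_mlc) /
      (cost_slc / capacity_slc))              # 0.8 – lower cost per GB
```

So long as the MLC array costs less than double its SLC equivalent, cost per gigabyte falls, which is the whole pitch.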

GridIron says its caching algorithms, as used in the TurboCharger product, are better than those in competing products:

Most cache approaches are based on data behaviour for just the data being accessed and then only during the current access. The TurboCharger uses many days of history of the entire data set to decide how to best manage each data access.

Most caches focus on loading the most frequently accessed data. While that may be useful, GridIron uses a much more powerful concept – load data into cache that will most improve the performance of the application.

The TurboCharger determines importance by measuring how much the aggregate I/O bandwidth requested by the application increases as a result of reducing the access time. If improving the access time doesn't increase the rate at which the application requests data, then that data doesn't need to be in a high performance tier. On the other hand, individual data items such as indirection pointers are often the most critical to application performance. They are bottlenecks to loading other data and thus may be very critical even though they are accessed infrequently.

The historical importance metadata is used in a feedback loop to ensure that the most important data is retained in the caches whenever it could be used.
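GridIron does not disclose its actual algorithm, but the idea it describes – score blocks by the measured throughput gain from caching them, with decayed history as the feedback loop – can be sketched generically. All names and numbers below are hypothetical:

```python
# Generic sketch of importance-weighted caching, NOT GridIron's
# undisclosed algorithm. Blocks are scored not by access frequency but
# by the observed increase in application I/O throughput when cached.

import heapq

class ImportanceCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.scores = {}       # block_id -> historical throughput gain
        self.resident = set()  # block_ids currently held in cache

    def record_gain(self, block_id, gain):
        """Feedback loop: fold the observed throughput gain (e.g. MB/s)
        from caching this block into its decayed historical score."""
        old = self.scores.get(block_id, 0.0)
        self.scores[block_id] = 0.9 * old + 0.1 * gain

    def rebuild(self):
        """Retain the blocks whose caching most improves throughput."""
        top = heapq.nlargest(self.capacity, self.scores,
                             key=self.scores.get)
        self.resident = set(top)

cache = ImportanceCache(capacity=2)
# An indirection pointer accessed rarely but gating many reads
# scores high, because caching it unblocks lots of other I/O...
cache.record_gain("index_root", gain=500.0)
# ...while hot data whose latency doesn't limit the app scores low.
cache.record_gain("hot_but_cpu_bound", gain=10.0)
cache.record_gain("table_scan_block", gain=120.0)
cache.rebuild()
print(sorted(cache.resident))   # ['index_root', 'table_scan_block']
```

Note how the infrequently accessed `index_root` beats the frequently accessed but unhelpful block – the opposite of what a plain LFU cache would keep.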

It would be good to have some benchmark data to back up these claims.

GridIron says its caching accelerator box can scale by adding so-called booster packs or by clustering several boxes together using a 10GbE cluster interconnect. Its TurboCharger data sheet (registration required) does not say what the capacity of its GT-1100 product is, coyly remarking that it can accelerate a back-end database of up to 64TB in size.

GridIron says it plans "to announce a new type of big data acceleration product" that incorporates its million-IOPS MLC flash performance technology in the first half of this year. ®
