SSDs in the enterprise: It's about more than just speed

They're fast, sure, but they have so much more to offer

Enterprise solid state drives are gaining traction, but the predominant buying case is still performance. The need for speed has driven SSDs into applications where HDDs previously reigned, but for those of us who aren’t high-frequency traders, solid state will need to demonstrate some other benefits. What are they, and how important will they be in mainstream enterprise apps?

The traditional focal point for SSD has been in applications where the financial gain from performance increases is clear, says Tom Coughlin, founder of data storage consulting firm Tom Coughlin Associates. High-frequency trading is a good example.

“High performance enterprise applications like databases, OLTP, etc. are the first applications to move to all-SSD, or perhaps SSD with other non-volatile memory technologies (such as 3D XPoint or RRAM),” he says. “The reason why these will be first is that for these applications, time is money, so the payback is immediate.”

SSDs aren’t just a high-performance technology anymore, though, according to Frank Reichart, senior director product marketing for Fujitsu’s storage operation.

He believes that IT purchasers are beginning to take a more rounded view of SSD that takes more than pure speed into account. “Besides performance (response time and IOPS) and storage agility we see more and more the TCO aspect as the main motivation for ‘all-flash’ deployments as general purpose storage (for almost any productive workload),” he says. Reichart outlines several areas in which he believes SSD can help to cut ownership costs for IT departments, including data centre space, power management, and server utilization.

Hyperconvergence is one area where SSDs stand to gain particular traction, say experts. We recently saw SimpliVity offering all-flash hyperconverged boxes, two years after Nutanix first rolled them out. Its rationale was that the market price was right, following a marked slump in NAND flash pricing over the last 18 months. Part of the appeal comes down to space, says Frank Berry, founder and senior analyst at market research firm IT Brand Pulse, who says that SSDs provide higher performance in far smaller packages.

“One server-sized package can replace a rack of HDD shelves. I expect as hyperconvergence grows, it will increase the percentage of server-based SSDs and PCI SSDs versus SAN SSD,” he says.

Space considerations

The space advantage for SSDs doesn’t necessarily come down to their form factor. The memory component itself may be smaller, but both SSDs and HDDs adhere to standard physical sizing when mounted in the data centre. What really makes a difference, according to Alan Niebel, president of non-volatile memory and storage semiconductor market research company Webfeet Research, is the number of each kind of device that you need to achieve the same input/output speeds.

Because hard drives are limited by mechanical factors, including spindle speed and seek time, the number of input/output operations per second (IOPS) they can sustain is relatively low. Niebel puts a 15,000 rpm HDD at about 180 IOPS; an SSD, he says, could achieve around 200,000. These figures will vary depending on the type of SSD (a DRAM replacement versus a bulk storage SSD, say), but for a mid-range SSD designed for enterprise applications the numbers aren’t unrealistic, and it’s the order-of-magnitude difference that matters. Storage admins typically aggregate drives into a logical unit number (LUN) to deliver the IOPS they need. Let’s conservatively assume our SSD pushes out only 30,000 IOPS.

“In order to put enough hard drives to get to 30,000 IOPS you have to put 300 or more of these hard drives in a server that needs its own power supply, its own cooling, and obviously a processor as well, and so it takes up a tremendous amount of floor space in the data centre,” Niebel points out.
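The arithmetic behind that claim can be sketched in a few lines. The figures below are the illustrative ones from the article, not vendor specs; note that Niebel’s 300-drive figure implies an assumption of roughly 100 IOPS per hard drive rather than the 180 quoted earlier.

```python
# Back-of-the-envelope LUN sizing: how many drives must be aggregated
# to hit a target IOPS figure. Numbers are illustrative, from the article.
import math

def drives_needed(target_iops, iops_per_drive):
    """Drives to aggregate (e.g. into a LUN) to sustain target_iops."""
    return math.ceil(target_iops / iops_per_drive)

TARGET = 30_000        # the article's conservative target
HDD_IOPS = 180         # ~15,000 rpm hard drive, per Niebel
SSD_IOPS = 200_000     # mid-range enterprise SSD, per Niebel

print(drives_needed(TARGET, HDD_IOPS))   # 167 hard drives
print(drives_needed(TARGET, SSD_IOPS))   # 1 SSD
print(drives_needed(TARGET, 100))        # 300 drives, matching the quote
```

Either way the gap is two orders of magnitude, which is the point: one SSD replaces a wall of spindles, power supplies and cooling.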

Higher capacity, lower power

SSDs can also be used to increase effective storage capacity in a data centre, because they make deduplication practical at the primary storage level. Storing and retrieving deduplicated data often involves random access rather than sequential reads and writes, because indexes of pointers record how a single shared block of data relates to other data blocks held across the drive. The resulting read latencies make deduplication prohibitively slow in many HDD scenarios, but they are acceptable on flash storage.

Depending on how much duplication you expect in your data, this can make SSD a cost-effective form of primary storage for some applications. It’s worth remembering, though, that some workloads can handle deduplication at the application level, removing the need for that logic in the primary storage controller altogether.
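To see why deduplicated reads turn random, consider a minimal sketch of block-level dedup: a fingerprint index keeps one physical copy of each unique block, and each file is just a list of pointers into that index. The structures here are hypothetical, for illustration only, not any vendor’s design.

```python
# Minimal block-level deduplication sketch (illustrative only).
import hashlib

store = {}   # fingerprint -> block data: one physical copy per unique block
files = {}   # filename -> list of fingerprints (pointers into the store)

def write(name, data, block_size=4):
    refs = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)   # store each unique block only once
        refs.append(fp)
    files[name] = refs

def read(name):
    # Each pointer may land anywhere in the store, so reads become
    # random access -- cheap on flash, punishing on spinning disk.
    return b"".join(store[fp] for fp in files[name])

write("a.txt", b"AAAABBBBAAAA")
write("b.txt", b"BBBBAAAA")
print(len(store))      # 2 unique blocks backing 5 logical blocks
print(read("a.txt"))   # original data reconstructed from pointers
```

Five logical blocks collapse to two physical ones, but reassembling a file means chasing pointers scattered across the drive, which is exactly the access pattern where flash shines and spinning disks stall.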

Lower power consumption can also translate into cost savings for enterprise SSDs. “Datacentres are maxed out in terms of how much power they can draw,” says Niebel. SNIA reports power savings of over 90 per cent for SSDs compared to HDDs in both idle and data transfer modes. The same report also suggests operating temperatures roughly a third lower for SSDs than for HDDs, which will have some bearing on cooling costs (acknowledging, of course, that CPU temperature can impose a far bigger overhead).
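A rough cost model shows how those percentages compound with the drive counts discussed earlier. The wattages and electricity tariff below are assumptions for illustration, not SNIA’s measured figures.

```python
# Illustrative annual power-cost arithmetic; all inputs are assumptions.
HDD_WATTS = 9.0          # assumed active draw per enterprise HDD
SSD_WATTS = 0.9          # ~90% lower, in line with the savings cited
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.15     # assumed tariff, USD

def annual_cost(watts_per_drive, drives):
    """Yearly electricity cost for a group of drives running constantly."""
    kwh = watts_per_drive * drives * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

# 300 HDDs (the quote's LUN) versus a single SSD delivering the same IOPS:
print(round(annual_cost(HDD_WATTS, 300), 2))
print(round(annual_cost(SSD_WATTS, 1), 2))
```

The per-drive saving is large, but the real multiplier is needing far fewer drives in the first place, before cooling is even counted.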


Biting the hand that feeds IT © 1998–2018