From server to end user: What's coming up for NFS?

Big changes: Hole-punching and application data blocks

Deep Dive NFS (Network File System) is one of the two most successful file protocols in the history of computing. From the 1980s with NFSv2, through the widely deployed NFSv3 in the 1990s, to today’s NFSv4.1 standard – and if you don’t know about NFSv4.1 and pNFS (parallel NFS), you should – the protocol has been developed to keep pace with user requirements and the changing nature of data access and processing.

Growth of storage has exploded. In 2010, according to IDC, nearly an exabyte (1,000 petabytes) of open systems storage was sold worldwide. More storage was sold in one quarter of 2011 than in the whole of 2007. That’s astonishing growth in data volumes, so what’s driving it? Certainly, one of the biggest drivers of storage consumption in recent years has been the rise of virtualisation and cloud computing. Both have made the management and processing of larger quantities of data much simpler.

These kinds of requirements are driving the next set of proposed changes to the NFS standard: NFSv4.2. NFSv4.1 and pNFS provided the foundations of improved security, scalability and much improved manageability over NFSv3. The latest proposals for NFSv4.2 promise many features that end users have been requesting above and beyond those in NFSv4.1 – features that make NFS more relevant as not only an “everyday” protocol, but one that has application as a preferred distributed file protocol in and beyond the virtualised data centre.

Server-side copy

Virtualisation means compute mobility. No longer tied to physical hardware in specific locations, operating systems and applications can seamlessly move from one server to another. But each virtual machine represents data, and may refer to ancillary data too, and all that data needs moving. Today, copying it requires a potentially costly and unnecessary round trip: the client has to read the data from the source data server simply to write it back out to a target data server, a three-way interconnection.

Server-Side Copy (SSC) removes one leg of that operation entirely. Instead of reading entire files or even directories of files from one data server out to the requesting client, and then having the client write them back out to another data server, SSC permits the target and source servers to communicate directly. The client manages the copy process, but isn’t involved in moving the data. Data is moved directly between data servers, so SSC removes the requirement to maintain costly, high-bandwidth server-to-client-to-server connections and reduces the potential for congestion on copy operations.
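On Linux, the natural client-side interface for this is the generic copy_file_range(2) system call: when both files live on an NFSv4.2 mount, the kernel can translate the call into the protocol’s server-side COPY rather than shuttling bytes through the client. A minimal sketch (the copy_all helper and file names are illustrative, not part of any standard API):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Copy all of 'in' to 'out' with copy_file_range(2). On an NFSv4.2
 * mount the Linux client can turn this into the protocol's server-side
 * COPY, so the file's bytes need never travel to the client; on other
 * filesystems the kernel still copies without bouncing the data
 * through userspace buffers. Returns bytes copied, or -1 on error. */
static long copy_all(int in, int out)
{
    off_t remaining = lseek(in, 0, SEEK_END);   /* source length */
    long copied = 0;

    lseek(in, 0, SEEK_SET);
    while (copied < remaining) {
        ssize_t n = copy_file_range(in, NULL, out, NULL,
                                    (size_t)(remaining - copied), 0);
        if (n <= 0)
            return -1;               /* error or unexpectedly short file */
        copied += n;
    }
    return copied;
}
```

The point of the interface is that the same application code works everywhere; whether the copy is offloaded to the server is a property of the mount, not of the program.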

Guaranteed space reservation

There’s a limit to the demands that can be met simply by piling on more disks. Every disk costs money to buy, to run, and more to manage. Many storage system administrators are keenly aware that users tend to overestimate their storage requirements, sometimes by orders of magnitude. Over the years, various efficiency techniques have been employed to give the appearance of a large virtual pool of storage on much smaller real storage systems.

One of those techniques, thin provisioning, gives the appearance of large amounts of available space. Although now commonplace, it can be problematic to manage in fast-growing environments: if two users each request more than 50 per cent of the remaining free space, both can’t have it.

A guaranteed space reservation feature in NFSv4.2 will ensure that, regardless of the thin provisioning policies, individual files will always have space available for their maximum extent.
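On the client side, Linux already exposes this kind of reservation through the standard posix_fallocate(3) call, which an NFSv4.2 client can forward to the server as an ALLOCATE request rather than faking it locally. A small sketch (the reserve_space wrapper is illustrative):

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Reserve backing store for a file up to 'size' bytes. On an NFSv4.2
 * mount this can be sent to the server as an ALLOCATE request, so
 * later writes within the reserved range cannot fail for lack of
 * space even on thinly provisioned storage; on a local filesystem it
 * preallocates blocks. Returns 0 on success, an errno value on error. */
static int reserve_space(int fd, off_t size)
{
    return posix_fallocate(fd, 0, size);
}
```

A hypervisor pre-allocating a virtual disk could use this instead of writing zeros end to end: one request reserves the space, with no data transferred at all.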

Hole-punching

While desirable for specific types of data, and a reassurance for the end-user who needs the space to be available, such guarantees can defeat the best efforts of storage administrators to efficiently utilise disk.

For example, when a hypervisor creates a virtual disk file, it often tries to pre-allocate the space for the file so that there are no future allocation-related errors during the operation of the virtual machine. Applications like this typically zero the entire file, which is inefficient in I/O, and inefficient in storage used.

In support of better storage efficiency, NFSv4.2 introduces support for such sparse files. Through a technique commonly called “hole-punching”, deleted and unused parts of files are returned to the storage system’s free space pool (see figure 1).

Figure 1: Thin provisioned and hole-punched data

Thin provisioning removes the need for reserving real storage for expansion that may never happen, and the real free space can be shared amongst many users. NFSv4.2’s hole punching takes that one step further, by recognising that files themselves very often contain holes that reserve space, but that contain no useful data. The client’s view is unchanged: NFSv4.2 provides hole punching transparently.
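On Linux the operation already has a syscall-level spelling: fallocate(2) with the FALLOC_FL_PUNCH_HOLE flag, which an NFSv4.2 client can issue to the server as the protocol’s DEALLOCATE operation. A sketch (the punch_hole wrapper is illustrative):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Punch a hole in an open file: the byte range reads back as zeros
 * and its underlying blocks are returned to the free-space pool,
 * while the file's apparent size is unchanged (FALLOC_FL_KEEP_SIZE).
 * On an NFSv4.2 mount the Linux client can forward this to the
 * server as a DEALLOCATE request. Returns 0 on success. */
static int punch_hole(int fd, off_t offset, off_t length)
{
    return fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                     offset, length);
}
```

This is exactly the transparency described above: a reader of the file still sees the same length and zeros where the hole is, but the storage behind the hole has been given back.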

