NFSv4: A Unix Mainstay Learns New Tricks
Linux NFS Efforts Progressing
NFS has traditionally been a semi-robust method of sharing files between Unix-based computers. The IETF has been working on NFSv4 since early 2000, and implementations have finally started springing up everywhere. The Linux kernel team has focused its efforts on NFSv4, producing its least buggy NFS implementation yet. If that alone isn't reason enough to start using v4, read on.
Some key features are:
- A rich new ACL model, with Windows ACL interoperability.
- Locking enhancements, including advisory and mandatory locks.
- Built-in support for data replication and migration.
- TCP-only, with tons of improvements, making NFS over WAN links viable.
- No more portmap, lock manager, and mount daemon hell; NFSv4 still uses RPC, but everything travels over a single port: 2049.
- Security, for the first time: authentication, cryptographic integrity and encryption are all possible.
In short, NFSv4 addresses every major complaint ever registered about NFS. A more robust mechanism for sharing files meant that open source developers were enthusiastic about implementing it, and they have done well. Companies poured money into the Open Source Development Lab (OSDL) in Beaverton, Oregon to fund more rigorous testing and development of NFSv4 in Linux. Solaris, Linux, AIX, other Unixes, and Windows can now share files using the new protocol, with no insurmountable compatibility issues.
To expand on the highlights outlined above, let's begin with ACL support. It required a fundamental change in the way NFS looks at files: the new model makes sense to Unix as well as Windows, and supports an extended set of permission attributes. Even the notion of a file handle has been completely rethought; for conceptual purposes, it can be considered deprecated.
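On Linux, the nfs4-acl-tools utilities expose the new ACL model from the client side. As a sketch, with placeholder user, domain, and path (your mount point and principal names will differ):

```shell
# Add an Allow ACE granting user bob read access to a file on an NFSv4 mount.
nfs4_setfacl -a "A::bob@example.com:R" /mnt/home/shared/report.txt

# Inspect the resulting NFSv4 ACL.
nfs4_getfacl /mnt/home/shared/report.txt
```

Note the Windows-style ACE format ("A" for Allow, "D" for Deny), a direct consequence of the interoperable permission model.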
NFS has always been good at dealing with network failures: writes to the file system block, and when operations resume, they complete. Locking, however, has always been a limitation. NFSv4 supports finer-grained locking, implementing both advisory and mandatory lock mechanisms. Clients can lock files at more than just an "I'm using it" level, allowing greater concurrent access to files.
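To see what advisory locking looks like from a client, here is a minimal sketch using flock(1) on local placeholder paths; on a real deployment the file would live on an NFSv4 mount, where the client maps the lock to protocol lock operations on the server:

```shell
# Create a file to lock.
touch /tmp/nfsv4_lock_demo

# Take an exclusive advisory lock, run a command while holding it, release on exit.
flock /tmp/nfsv4_lock_demo -c 'echo "holding exclusive lock" > /tmp/nfsv4_lock_out'

# The first holder has exited, so a non-blocking (-n) attempt now succeeds.
flock -n /tmp/nfsv4_lock_demo -c 'echo "lock acquired again" >> /tmp/nfsv4_lock_out'

cat /tmp/nfsv4_lock_out
```

The lock is advisory: it only constrains cooperating processes that also ask for the lock, which is exactly the semantics NFS clients have always expected.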
Data replication allows copies of file systems to be propagated easily to multiple servers, and some NFSv4 implementations (AIX, most notably) can even redirect a client to the appropriate server. Many companies are talking about ways to make NFS capable of failover, and IBM has already implemented it. Data migration is also part of the v4 specification, providing a simple way to move NFS services and the related data to new hardware.
NFS has historically performed badly over WAN or other high-latency links. TCP has always been available for reliability, but performance across non-local networks was poor. In v4, UDP support has been removed, making TCP the only transport. Couple that with a pile of performance enhancements, and WAN operation is not just possible but efficient. The protocol is also self-contained, enabling Internet usage without opening gaping holes in firewalls: locking and mounting all happen over port 2049, and if NFSv4 is the only NFS protocol enabled, exposing that single port to the Internet can be quite secure.
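A sketch of what the single-port design looks like in practice, with placeholder host names and paths (your firewall tooling and mount options will vary):

```shell
# Allow only the one NFSv4 TCP port through a Linux iptables firewall.
iptables -A INPUT -p tcp --dport 2049 -j ACCEPT

# Mount the export; no separate portmap, mountd, or lock-manager ports needed.
mount -t nfs4 fileserver.example.com:/export/home /mnt/home
```

Contrast this with NFSv3, where the sideband protocols landed on ports assigned dynamically by the portmapper, making firewall rules a moving target.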
Security had to be addressed if NFSv4 was to become an Internet-accessible protocol. The RPCSEC_GSS framework is required for version 4 implementations, which means every implementation supports Kerberos v5, LIPKEY, and SPKM-3. The server controls which flavors are allowed, along with the requirements for authentication, integrity, and encryption. The new school of thought for NFS, similar to what CIFS requires on Windows, is that individual users get authenticated, not just the machines they are on.
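On a typical Linux server, the security flavor is chosen per export. A minimal sketch of an /etc/exports entry requiring Kerberos (the path and domain are placeholders):

```shell
# /etc/exports: require Kerberos v5 for this export.
#   sec=krb5   authentication only
#   sec=krb5i  authentication plus cryptographic integrity checking
#   sec=krb5p  authentication, integrity, and encryption (privacy)
/export/home  *.example.com(rw,sec=krb5:krb5i:krb5p)
```

Listing several flavors lets the client negotiate the strongest one it supports; dropping plain sec=sys from the list is what actually forces user-level authentication.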
If you're running NFS in a small environment on newly installed Linux or Solaris machines, you're probably already running NFSv4 in AUTH_SYS mode. That is the default, crufty old way of authenticating systems that NFS has always used, and it "just works" for most people. If you're in a larger environment, perhaps with multiple DNS subdomains accessing the same file server, you'll probably run into problems.
To switch from earlier versions to NFSv4, the first order of business is to understand NFS domains.
The purpose of domains in NFS is to allow a more robust security policy for user access, and at the same time provide a nice management mechanism. In previous NFS versions, identity was purely numeric, regardless of where a share was accessed from. If you exported a file system to multiple computers, files owned by Bob's UID 3333 appeared to belong to Bob from every client, even if UID 3333 belonged to someone else on another system. Worse, anyone with root on any client could simply create a user with UID 3333 and gain access to Bob's files. This made Bob quite unhappy.
With NFSv4 domains, username (and group) to UID (and GID) mapping is based on "user@domain," eliminating many of NFS's shortcomings. Coupled with authentication, NFS deployments can now be secure, manageable and arbitrarily complex. This is a giant leap forward. The easiest method for a smooth transition is to configure all your hosts with the same domain. Eventually you'll want to use the authentication mechanisms NFS provides, but for a trouble-free upgrade to the more robust NFSv4, the only real gotcha is configuring NFS domains.
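On Linux, that shared domain lives in the ID-mapper configuration. A minimal sketch, assuming the stock rpc.idmapd daemon ("example.com" is a placeholder for your own domain):

```shell
# /etc/idmapd.conf -- identical on every client and server
[General]
Domain = example.com
```

If clients and server disagree on this value, files typically show up owned by "nobody," which is the classic symptom of a domain mismatch.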
We'll explain more about domains when we follow up on implementation, next week.
Countless times Unix administrators have half-joked about using Samba and Windows file sharing protocols for authenticated access to Unix shares. Those thoughts can safely be forgotten, and there's no need to feel dirty any longer! NFSv4 has come to the rescue, finally.