Remote Disk Access with NFS
Introduction
Samba is usually the solution of choice when you want to share disk space between
Linux and Windows machines. The Network File System (NFS) protocol
is used when disks need to be shared between Linux servers. Basic configuration
is fairly simple, and this chapter will explain all the essential steps.
NFS Operation Overview
Linux data storage disks contain files stored in filesystems with a
standardized directory structure. New disks are added by attaching, or
mounting, the directories of their filesystems to a directory of an already
existing filesystem. This in effect makes the new hard disk transparently
appear to be a subdirectory of the filesystem to which it is attached.
NFS was developed to allow a computer
system to access directories on remote computers by mounting them on a local
filesystem as if they were a local disk. The systems administrator on the NFS
server has to define the directories that need to be activated, or exported,
for access by the NFS clients, and
administrators on the clients need to define both the NFS
server and the subset of its exported directories to use.
General NFS Rules
You should follow some general rules when configuring NFS.
- Only export directories beneath the / directory.
- Do not export a subdirectory of a directory that has already been exported, unless the subdirectory is on a different physical device. Likewise, do not export the parent of an exported subdirectory unless it is on a separate device.
- Only export local filesystems.
Keep in mind that when you mount any filesystem on a directory, the original
contents of the directory are ignored, or obscured, in favor of the files in
the mounted filesystem. When the filesystem is unmounted, then the original
files in the directory reappear unchanged.
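For example, here is a hypothetical illustration of the obscuring behavior using a spare partition (the device name /dev/hdb1 and the file names are assumptions for illustration):
[root@bigboy tmp]# ls /mnt/data
oldfile.txt
[root@bigboy tmp]# mount /dev/hdb1 /mnt/data
[root@bigboy tmp]# ls /mnt/data
lost+found newfile.txt
[root@bigboy tmp]# umount /mnt/data
[root@bigboy tmp]# ls /mnt/data
oldfile.txt
[root@bigboy tmp]#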
Key NFS Concepts
Data access over a network always introduces a variety of challenges,
especially if the operation is intended to be transparent to the user, as in
the case of NFS. Here are some key NFS
background concepts that will help in your overall understanding.
VFS
The virtual filesystem (VFS) interface is the mechanism used by NFS
to transparently and automatically redirect all access to NFS-mounted
files to the remote server. This is done in such a way that files on the remote
NFS server appear to the user to be no
different than those on a local disk.
VFS also translates these requests to match the actual filesystem format on
the NFS server's disks. This allows the NFS
server not only to run a completely different operating system, but also to use
different file naming schemes and file attribute types.
Stateless Operation
Programs that read and write to files on a local filesystem rely on the
operating system to track their access location within the file with a pointer.
As NFS is a network-based file system, and
networks can be unreliable, it was decided that the NFS
client daemon would act as a failsafe intermediary between regular programs
running on the NFS client and the NFS
server.
Normally, when a server fails, file accesses time out and the file pointers
are reset to zero. With NFS, the NFS
server doesn't maintain the file pointer information; the NFS
client does. This means that if an NFS
server suddenly fails, the NFS client can
precisely restart the file access after patiently waiting until the
server comes back online.
Caching
NFS clients typically request more data
than they need and cache the results in memory locally so that further
sequential access of the data can be done locally rather than over the network. This
is also known as a read-ahead cache. Data to be written to the NFS
server is cached as well, and is flushed to the server when the cache
becomes full. Caching therefore helps to reduce the amount of network traffic
while simultaneously improving the speed of some types of data access.
The NFS server caches information too,
such as the directory information for the most recently accessed files and a
read-ahead cache for recently read files.
NFS And Symbolic Links
You have to be careful with the use of symbolic links on exported NFS
directories. If an absolute link points to a directory on the NFS
server that hasn't been exported, then the NFS
client won't be able to access it.
Unlike absolute links, relative symbolic links are interpreted relative to
the client's filesystem. Consider an example where the /data1 directory on the server
is mounted on the client's /data1 directory. If a link on the NFS server points
to the ../data2 directory and no directory corresponding to ../data2 exists on the NFS
client, then an error will occur.
Also, mounting a filesystem on a symbolic link actually mounts the
filesystem on the target of the symbolic link. You'll have to be careful not to
obscure the contents of this original directory in the process. Plan carefully
before doing this.
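As a hypothetical illustration, an absolute link created on the server like this will be unreachable from any client that mounts only the /data1 export (the /data2 path is an assumed, unexported directory):
[root@bigboy tmp]# ln -s /data2/reports /data1/reports
A client that has mounted /data1 resolves /data1/reports against its own root filesystem, so the link works only if the client happens to have a matching /data2/reports of its own.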
NFS Background Mounting
NFS clients use the remote procedure call
(RPC) suite of network application helper programs to mount remote filesystems.
If the mount cannot occur during the default RPC timeout period, then the
client retries the mount process until the NFS
retry count has been exceeded. The default is 10,000 minutes, which is
approximately a week. The difficulty here is that if the NFS
server is unavailable, the mount command will hang for up to a week until the server
returns online. It is possible to use the bg option to spawn the retries off as a subprocess
so that the main mount command can continue to process other requests.
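Here is a minimal sketch of an /etc/fstab entry using the bg option; the server address and directories reuse the example values from later in this chapter:
#/etc/fstab
#Directory Mount Point Type Options Dump FSCK
192.168.1.100:/data/files /mnt/nfs nfs bg,soft,nfsvers=2 0 0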
Hard and Soft Mounts
The process of continuous retrying, whether in the background or foreground,
is called a hard mount. NFS attempts to
guarantee the consistency of your data with these constant retries. With soft
mounts, repeated RPC failures cause the NFS
operation to fail, not hang, and data consistency is therefore not guaranteed.
The advantage is that the operation completes quickly, whether it fails or not.
The disadvantage is that use of the soft option implies that your
NFS server is unreliable; if this is the
case, it is best not to place critical data that needs to be updated regularly,
or executable programs, in such a location.
NFS Versions
Three versions of NFS are available
currently: versions 2, 3, and 4. Version 1 was a prototype. This chapter
focuses on version 2, which:
- Supports files up to 4GB long
- Requires an NFS server to successfully write data to its disks before the write request is considered successful
- Has a limit of 8KB per read or write request.
The main differences with version 3 are that it
- Supports extremely large file sizes of up to 2^64 - 1 bytes
- Considers an NFS server data update successful once the data is written to the server's cache
- Negotiates the data limit per read or write request between the client and server to a mutually decided optimal value.
Version 4 maintains many of version 3's features, but with the additions
that
- File locking and mounting are integrated in the NFS daemon and operate on a single, well known TCP port, making network security easier
- File locking is mandatory, whereas before it was optional
- Support for the bundling of requests from each client provides more efficient processing by the NFS server.
It is important to match the versions of NFS
running on clients and server to help ensure the necessary compatibility to get
NFS to work predictably.
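One quick way to see which NFS versions a server supports is to query its RPC registrations with the rpcinfo command; the server IP address here is an assumption for illustration:
[root@bigboy tmp]# rpcinfo -p 192.168.1.100 | grep nfs
The vers column of the output lists each protocol version the nfs program has registered.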
Important NFS Daemons
NFS isn't a single program, but a suite
of interrelated programs that work together to get the job done.
- rpcbind: (portmap in older versions of Linux) The primary daemon upon which all the others rely, rpcbind manages connections for applications that use the RPC specification. By default, rpcbind listens to TCP port 111 on which an initial connection is made. This is then used to negotiate a range of TCP ports, usually above port 1024, to be used for subsequent data transfers. You need to run rpcbind on both the NFS server and client.
- nfs: Starts the RPC processes needed to serve shared NFS file systems. The nfs daemon needs to be run on the NFS server only.
- nfslock: Used to allow NFS clients to lock files on the server via RPC processes. The nfslock daemon needs to be run on both the NFS server and client.
- netfs: Allows RPC processes run on NFS clients to mount NFS filesystems on the server. The netfs daemon needs to be run on the NFS client only.
Now take a look at how to configure these daemons to create functional NFS
client/server peering.
Installing NFS
RedHat Linux installs nfs by default, and activates it automatically
when the system boots. You can determine whether you have nfs installed using
the RPM command in conjunction with the grep
command to search for all installed nfs packages.
[root@bigboy tmp]# rpm -qa | grep nfs
redhat-config-nfs-1.1.3-1
nfs-utils-1.0.1-3.9
[root@bigboy tmp]#
A blank list means that you'll have to install the required packages.
You also need to have the RPC rpcbind package installed, and the rpm command
can tell you whether it's on your system already. Querying rpm for the
rpcbind package by name shows the installed version:
[root@bigboy tmp]# rpm -q rpcbind
rpcbind-4.0-57
[root@bigboy tmp]#
A blank list means that you'll have to install the required packages.
If nfs and rpcbind are not installed, they can be added fairly easily once
you find the nfs-utils and rpcbind RPMs. (If you need a refresher, see Chapter
6, "Installing Linux
Software".) Remember that RPM
filenames usually start with the software's name and a version number, as in
nfs-utils-1.1.3-1.i386.rpm.
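If the packages are missing, installing them is a single command per RPM; the exact filenames below are assumptions and will vary with your distribution:
[root@bigboy tmp]# rpm -Uvh rpcbind-4.0-57.i386.rpm
[root@bigboy tmp]# rpm -Uvh nfs-utils-1.0.1-3.9.i386.rpm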
A Typical NFS Scenario
A small office has an old Linux server that is running out of disk space.
The office cannot tolerate any downtime, even after hours, because the server
is accessed by overseas programmers and clients at night, and by local ones
during the day.
Budgets are tight and the company needs a quick solution until it can get a
purchase order approved for a hardware upgrade. Another Linux server on the
network has additional disk capacity in its /data partition and the office
would like to expand into it as an interim expansion NFS
server.
Configuring NFS on The Server
Both the NFS server and NFS
client have to have parts of the NFS package
installed and running. The server needs rpcbind, nfs, and nfslock operational,
as well as a correctly configured /etc/exports file. Here's how to do it.
The /etc/exports File
The /etc/exports file is the main NFS
configuration file, and it consists of two columns. The first column lists the
directories you want to make available to the network. The second column has
two parts. The first part lists the networks or DNS domains that can get access
to the directory, and the second part lists NFS
options in brackets.
For the scenario you need:
- Read-only access to the /data/files directory to all networks
- Read/write access to the /home directory from all servers on the 192.168.1.0 /24 network, which is all addresses from 192.168.1.0 to 192.168.1.255
- Read/write access to the /data/test directory from servers in the my-site.com DNS domain
- Read/write access to the /data/database directory from a single server 192.168.1.203.
In all cases, use the sync option to ensure that file data cached in memory
is automatically written to the disk after the completion of any disk data
copying operation.
#/etc/exports
/data/files *(ro,sync)
/home 192.168.1.0/24(rw,sync)
/data/test *.my-site.com(rw,sync)
/data/database 192.168.1.203/32(rw,sync)
After configuring your /etc/exports file, you need to activate the settings,
but first make sure that NFS is running
correctly.
Starting NFS on the Server
Configuring an NFS server is
straightforward:
1) Use the chkconfig command to configure the required nfs, nfslock, and RPC rpcbind
daemons to start at boot. Activating NFS
file locking reduces the risk of corrupted data.
[root@bigboy tmp]# chkconfig --level 35 nfs on
[root@bigboy tmp]# chkconfig --level 35 nfslock on
[root@bigboy tmp]# chkconfig --level 35 rpcbind on
2) Use the init scripts in the /etc/init.d directory to start the rpcbind, nfs, and
nfslock daemons. The examples use the start option, but when needed, you
can also stop and restart the processes with the stop and restart options.
[root@bigboy tmp]# service rpcbind start
[root@bigboy tmp]# service nfs start
[root@bigboy tmp]# service nfslock start
3) Test whether NFS is running correctly
with the rpcinfo command. You should get a listing of running RPC programs that
must include mountd, portmapper, nfs, and nlockmgr.
[root@bigboy tmp]# rpcinfo -p localhost
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100021 1 udp 1024 nlockmgr
100021 3 udp 1024 nlockmgr
100021 4 udp 1024 nlockmgr
100005 1 udp 1042 mountd
100005 1 tcp 2342 mountd
100005 2 udp 1042 mountd
100005 2 tcp 2342 mountd
100005 3 udp 1042 mountd
100005 3 tcp 2342 mountd
[root@bigboy tmp]#
Configuring NFS on The Client
NFS configuration on the client requires
you to start the NFS application, create a
directory on which to mount the NFS server's
directories that you exported via the /etc/exports file, and finally mount
the NFS server's directory on your local
directory, or mount point. Here's how to do it all.
Starting NFS on the Client
Three more steps easily configure NFS on
the client.
1) Use the chkconfig command to configure the required netfs, nfslock, and rpcbind
daemons to start at boot. Activating nfslock locks the files and reduces the
risk of corrupted data.
[root@smallfry tmp]# chkconfig --level 35 netfs on
[root@smallfry tmp]# chkconfig --level 35 nfslock on
[root@smallfry tmp]# chkconfig --level 35 rpcbind on
2) Use the init scripts in the /etc/init.d directory to start the rpcbind, netfs, and
nfslock daemons. As on the server, the examples use the start option, but
you can also stop and restart the processes with the stop and restart options.
[root@smallfry tmp]# service rpcbind start
[root@smallfry tmp]# service netfs start
[root@smallfry tmp]# service nfslock start
3) Test whether NFS is running correctly
with the rpcinfo command. The listing of running RPC programs you get must
include status, portmapper, and nlockmgr.
[root@smallfry root]# rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 32768 status
100024 1 tcp 32768 status
100021 1 udp 32769 nlockmgr
100021 3 udp 32769 nlockmgr
100021 4 udp 32769 nlockmgr
100021 1 tcp 32769 nlockmgr
100021 3 tcp 32769 nlockmgr
100021 4 tcp 32769 nlockmgr
391002 2 tcp 32770 sgi_fam
[root@smallfry root]#
NFS And DNS
The NFS client must have a matching pair
of forward and reverse DNS entries on the DNS server used by the NFS
server. In other words, a DNS lookup on the NFS
server for the IP address of the NFS client
must return a server name that will map back to the original IP address when a
DNS lookup is done on that same server name.
[root@bigboy tmp]# host 192.168.1.102
102.1.168.192.in-addr.arpa domain name pointer 192-168-1-102.my-site.com.
[root@bigboy tmp]# host 192-168-1-102.my-site.com
192-168-1-102.my-site.com has address 192.168.1.102
[root@bigboy tmp]#
This is a security precaution added to the nfs package that lessens the
likelihood of unauthorized servers gaining access to files on the NFS
server. Failure to correctly register your server IPs in DNS can result in
"fake hostname" errors:
Nov 7 19:14:40 bigboy rpc.mountd: Fake hostname smallfry.my-site.com for 192.168.1.1 - forward lookup doesn't exist
Accessing NFS Server Directories from the Client
In most cases, users want their NFS
directories to be permanently mounted. This requires an entry in the /etc/fstab
file in addition to the creation of the mount point directory.
The /etc/fstab File
The /etc/fstab file lists all the partitions that need to be auto-mounted
when the system boots. Therefore, you need to edit the /etc/fstab file if you
need the NFS directory to be made
permanently available to users on the NFS client.
For the example, mount the /data/files directory on server bigboy (IP address
192.168.1.100) as an NFS-type filesystem
using the local /mnt/nfs mount point directory.
#/etc/fstab
#Directory Mount Point Type Options Dump FSCK
192.168.1.100:/data/files /mnt/nfs nfs soft,nfsvers=2 0 0
This example used the soft and nfsvers options; Table 29.1 outlines these
and other useful NFS mounting options you
may want to use. See the NFS man pages for
more details.
Table 29.1 Possible NFS Mount Options
- bg: Retry mounting in the background if mounting initially fails.
- fg: Mount in the foreground.
- soft: Use soft mounting.
- hard: Use hard mounting.
- rsize=n: The amount of data NFS will attempt to access per read operation. The default is dependent on the kernel. For NFS version 2, set it to 8192 to assure maximum throughput.
- wsize=n: The amount of data NFS will attempt to access per write operation. The default is dependent on the kernel. For NFS version 2, set it to 8192 to assure maximum throughput.
- nfsvers=n: The version of NFS the mount command should attempt to use.
- tcp: Attempt to mount the filesystem using TCP packets; the default is UDP.
- intr: If the filesystem is hard mounted and the mount times out, allow the process to be aborted using the usual methods, such as CTRL-C and the kill command.
The steps to mount the directory are fairly simple, as you'll see.
Permanently Mounting The NFS Directory
You'll now create a mount point directory, /mnt/nfs, on which to mount the
remote NFS directory, and then use the mount
-a command to activate the mount. Notice how before mounting there were no files
visible in the /mnt/nfs directory; this changes after the mounting is
completed:
[root@smallfry tmp]# mkdir /mnt/nfs
[root@smallfry tmp]# ls /mnt/nfs
[root@smallfry tmp]# mount -a
[root@smallfry tmp]# ls /mnt/nfs
ISO ISO-RedHat kickstart RedHat
[root@smallfry tmp]#
Each time your system boots, it reads the /etc/fstab file and executes the
mount -a command, thereby making this a permanent NFS
mount.
Note: There are multiple versions of NFS,
the most popular of which is version 2, which most NFS
clients use. Newer NFS servers may also be
able to handle NFS version 4. To be safe, it
is best to force the NFS server to export
directories as version 2 using the nfsvers=2 option in the /etc/fstab file as
shown in the example. Failure to do so may result in an error message.
[root@probe-001 tmp]# mount -a
mount to NFS server '192.168.1.100' failed: server is down.
[root@probe-001 tmp]#
Manually Mounting NFS File Systems
If you don't want a permanent NFS mount,
then you can use the mount command without the /etc/fstab entry to gain access
only when necessary. This is a manual process; for an automated process, see
the section "The NFS
Automounter."
In this case, you're mounting the /data/files directory as an NFS-type
filesystem on the /mnt/nfs mount point. The NFS
server is bigboy whose IP address is 192.168.1.100.
Notice how before mounting there were no files visible in the /mnt/nfs
directory; this changes after the mounting is complete:
[root@smallfry tmp]# mkdir /mnt/nfs
[root@smallfry tmp]# ls /mnt/nfs
[root@smallfry tmp]# mount -t nfs 192.168.1.100:/data/files /mnt/nfs
[root@smallfry tmp]# ls /mnt/nfs
ISO ISO-RedHat kickstart RedHat
[root@smallfry tmp]#
Congratulations! You've made your first steps towards being an NFS
administrator.
Activating Modifications To The /etc/exports File
You can force your system to re-read the /etc/exports file by restarting NFS.
In a production environment, this may cause disruptions when an exported
directory suddenly disappears without prior notification to users. Here are
some methods you can use to update and activate the file with the least amount
of inconvenience to others.
New Exports File
When no directories have yet been exported to NFS,
use the exportfs -a command.
[root@bigboy tmp]# exportfs -a
Adding A Shared Directory To An Existing Exports File
When adding a shared directory, you can use the exportfs -r command to
export only the new entries.
[root@bigboy tmp]# exportfs -r
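Whichever method you use, you can confirm the currently active exports and their options with the exportfs -v (verbose) flag:
[root@bigboy tmp]# exportfs -v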
Deleting, Moving Or Modifying A Share
Removing an exported directory from the /etc/exports file requires work on
both the NFS client and server. The steps
are:
1) Unmount the mount point directory on the NFS
client using the umount command. In this case, you're unmounting the /mnt/nfs
mount point.
[root@smallfry tmp]# umount /mnt/nfs
Note: You may also need to remove any entries related to the mount point
from the client's /etc/fstab file if you want to make the change permanent even after
rebooting.
2) Comment out the corresponding entry in the NFS
server's /etc/exports file and reload the modified file.
[root@bigboy tmp]# exportfs -ua
[root@bigboy tmp]# exportfs -a
You have now completed a seamless removal of the exported directory with
much less chance of having critical errors.
The NFS Automounter
The permanent mounting of filesystems has its disadvantages. For example,
the /etc/fstab file is unique per Linux server and has to be individually
edited on each. NFS client management,
therefore, becomes more difficult. Also, the mount is permanent, tying up
system resources even when the NFS server
isn't being accessed.
NFS uses an automounter feature that
overcomes these shortcomings by allowing you to bypass the /etc/fstab file for NFS
mounts, instead using an NFS-specific map
file that can be distributed to multiple clients. In addition, you can use the
file to specify the expected duration of the NFS
mount, after which time it is unmounted automatically. However, automounter
continues to report to the operating system kernel that the mount is still
active. When the kernel makes an NFS file
request, automounter intercepts it and mounts the remote directory on the mount
point defined in the map file. The mount point directory is dynamically created
by the automounter when needed; after the timeout period, the remote directory
is unmounted and the mount point is deleted.
Automounter Map Files
The master map file of automounter has a simple format that defines the name
of the mount point directory in the first column and the subsidiary map file
that controls its mounting in the second. You can add mounting options to a
third column.
In the example, directories under /home need to be automounted from an NFS
server, and the configuration information is defined in the /etc/auto.home file.
Finally, the mount will only last for five minutes (300 seconds), and this
value will act as a default for all the entries in the /etc/auto.home file.
Irregular entries that don't match /home are placed in the /etc/auto.direct
file.
#
# File: /etc/auto.master
#
/home /etc/auto.home --timeout=300
/- /etc/auto.direct
Direct Maps
Direct maps are used to define NFS
filesystems that are mounted on different servers or that don't all share
the same directory prefix.
Indirect Maps
Indirect maps define directories that can be mounted under the same mount
point. A good example would be all the users' directories under /home.
Note: Based on preliminary testing, an early release of Fedora Core 3
doesn't appear to work correctly with automounter. You have to have one
indirect map defined to avoid startup errors, and after doing so, the maps
don't appear to be activated. No errors occur in the logs either.
The Structure Of Direct And Indirect Map Files
The format of these map files is similar to that of the /etc/auto.master
file, except that columns two and three have been switched.
Column one lists all the directory keys that will activate the automounter
feature. It is also the name of the mount point under the directory listed in
the /etc/auto.master file. The second column provides all the NFS
options to be used, and the third column lists the NFS
servers and the filesystems that map to the keys.
When the NFS client accesses a file, it
refers to the keys in the /etc/auto.master file to see whether any fall within
the realm of the automounter's responsibility. If one does, then automounter
checks the subsidiary map file for a subdirectory mount point key. If it finds
one, then automounter mounts the files for the system.
Indirect Map File Example
In the previous example, the /etc/auto.master file redirected all references
to the /home directory to the /etc/auto.home file. This second file has entries
for peter, bob, and bunny; these directories are actually mount points for
directories on servers bigboy, ochorios, and waitabit.
#
# File: /etc/auto.home
#
peter bigboy:/home/peter
bob ochorios:/home/bob
bunny waitabit:/home/bunny
Direct Map File Example
The second entry in the /etc/auto.master file was specifically created to
handle all references to one-of-a-kind directory prefixes. In the example,
/data/sales and /sql/database are the mount points for directories on servers
bigboy and waitabit.
#
# File: /etc/auto.direct
#
/data/sales -rw bigboy:/disk1/data/sales
/sql/database -ro,soft waitabit:/var/mysql/database
Note: The automounter treats direct mounts as if they were files in a
directory, not as individual directories. This means all direct mount points in
the same directory are mounted simultaneously even if only one of them is being
accessed. This can cause excessive mounting activity that can slow response
times. There are tricks you can use to avoid this; perhaps the simplest is just
to place direct mount points in different directories.
Note: Direct map entries in the /etc/auto.master file must all begin
with /-, and you can use absolute path names with direct map files only. If you
don't, then you'll get an error like this in your /var/log/messages file:
Nov 7 19:24:12 smallfry automount[31801]: bad map format: found indirect, expected direct exiting
Wildcards In Map Files
You can use two types of wildcards in a map file: the asterisk (*), which
means all, and the ampersand (&), which instructs automounter to substitute
the value of the key for the & character.
Using the Ampersand Wildcard
In the example below, the key is peter, so the ampersand wildcard is
interpreted to mean peter too. This means you'll be mounting the
bigboy:/home/peter directory.
#
# File: /etc/auto.home
#
peter bigboy:/home/&
Using the Asterisk Wildcard
In the example below, the key is *, meaning that automounter will handle any
attempt to enter the /home directory. But what's the value of the
ampersand? It is actually assigned the value of the key that triggered the
access to the /etc/auto.home file. If the access was for /home/peter, then the
ampersand is interpreted to mean peter, and bigboy:/home/peter is mounted. If
access was for /home/bob, then bigboy:/home/bob would be mounted.
#
# File: /etc/auto.home
#
* bigboy:/home/&
Starting Automounter
Fedora Linux installs the automounter RPM,
called autofs, by default. Here are some quick steps to get automounter
started.
1) Use the chkconfig command to configure the automounter daemon to start
at boot.
[root@bigboy tmp]# chkconfig autofs on
2) Use the init scripts in the /etc/init.d directory to start the
automounter daemons. The example uses the start option, but you can also stop
and restart the process with the stop and restart options.
[root@bigboy tmp]# service autofs start
3) Use the pgrep command to determine whether automounter is running. If it
is, the command will return the process ID of the automount daemon.
[root@bigboy tmp]# pgrep automount
32261
[root@bigboy tmp]#
As you can see, managing the startup of automounter is very similar to that
of other Linux applications and should be easy to remember.
Automounter Examples
Now that you understand the NFS automounter,
you may benefit from an example. Chapter 30, "Configuring NIS",
contains a full scenario in which a school computer laboratory uses automounter
to centrally house all the home directories of its students. Additional
centralization is also achieved by using NIS
for login authentication, access, and accounting control.
Troubleshooting NFS
A basic NFS configuration usually works
without problems when the client and server are on the same network. The most
common problems are caused by forgetting to start NFS,
to edit the /etc/fstab file, or to activate the entries in the /etc/exports file. Another common
cause of failure is the iptables firewall daemon running on either the server
or client without the administrator realizing it.
When the client and server are on different networks, these checks still
apply, but you'll also have to make sure basic connectivity has been taken care
of as outlined in Chapter 4, "Simple Network
Troubleshooting". Sometimes a firewall being present on the path
between the client and server can cause difficulties.
As always, no troubleshooting plan would be complete without frequent
reference to the /var/log/messages file when searching for additional clues.
Table 29.2 shows some common NFS errors
you'll encounter.
Table 29.2 Some Common NFS Error Messages
- Too many levels of remote in path: Attempting to mount a filesystem that has already been mounted.
- Permission denied: The user is denied access. This could be the client's root user, who has unprivileged status on the server due to the root_squash option. It could also be because the user on the client doesn't exist on the server.
- No such host: Typographical or DNS configuration error in the name of the server.
- No such file or directory: Typographical error in the name of the file or directory; they don't exist.
- NFS server is not responding: The server could be overloaded or down.
- Stale file handle: A file that was previously accessed by the client was deleted on the server before the client closed it.
- Fake hostname: Forward and reverse DNS entries don't exist for the NFS client.
The showmount Command
When run on the server, the showmount -a command lists all the currently
exported directories together with the NFS
clients accessing the server. In this case, one client has an IP address of
192.168.1.102.
[root@bigboy tmp]# showmount -a
All mount points on bigboy:
*:/home
192.168.1.102:*
[root@bigboy tmp]#
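The related showmount -e command prints the server's export list, which is a quick way to confirm that your /etc/exports entries are active:
[root@bigboy tmp]# showmount -e localhost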
The df Command
The df command lists the disk usage of a mounted filesystem. Run it on the NFS
client to verify that NFS mounting has
occurred. In many cases, the root_squash mount option will prevent the root
user from doing this, so it's best to try it as an unprivileged user.
[nfsuser@smallfry nfsuser]$ df -t nfs
Filesystem 1K-blocks Used Available Use% Mounted on
192.168.1.100:/home/nfsuser
1032056 346552 633068 36% /home/nfsuser
[nfsuser@smallfry nfsuser]$
The nfsstat Command
The nfsstat command provides useful error statistics. The -s option provides NFS server stats, while the -c option provides them for clients.
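As a quick sketch, you would run it on each side of the connection:
[root@bigboy tmp]# nfsstat -s
[root@smallfry tmp]# nfsstat -c
Other NFS Considerations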
Security
NFS and rpcbind have had a number of
known security deficiencies in the past. As a result, I don't recommend using
NFS over insecure networks. NFS
doesn't encrypt data, and it is possible for root users on NFS
clients to gain root access to the server's filesystems. You can exercise
security-related caution with NFS by
following a few guidelines:
- Restrict its use to secure networks
- Export only the most needed data
- Consider using read-only exports whenever data updates aren't necessary.
- Use the root_squash option in /etc/exports file (default) to reduce the risk of the possibility of a root user on the NFS client having root file permission access on the NFS server. This is normally an undesirable condition, especially if the NFS client and NFS server are being managed by different sets of administrators.
- Make NFS run on predefined TCP and UDP ports as this makes the management of firewall rules much easier. This can be done by editing your /etc/sysconfig/nfs file. In this example we are using ports 4000 to 4003 for the lockd, mountd, statd and rquotad daemons that are a part of NFS. The other NFS components, rpcbind and nfs, always run on TCP/UDP ports 111 and 2049 respectively.
# File: /etc/sysconfig/nfs file
RQUOTAD_PORT=4003
LOCKD_TCPPORT=4001
LOCKD_UDPPORT=4001
MOUNTD_PORT=4002
STATD_PORT=4000
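With the daemons pinned to fixed ports, the firewall rules become straightforward. Here is a minimal, hypothetical iptables sketch permitting NFS traffic from the 192.168.1.0/24 network only; adapt it to your own chains and policies:
[root@bigboy tmp]# iptables -A INPUT -s 192.168.1.0/24 -p tcp -m multiport --dports 111,2049,4000:4003 -j ACCEPT
[root@bigboy tmp]# iptables -A INPUT -s 192.168.1.0/24 -p udp -m multiport --dports 111,2049,4000:4003 -j ACCEPT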
These points should be the foundation of your NFS
security policy; however, the list isn't comprehensive due to the concise scope
of this book. I suggest that you refer to a dedicated NFS
reference for more detailed advice.
NFS Hanging
As stated before, if the NFS server
fails, the NFS client waits indefinitely for
it to return. This also forces programs relying on the same client/server
relationship to wait indefinitely.
For this reason, use the soft option in the NFS
client's /etc/fstab file. This causes NFS to
report an I/O error to the calling program after a long timeout.
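A minimal sketch of such an entry, reusing the earlier example mount and adding the standard timeo and retrans options to bound the retries (the values shown are assumptions, not recommendations):
#/etc/fstab
#Directory Mount Point Type Options Dump FSCK
192.168.1.100:/data/files /mnt/nfs nfs soft,timeo=30,retrans=3 0 0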
You can reduce the risk of NFS hanging by
taking a number of precautions:
- Run NFS on a reliable network.
- Avoid having NFS servers that NFS mount each other's filesystems or directories.
- Always use the sync option whenever possible.
- Do not have mission-critical computers rely on an NFS server to operate, unless the server's reliability can be guaranteed.
- Do not include NFS-mounted directories as part of your search path, because a hung NFS connection to a directory in your search path could cause your shell to pause at that point in the search path until the NFS session is regained.
File Locking
NFS allows multiple clients to mount the
same directory, but NFS has a history of not
handling file locking well, although more recent versions are said to have
rectified the problem. Test your network-based applications thoroughly before
relying on NFS file locking.
Nesting Exports
NFS doesn't allow you to export directories
that are subdirectories of directories that have already been exported unless
they are on different partitions.
Limiting root Access
By default, NFS doesn't allow a root user on an NFS
client to have root privileges on the NFS
server. This restriction can be removed with the no_root_squash export option in the
/etc/exports file, but granting such access is normally undesirable, especially if the
NFS client and NFS
server are being managed by different sets of administrators.
Restricting Access to the NFS server
NFS doesn't provide restrictions on a
per-user basis. If a user named nfsuser exists on the NFS
client, then they will have access to all the files of a user named nfsuser on
the NFS server. It is best, therefore, to
use the /etc/exports file to limit access to certain trusted servers or
networks.
You may also want to use a firewall to protect access to the NFS
server. A main communication control channel is usually created between the
client and server on TCP port 111, but the
data is frequently transferred on a randomly chosen TCP
port negotiated between them. There are ways to limit the TCP
ports used, but that is beyond the scope of this book.
You may also want to eliminate any wireless networks between your NFS
server and client, and it is not wise to mount an NFS
share across the Internet as access could be either slow, intermittent or
insecure.
File Permissions
The NFS file permissions on the NFS
server are inherited by the client. This can become complicated, especially if the
users and user groups on the NFS client that
are expected to access data on the NFS
server don't exist on the NFS server.
For simplicity, make the key users and groups on both systems match and make
sure the permissions on the NFS client mount
point and the exported directories of the NFS
server are in keeping with your operational objectives.
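A quick way to confirm that a key user's UID and GID match on both systems is the standard id command; nfsuser is reused from the earlier df example:
[nfsuser@smallfry nfsuser]$ id nfsuser
[root@bigboy tmp]# id nfsuser
If the numeric uid and gid values in the two outputs differ, file ownership on the mounted directories will not map as expected.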
Conclusion
As you have seen, NFS can be a very
powerful tool in providing clients with access to large amounts of data, such
as a database stored on a centralized server. Many of the new network-attached
storage products currently available on the market rely on NFS
- a testament to its popularity, increasing stability, and improving security.