Quick HOWTO : Ch29 : Remote Disk Access with NFS/zh

Introduction

When you want to share disk space between Linux and Windows computers, Samba is usually the solution of choice. When disks need to be shared between Linux servers, the Network File System (NFS) protocol is called upon instead. The basic configuration is fairly simple, and this chapter covers the key steps.

NFS Operation Overview

Linux data storage disks contain files stored in filesystems with a standard directory structure. Additional disks are attached, or mounted, and their filesystems are grafted onto the filesystem that already exists on the computer; in effect, the newly mounted disk appears as a subdirectory of the filesystem it is mounted on. NFS lets a computer mount a remote computer's filesystem onto its local filesystem, so that the remote disk can be accessed as if it were local. To make this possible, the administrator of the NFS server has to specify which directories are activated, or exported, and the administrator of the NFS client has to specify both the NFS server and the subset of its exported directories to use.

General Rules For NFS

There are some general rules you need to follow when configuring NFS.

  1. Only export directories beneath the / directory.
  2. Do not export a subdirectory of a directory that has already been exported; the exception is when the subdirectory is on a different physical device. Likewise, do not export the parent of an exported subdirectory unless it is on a separate device.
  3. Only export local filesystems.

Remember: when you mount a new filesystem on a directory, any files and subdirectories that were already in that directory are hidden, and only the contents of the newly mounted filesystem appear there. When the new filesystem is unmounted, the original files and subdirectories reappear, unchanged.
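As a quick illustration of this masking behavior (the /mnt/data path and the spare partition /dev/sdb1 below are hypothetical):

# Hypothetical example: /mnt/data already contains files, /dev/sdb1 holds another filesystem
[root@bigboy tmp]# ls /mnt/data              # the original files are listed
[root@bigboy tmp]# mount /dev/sdb1 /mnt/data
[root@bigboy tmp]# ls /mnt/data              # only the newly mounted filesystem is visible
[root@bigboy tmp]# umount /mnt/data
[root@bigboy tmp]# ls /mnt/data              # the original files reappear unchanged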

Key NFS Concepts

Data access across a network always brings a variety of challenges, especially when the network is meant to be transparent to the user, as it is with NFS. Here are some key background concepts to help you get a fuller understanding of NFS.

VFS

The Virtual File System (VFS) interface is the mechanism NFS uses to transparently and automatically redirect access to NFS-mounted files to the remote server. Thanks to VFS, accessing a remote file is no different from accessing a local one.

VFS also translates these requests to match the format of the NFS server's disks. This means the NFS server does not even need to run the same operating system, and it may define its files and directories with different attributes.

Stateless Operation

Programs that read and write files on a local filesystem rely on the operating system to track, with a pointer, where in the file they are. Because NFS is a network-based filesystem and networks can be unreliable, the NFS client software acts as an intermediary between the programs running on the NFS client and the NFS server.

Normally, when a server fails, file accesses time out and the file pointers are reset to zero. With NFS, the server doesn't keep the file pointer information; the NFS client does. This means that if an NFS server suddenly fails, the NFS client can resume the file access exactly where it left off once the server comes back online.

Caching

NFS clients typically fetch more data than they actually need and cache it locally, so that further access to the data can be served locally rather than over the network. This is known as a read-ahead cache. Data destined for the NFS server is also cached and flushed to the server as the cache fills. Caching therefore reduces overall network traffic while speeding up some types of data access.

The NFS server caches information too, such as directory information for the most recently accessed files and a read-ahead cache for recently read files.

NFS and Symbolic Links

You have to be careful with symbolic links on exported NFS directories. If an absolute link points to a directory on the NFS server that has not been exported, the NFS client will not be able to access it.

Unlike absolute links, relative symbolic links are resolved relative to the client's filesystem. Consider an example in which the server's /data1 directory is mounted on the client's /data1 directory. If a link on the NFS server points to ../data2 but no corresponding /data2 directory exists on the client, an error occurs. Similarly, mounting a filesystem on a symbolic link actually mounts the filesystem on the link's target; take care that doing so doesn't hide the link's original directory. Plan carefully before you do this.
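A minimal sketch of the difference, using the hypothetical layout above where /data1 is exported, /data2 is not, and the file names are only illustrative:

# On the NFS server: /data1 is exported, /data2 is not
[root@bigboy tmp]# ln -s /data2/report.txt /data1/abs-link     # absolute symbolic link
[root@bigboy tmp]# ln -s ../data2/report.txt /data1/rel-link   # relative symbolic link

# On the NFS client, with the server's /data1 mounted on the client's /data1, both
# links are resolved against the client's own filesystem, so both accesses fail
# unless the client also has a matching /data2/report.txt of its own.
[root@smallfry tmp]# cat /data1/abs-link
[root@smallfry tmp]# cat /data1/rel-link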

NFS Background Mounting

NFS clients use the remote procedure call (RPC) suite of network helper applications to mount remote filesystems. If the mount cannot complete within the default RPC timeout, the client keeps retrying until the maximum number of attempts is exceeded. The default timeout is 10,000 minutes, which is roughly a week. The difficulty is that if the NFS server is unreachable, the mount command will hang for up to a week until the server comes back online. You can use the bg option to push the retries into the background so that the main mount command can continue processing other requests.
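A short sketch of what a background mount might look like, using the server and export introduced later in this chapter:

# Retry a failed mount in the background instead of blocking the mount command
[root@smallfry tmp]# mount -t nfs -o bg 192.168.1.100:/data/files /mnt/nfs

# The equivalent /etc/fstab entry
192.168.1.100:/data/files   /mnt/nfs   nfs   bg,soft,nfsvers=2   0   0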

Hard and Soft Mounting

A mount that keeps retrying, whether in the background or the foreground, is called a hard mount. NFS keeps retrying in order to preserve data integrity. With a soft mount, repeated RPC failures cause the NFS operation itself to fail, so data integrity is not guaranteed. The advantage of soft mounting is that the operation completes quickly whether it fails or not. The drawback is that choosing soft mounts implies you may be using an unreliable NFS server; if that is the case, it is best not to place critical data that needs regular updating on that server.
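For illustration, here is how the two styles might look on the mount command line; this is only a sketch, the /mnt/home mount point is hypothetical, and the timeo/retrans values are examples rather than recommendations (timeo is in tenths of a second):

# Hard mount (the default): retries indefinitely, but interruptible with CTRL-C thanks to intr
[root@smallfry tmp]# mount -t nfs -o hard,intr 192.168.1.100:/home /mnt/home

# Soft mount: gives up after the retransmission limit and returns an I/O error to the program
[root@smallfry tmp]# mount -t nfs -o soft,timeo=30,retrans=3 192.168.1.100:/home /mnt/home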

NFS Versions

Three versions of NFS are currently available: versions 2, 3, and 4. Version 1 was a prototype. This chapter focuses on version 2, which:

  • Supports files of up to 4GB in size
  • Requires the NFS server to write data successfully to disk before the write request is acknowledged as successful
  • Has an 8KB limit per read or write request

The main differences in version 3 are that it:

  • Supports extremely large file sizes of up to 2^64 - 1 bytes
  • Supports the NFS server data updates as being successful when the data is written to the server's cache
  • Negotiates the data limit per read or write request between the client and server to a mutually decided optimal value.

Version 4 maintains many of version 3's features, but with the additions that

  • File locking and mounting are integrated in the NFS daemon and operate on a single, well known TCP port, making network security easier
  • File locking is mandatory, whereas before it was optional
  • Support for the bundling of requests from each client provides more efficient processing by the NFS server.

It is important to match the versions of NFS running on clients and server to help ensure the necessary compatibility to get NFS to work predictably.

Important NFS Daemons

NFS isn't a single program but a suite of interrelated daemons that work together:

  • portmap: The primary daemon upon which all the others rely. portmap manages connections for applications that use RPC; it listens on TCP port 111 by default, which is where the initial connection is made, and then assigns TCP ports, usually above 1024, to carry the subsequent data. You must run portmap on both the NFS server and the NFS client.
  • nfs: Starts the RPC processes needed to serve shared NFS filesystems. You only need to run it on the NFS server.
  • nfslock: Allows NFS clients to lock files on the server via RPC processes. You must run it on both the NFS server and the NFS client.
  • netfs: Allows RPC processes running on NFS clients to mount NFS filesystems from the server. You only need to run it on the NFS client.

Now take a look at how to configure these daemons to create functional NFS client/server peering.

Installing NFS

RedHat Linux installs nfs by default, and also by default nfs is activated when the system boots. You can determine whether you have nfs installed using the RPM command in conjunction with the grep command to search for all installed nfs packages.

[root@bigboy tmp]# rpm -qa | grep nfs
redhat-config-nfs-1.1.3-1
nfs-utils-1.0.1-3.9
[root@bigboy tmp]#

A blank list means that you'll have to install the required packages.

You also need to have the RPC portmap package installed; the rpm command can tell you whether it's already on your system:

[root@bigboy tmp]# rpm -q portmap
portmap-4.0-57
[root@bigboy tmp]#

A blank list means that you'll have to install the required packages.

If nfs and portmap are not installed, they can be added fairly easily once you find the nfs-utils and portmap RPMs. (If you need a refresher, see Chapter 6, "Installing Linux Software".) Remember that RPM filenames usually start with the software's name and a version number, as in nfs-utils-1.1.3-1.i386.rpm.
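For example, once you have located the RPM files, installation is a single command; the exact filenames below are only placeholders for whatever versions your distribution ships:

# Install both packages from RPM files in the current directory
[root@bigboy tmp]# rpm -Uvh portmap-4.0-57.i386.rpm nfs-utils-1.0.1-3.9.i386.rpm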

Scenario

A small office has an old Linux server that is running out of disk space. The office cannot tolerate any down time, even after hours, because the server is accessed by overseas programmers and clients at nights and local ones by day.

Budgets are tight and the company needs a quick solution until it can get a purchase order approved for a hardware upgrade. Another Linux server on the network has additional disk capacity in its /data partition and the office would like to expand into it as an interim expansion NFS server.

Configuring NFS on The Server

Both the NFS server and NFS client have to have parts of the NFS package installed and running. The server needs portmap, nfs, and nfslock operational, as well as a correctly configured /etc/exports file. Here's how to do it.

The /etc/exports File

The /etc/exports file is the main NFS configuration file, and it consists of two columns. The first column lists the directories you want to make available to the network. The second column has two parts. The first part lists the networks or DNS domains that can get access to the directory, and the second part lists NFS options in brackets.

For the scenario you need:

  • Read-only access to the /data/files directory to all networks
  • Read/write access to the /home directory from all servers on the 192.168.1.0 /24 network, which is all addresses from 192.168.1.0 to 192.168.1.255
  • Read/write access to the /data/test directory from servers in the my-site.com DNS domain
  • Read/write access to the /data/database directory from a single server 192.168.1.203.

In all cases, use the sync option to ensure that file data cached in memory is automatically written to the disk after the completion of any disk data copying operation.

#/etc/exports
/data/files           *(ro,sync)
/home                 192.168.1.0/24(rw,sync)
/data/test            *.my-site.com(rw,sync)
/data/database        192.168.1.203/32(rw,sync)

After configuring your /etc/exports file, you need to activate the settings, but first make sure that NFS is running correctly.

Starting NFS on the Server

Configuring an NFS server is straightforward:

1) Use the chkconfig command to configure the required nfs and RPC portmap daemons to start at boot. You also should activate NFS file locking to reduce the risk of corrupted data.

[root@bigboy tmp]# chkconfig --level 35 nfs on
[root@bigboy tmp]# chkconfig --level 35 nfslock on 
[root@bigboy tmp]# chkconfig --level 35 portmap on

2) Use the init scripts in the /etc/init.d directory to start the nfs and RPC portmap daemons. The examples use the start option, but when needed, you can also stop and restart the processes with the stop and restart options.

[root@bigboy tmp]# service portmap start
[root@bigboy tmp]# service nfs start
[root@bigboy tmp]# service nfslock start

3) Test whether NFS is running correctly with the rpcinfo command. You should get a listing of running RPC programs that must include mountd, portmapper, nfs, and nlockmgr.

[root@bigboy tmp]# rpcinfo -p localhost
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   udp   2049  nfs
    100003    3   udp   2049  nfs
    100021    1   udp   1024  nlockmgr
    100021    3   udp   1024  nlockmgr
    100021    4   udp   1024  nlockmgr
    100005    1   udp   1042  mountd
    100005    1   tcp   2342  mountd
    100005    2   udp   1042  mountd
    100005    2   tcp   2342  mountd
    100005    3   udp   1042  mountd
    100005    3   tcp   2342  mountd
[root@bigboy tmp]#

Configuring NFS on The Client

NFS configuration on the client requires you to start the NFS application, create a directory on which to mount the NFS server's directories that you exported via the /etc/exports file, and finally mount the NFS server's directory on your local directory, or mount point. Here's how to do it all.

Starting NFS on the Client

Three more steps easily configure NFS on the client.

1) Use the chkconfig command to configure the required netfs and RPC portmap daemons to start at boot. Activate nfslock to lock the files and reduce the risk of corrupted data.

[root@smallfry tmp]# chkconfig --level 35 netfs on
[root@smallfry tmp]# chkconfig --level 35 nfslock on
[root@smallfry tmp]# chkconfig --level 35 portmap on

2) Use the init scripts in the /etc/init.d directory to start the netfs and RPC portmap daemons. As on the server, the examples use the start option, but you can also stop and restart the processes with the stop and restart options.

[root@smallfry tmp]# service portmap start
[root@smallfry tmp]# service netfs start
[root@smallfry tmp]# service nfslock start

3) Test whether NFS is running correctly with the rpcinfo command. The listing of running RPC programs you get must include status, portmapper, and nlockmgr.

[root@smallfry root]# rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp  32768  status
    100024    1   tcp  32768  status
    100021    1   udp  32769  nlockmgr
    100021    3   udp  32769  nlockmgr
    100021    4   udp  32769  nlockmgr
    100021    1   tcp  32769  nlockmgr
    100021    3   tcp  32769  nlockmgr
    100021    4   tcp  32769  nlockmgr
    391002    2   tcp  32770  sgi_fam
[root@smallfry root]#

NFS And DNS

The NFS client must have a matching pair of forward and reverse DNS entries on the DNS server used by the NFS server. In other words, a DNS lookup on the NFS server for the IP address of the NFS client must return a server name that will map back to the original IP address when a DNS lookup is done on that same server name.

[root@bigboy tmp]# host 192.168.1.102
102.1.168.192.in-addr.arpa domain name pointer 192-168-1-102.my-site.com.
[root@bigboy tmp]# host 192-168-1-102.my-site.com
192-168-1-102.my-site.com has address 192.168.1.102
[root@bigboy tmp]#

This is a security precaution added to the nfs package that lessens the likelihood of unauthorized servers gaining access to files on the NFS server. Failure to correctly register your server IPs in DNS can result in "fake hostname" errors:

Nov  7 19:14:40 bigboy rpc.mountd: Fake hostname smallfry.my-site.com for 192.168.1.1 - forward lookup doesn't exist
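As a sketch of what the fix looks like, here are matching forward and reverse records for the client used in the host lookups above, written as BIND-style zone file fragments; the zone layout is an assumption based on the my-site.com domain used throughout this chapter:

; Fragment of the forward zone file for my-site.com
192-168-1-102        IN    A      192.168.1.102

; Fragment of the reverse zone file for 1.168.192.in-addr.arpa
102                  IN    PTR    192-168-1-102.my-site.com.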

Making NFS Mounting Permanent

In most cases, users want their NFS directories to be permanently mounted. This requires an entry in the /etc/fstab file in addition to the creation of the mount point directory.

The /etc/fstab File

The /etc/fstab file lists all the partitions that need to be auto-mounted when the system boots. Therefore, you need to edit the /etc/fstab file if you need the NFS directory to be made permanently available to users on the NFS client. For the example, mount the /data/files directory on server bigboy (IP address 192.168.1.100) as an NFS-type filesystem using the local /mnt/nfs mount point directory.

#/etc/fstab
#Directory                   Mount Point    Type   Options       Dump   FSCK
192.168.1.100:/data/files   /mnt/nfs        nfs    soft,nfsvers=2  0      0

This example used the soft and nfsvers options; Table 29.1 outlines these and other useful NFS mounting options you may want to use. See the NFS man pages for more details.

Table 29.1 Possible NFS Mount Options

Option

Description

bg

Retry mounting in the background if mounting initially fails

fg

Mount in the foreground

soft

Use soft mounting

hard

Use hard mounting

rsize=n

The amount of data NFS will attempt to access per read operation. The default is dependent on the kernel. For NFS version 2, set it to 8192 to assure maximum throughput.

wsize=n

The amount of data NFS will attempt to access per write operation. The default is dependent on the kernel. For NFS version 2, set it to 8192 to assure maximum throughput.

nfsvers=n

The version of NFS the mount command should attempt to use

tcp

Attempt to mount the filesystem using TCP packets: the default is UDP.

intr

If the filesystem is hard mounted and the mount times out, allow for the process to be aborted using the usual methods such as CTRL-C and the kill command.

The steps to mount the directory are fairly simple, as you'll see.

Permanently Mounting The NFS Directory

You'll now create a mount point directory, /mnt/nfs, on which to mount the remote NFS directory, and then use the mount -a command to activate the mount. Notice how before mounting there were no files visible in the /mnt/nfs directory; this changes after the mounting is completed:

[root@smallfry tmp]# mkdir /mnt/nfs
[root@smallfry tmp]# ls /mnt/nfs
[root@smallfry tmp]# mount -a
[root@smallfry tmp]# ls /mnt/nfs
ISO  ISO-RedHat  kickstart  RedHat
[root@smallfry tmp]#

Each time your system boots, it reads the /etc/fstab file and executes the mount -a command, thereby making this a permanent NFS mount.

Note: There are multiple versions of NFS, the most popular of which is version 2, which most NFS clients use. Newer NFS servers may also be able to handle NFS version 4. To be safe, it is best to force the mount to use NFS version 2 with the nfsvers=2 option in the /etc/fstab file, as shown in the example. Failure to do so may result in an error message.

[root@probe-001 tmp]# mount -a
mount to NFS server '192.168.1.100' failed: server is down.
[root@probe-001 tmp]#

Manually Mounting NFS File Systems

If you don't want a permanent NFS mount, then you can use the mount command without the /etc/fstab entry to gain access only when necessary. This is a manual process; for an automated process, see the section "The NFS Automounter."

In this case, you're mounting the /data/files directory as an NFS-type filesystem on the /mnt/nfs mount point. The NFS server is bigboy whose IP address is 192.168.1.100.

Notice how before mounting there were no files visible in the /mnt/nfs directory; this changes after the mounting is complete:

[root@smallfry tmp]# mkdir /mnt/nfs
[root@smallfry tmp]# ls /mnt/nfs
[root@smallfry tmp]# mount -t nfs 192.168.1.100:/data/files /mnt/nfs
[root@smallfry tmp]# ls /mnt/nfs
ISO  ISO-RedHat  kickstart  RedHat
[root@smallfry tmp]#

Congratulations! You've made your first steps towards being an NFS administrator.

Activating Modifications To The /etc/exports File

You can force your system to re-read the /etc/exports file by restarting NFS. In a production environment, however, this can cause disruptions when an exported directory suddenly disappears without prior notification to users. Here are some methods you can use to update and activate the file with the least amount of inconvenience to others.

New Exports File

When no directories have yet been exported to NFS, use the exportfs -a command.

[root@bigboy tmp]# exportfs -a

Adding A Shared Directory To An Existing Exports File

When adding a shared directory, you can use the exportfs -r command to export only the new entries.

[root@bigboy tmp]# exportfs -r

Deleting, Moving Or Modifying A Share

Removing an exported directory from the /etc/exports file requires work on both the NFS client and server. The steps are:

1) Unmount the mount point directory on the NFS client using the umount command. In this case, you're unmounting the /mnt/nfs mount point.

[root@smallfry tmp]# umount /mnt/nfs

Note: You may also need to edit the /etc/fstab file to remove any entries related to the mount point if you want to make the change permanent even after rebooting.

2) Comment out the corresponding entry in the NFS server's /etc/exports file and reload the modified file.

[root@bigboy tmp]# exportfs -ua
[root@bigboy tmp]# exportfs -a

You have now completed a seamless removal of the exported directory with much less chance of having critical errors.

The NFS Automounter

The permanent mounting of filesystems has its disadvantages. For example, the /etc/fstab file is unique per Linux server and has to be individually edited on each. NFS client management, therefore, becomes more difficult. Also, the mount is permanent, tying up system resources even when the NFS server isn't being accessed.

NFS offers an automounter feature that overcomes these shortcomings by allowing you to bypass the /etc/fstab file for NFS mounts, instead using an NFS-specific map file that can be distributed to multiple clients. In addition, you can use the file to specify the expected duration of the NFS mount, after which time it is unmounted automatically. However, automounter continues to report to the operating system kernel that the mount is still active. When the kernel makes an NFS file request, automounter intercepts it and mounts the remote directory on the mount point defined in the map file. The mount point directory is dynamically created by the automounter when needed; after the timeout period, the remote directory is unmounted and the mount point is deleted.

Automounter Map Files

The master map file of automounter has a simple format that defines the name of the mount point directory in the first column and the subsidiary map file that controls its mounting in the second. You can add mounting options to a third column.

In the example, the /home directory needs to be automounted from an NFS server, and the configuration information is defined in the /etc/auto.home file. Finally, the mount will only last for five minutes (300 seconds), and this value will act as a default for all the entries in the /etc/auto.home file.

Irregular entries that don't match /home are placed in the /etc/auto.direct file.

#
# File: /etc/auto.master
#
/home   /etc/auto.home --timeout=300
/-      /etc/auto.direct

Direct Maps

Direct maps are used to define NFS filesystems that are mounted from different servers or that don't all start with the same directory prefix.

Indirect Maps

Indirect maps define directories that can be mounted under the same mount point. A good example would be all the users' directories under /home.

Note: Based on preliminary testing, an early release of Fedora Core 3 doesn't appear to work correctly with automounter. You have to have one indirect map defined to avoid startup errors, and after doing so, the maps don't appear to be activated. No errors occur in the logs either.

The Structure Of Direct And Indirect Map Files

The format of these map files is similar to that of the /etc/auto.master file, except that columns two and three have been switched.

Column one lists all the directory keys that will activate the automounter feature. It is also the name of the mount point under the directory listed in the /etc/auto.master file. The second column provides all the NFS options to be used, and the third column lists the NFS servers and the filesystems that map to the keys.

When the NFS client accesses a file, it refers to the keys in the /etc/auto.master file to see whether any fall within the realm of the automounter's responsibility. If one does, then automounter checks the subsidiary map file for a matching subdirectory mount point key. If it finds one, automounter mounts the corresponding directory for the client.

Indirect Map File Example

In the previous example, the /etc/auto.master file redirected all references to the /home directory to the /etc/auto.home file. This second file has entries for peter, bob, and bunny; these directories are actually mount points for directories on servers bigboy, ochorios, and waitabit.

#
# File: /etc/auto.home
#
peter   bigboy:/home/peter
bob     ochorios:/home/bob
bunny   waitabit:/home/bunny
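With this map in place, simply referencing a key's path is enough to trigger the mount. A quick way to see the effect, sketched here for the peter key and run on the machine where automounter is configured (shown as smallfry, an assumption):

# Accessing the key's path makes automounter mount bigboy:/home/peter on demand
[root@smallfry tmp]# ls /home/peter

# The mount appears in the kernel's mount table until the timeout expires
[root@smallfry tmp]# mount | grep /home/peter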

Direct Map File Example

The second entry in the /etc/auto.master file was specifically created to handle all references to one-of-a-kind directory prefixes. In the example, the /data/sales and /sql/database directories are the mount points for directories on servers bigboy and waitabit.

#
# File: /etc/auto.direct
#
/data/sales          -rw           bigboy:/disk1/data/sales
/sql/database        -ro,soft       waitabit:/var/mysql/database

Note: The automounter treats direct mounts as if they were files in a directory, not as individual directories. This means all direct mount points in the same directory are mounted simultaneously, even if only one of them is being accessed. This can cause excessive mounting activity that slows response times. There are tricks you can use to avoid this; perhaps the simplest is just to place direct mount points in different directories.

Note: Direct map entries in the /etc/auto.master file must all begin with /-, and the keys in direct map files must be absolute path names; if they aren't, you'll get an error like this in your /var/log/messages file:

Nov  7 19:24:12 smallfry automount[31801]: bad map format: found indirect, expected direct exiting

Wildcards In Map Files

You can use two types of wildcards in a map file. The asterisk (*), which means all, and the ampersand (&), which instructs automounter to substitute the value of the key for the & character.

Using the Ampersand Wildcard

In the example below, the key is peter, so the ampersand wildcard is interpreted to mean peter too. This means you'll be mounting the bigboy:/home/peter directory.

#
# File: /etc/auto.home
#
peter   bigboy:/home/&

Using the Asterisk Wildcard

In the example below, the key is *, meaning that automounter will attempt to mount any attempt to enter the /home directory. But what's the value of the ampersand? It is actually assigned the value of the key that triggered the access to the /etc/auto.home file. If the access was for /home/peter, then the ampersand is interpreted to mean peter, and bigboy:/home/peter is mounted. If access was for /home/bob, then bigboy:/home/bob would be mounted.

#
# File: /etc/auto.home
#
*   bigboy:/home/&

Starting Automounter

Fedora Linux installs the automounter RPM, called autofs, by default. Here are some quick steps to get automounter started.

1) Use the chkconfig command to configure the automounter daemons to start at boot. Activate NFS file locking to reduce the risk of corrupted data.

[root@bigboy tmp]# chkconfig autofs on

2) Use the init scripts in the /etc/init.d directory to start the automounter daemons. The example uses the start option, but you can also stop and restart the process with the stop and restart options.

[root@bigboy tmp]# service autofs start

3) Use the pgrep command to determine whether automounter is running. If it is, the command will return the process ID of the automount daemon.

[root@bigboy tmp]# pgrep automount
32261
[root@bigboy tmp]#

As you can see, managing the startup of automounter is very similar to that of other Linux applications and should be easy to remember.

Automounter Examples

Now that you understand NFS automounter, you may benefit from an example. Chapter 30, "Configuring NIS", contains a full scenario in which a school computer laboratory uses automounter to centrally house all the home directories of its students. Additional centralization is also achieved by using NIS for login authentication, access, and accounting control.

Troubleshooting NFS

A basic NFS configuration usually works without problems when the client and server are on the same network. The most common problems are caused by forgetting to start NFS, forgetting to edit the /etc/fstab file, or forgetting to export the filesystems listed in the /etc/exports file. Another common cause of failure is the iptables firewall daemon running on either the server or the client without the administrator realizing it.

When the client and server are on different networks, these checks still apply, but you'll also have to make sure basic connectivity has been taken care of as outlined in Chapter 4, "Simple Network Troubleshooting". Sometimes a firewall being present on the path between the client and server can cause difficulties.
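A few quick checks cover most of these causes. The commands below are a sketch run from the client against the server used in this chapter (192.168.1.100); run the iptables check on both machines:

# Is an iptables firewall active?
[root@smallfry tmp]# service iptables status

# Can the client reach the server's portmapper and see its RPC programs?
[root@smallfry tmp]# rpcinfo -p 192.168.1.100

# What is the server actually exporting?
[root@smallfry tmp]# showmount -e 192.168.1.100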

As always, no troubleshooting plan would be complete without frequent reference to the /var/log/messages file when searching for additional clues. Table 29.2 shows some common NFS errors you'll encounter.

Table 29.2 Some Common NFS Error Messages

Error Message

Description

Too many levels of remote in path

Attempting to mount a filesystem that has already been mounted.

Permission denied

User is denied access. This could be the client's root user who has unprivileged status on the server due to the root_squash option. Could also be because the user on the client doesn't exist on the server.

No such host

Typographical error in the name of the server.

No such file or directory

Typographical error in the name of the file or directory: they don't exist.

NFS server is not responding

The server could be overloaded or down.

Stale file handle

A file that was previously accessed by the client was deleted on the server before the client closed it.

Fake hostname

Forward and reverse DNS entries don't exist for the NFS client.

The showmount Command

When run on the server, the showmount -a command lists all the currently exported directories. It also shows a list of NFS clients accessing the server, in this case one client has an IP address of 192.168.1.102.

[root@bigboy tmp]# showmount -a
All mount points on bigboy:
*:/home
192.168.1.102:*
[root@bigboy tmp]#

The "df" Command

The df command lists the disk usage of a mounted filesystem. Run it on the NFS client to verify that NFS mounting has occurred. In many cases, the root_squash mount option will prevent the root user from doing this, so it's best to try it as an unprivileged user.

[nfsuser@smallfry nfsuser]$ df -t nfs
Filesystem           1K-blocks      Used Available Use% Mounted on
192.168.1.100:/home/nfsuser
                       1032056    346552    633068  36% /home/nfsuser
[nfsuser@smallfry nfsuser]$

The nfsstat Command

The nfsstat command provides useful error statistics. The -s option provides NFS server stats, while the -c option provides them for clients. Threshold guidelines are provided in Table 29.3.

[root@bigboy tmp]# nfsstat -s
Server rpc stats:
calls      badcalls   badauth    badclnt    xdrcall
1547       0          0          0          0
Server nfs v2:
null       getattr    setattr    root       lookup     readlink
244    100% 0       0% 0       0% 0       0% 0       0% 0       0%
read       wrcache    write      create     remove     rename
0       0% 0       0% 0       0% 0       0% 0       0% 0       0%
link       symlink    mkdir      rmdir      readdir    fsstat
0       0% 0       0% 0       0% 0       0% 0       0% 0       0%
 
Server nfs v3:
null       getattr    setattr    lookup     access     readlink
251    19% 332    25% 0       0% 265    20% 320    24% 0       0%
read       write      create     mkdir      symlink    mknod
39      2% 14      1% 1       0% 1       0% 0       0% 0       0%
remove     rmdir      rename     link       readdir     readdirplus
0       0% 0       0% 0       0% 0       0% 0       0% 31       2%
fsstat     fsinfo     pathconf   commit
1       0% 34      2% 0       0% 14      1%
 
[root@bigboy tmp]#

Table 29.3 Error Thresholds For The "nfsstat" Command

Value

Threshold

Description

readlink

> 10%

Excessive numbers of symbolic links slowing performance. Try to replace them with a directory and mount the filesystem directly on this new mount point.

getattr

> 50%

File attributes, like file data, are cached in NFS. This value tracks the percentage of file attribute reads that are not from cache refresh requests. Usually caused by the NFS "noac" mount option, which prevents file attribute caching.

badcalls

> 0

Bad RPC requests. Could be due to poorly configured authentication, the root user attempting to access data governed by the "root_squash" directive or having a user in too many groups.

retrans

> 5%

Percentage of requests for service that the client had to retransmit to the servers. Could be due to slow NFS servers or poor network conditions.

writes

> 10%

Writes are slow due to poor caching values. Check the "noac" and "wsize" mount options.

read

> 10%

Reads are slow due to poor caching values. Check the "noac" and "rsize" mount options.

Other NFS Considerations

NFS can be temperamental. An incorrect configuration can cause it to be unresponsive. Its security is relatively weak, and you have to be aware of the file permissions on both the NFS client and server to get it to work correctly. Often these issues can be resolved with some basic guidelines outlined in this section.

Security

NFS and portmap have had a number of known security deficiencies in the past. As a result, I don't recommend using NFS over insecure networks. NFS doesn't encrypt data, and it is possible for root users on NFS clients to gain root access to the server's filesystems. You can exercise security-related caution with NFS by following a few guidelines:

  • Restrict its use to secure networks
  • Export only the most needed data
  • Consider using read-only exports whenever data updates aren't necessary.
  • Use the root_squash option in the /etc/exports file (the default) to reduce the risk of a root user on the NFS client having root file permission access on the NFS server. That is normally an undesirable condition, especially if the NFS client and NFS server are being managed by different sets of administrators.

These points should be the foundation of your NFS security policy; however, the list isn't comprehensive due to the concise scope of this book. I'd suggest that you refer to a dedicated NFS reference for more detailed advice.

NFS Hanging

As stated before, if the NFS server fails, the NFS client waits indefinitely for it to return. This also forces any programs relying on that client-server relationship to wait indefinitely.

For this reason, use the soft option in the NFS client's /etc/fstab file. This causes NFS to report an I/O error to the calling program after a long timeout.

You can reduce the risk of NFS hanging by taking a number of precautions:

  • Run NFS on a reliable network.
  • Avoid having NFS servers that NFS mount each other's filesystems or directories.
  • Always use the sync option whenever possible.
  • Do not have mission-critical computers rely on an NFS server to operate, unless the server's reliability can be guaranteed.
  • Do not include NFS-mounted directories as part of your search path, because a hung NFS connection to a directory in your search path could cause your shell to pause at that point in the search path until the NFS session is regained.

File Locking

NFS allows multiple clients to mount the same directory, but NFS has a history of not handling file locking well, although more recent versions are said to have rectified the problem. Test your network-based applications thoroughly before considering using NFS.

Nesting Exports

NFS doesn't allow you to export directories that are subdirectories of directories that have already been exported unless they are on different partitions.

Limiting root Access

NFS doesn't allow a root user on an NFS client to have root privileges on the NFS server. This restriction can be disabled with the no_root_squash export option in the /etc/exports file, but granting such access is normally undesirable, especially if the NFS client and NFS server are being managed by different sets of administrators.

Restricting Access to the NFS server

NFS doesn't provide restrictions on a per-user basis. If a user named nfsuser exists on the NFS client, then they will have access to all the files of a user named nfsuser on the NFS server. It is best, therefore, to use the /etc/exports file to limit access to certain trusted servers or networks.

You may also want to use a firewall to protect access to the NFS server. A main communication control channel is usually created between the client and server on TCP port 111, but the data is frequently transferred on a randomly chosen TCP port negotiated between them. There are ways to limit the TCP ports used, but that is beyond the scope of this book.
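As a minimal sketch, assuming iptables on the NFS server and the trusted 192.168.1.0/24 network used in this chapter, the portmapper's control channel could be restricted like this; a complete ruleset would also have to pin down and allow the negotiated NFS data ports:

# Allow portmapper (TCP/UDP port 111) only from the trusted LAN and drop everything else
[root@bigboy tmp]# iptables -A INPUT -p tcp --dport 111 -s 192.168.1.0/24 -j ACCEPT
[root@bigboy tmp]# iptables -A INPUT -p udp --dport 111 -s 192.168.1.0/24 -j ACCEPT
[root@bigboy tmp]# iptables -A INPUT -p tcp --dport 111 -j DROP
[root@bigboy tmp]# iptables -A INPUT -p udp --dport 111 -j DROP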

You may also want to eliminate any wireless networks between your NFS server and client, and it is not wise to mount an NFS share across the Internet as access could be either slow, intermittent or insecure.

File Permissions

The NFS file permissions on the NFS server are inherited by the client. Things can become complicated, especially if the users and user groups on the NFS client that are expected to access data on the NFS server don't exist on the NFS server.

For simplicity, make the key users and groups on both systems match, and make sure the permissions on the NFS client mount point and the exported directories of the NFS server are in keeping with your operational objectives.
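For example, here is a sketch of keeping a user's numeric IDs identical on both machines; the group name and the ID value 600 are only illustrative, and nfsuser is the account used elsewhere in this chapter:

# Run the same commands on both the NFS server and the NFS client
groupadd -g 600 nfsgroup
useradd -u 600 -g nfsgroup nfsuser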

Conclusion

As you have seen, NFS can be a very powerful tool for providing clients with access to large amounts of data, such as a database stored on a centralized server. Many of the new network-attached storage products currently available on the market rely on NFS - a testament to its popularity, increasing stability, and improving security.