Daniel's Blog

Checking the NFS Server

There's an NFS server that was set up about 10 years ago; it's still running but hasn't been used in about 4 years. With a new server configuration being added, it seemed like a good time to test the old NFS server and see what state it's in.

Server Checks

How long has the server been up

uptime can be used to check how long the server has been up. This one has been up for almost five and a half years.

$ uptime
 22:17:43 up 1985 days, 16:41,  3 users,  load average: 0.00, 0.01, 0.05
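As a quick sanity check on that claim (not part of the original output), the 1985 days can be converted to years with a little awk:

```shell
# Convert the uptime shown above (1985 days) into years.
days=1985
years=$(awk -v d="$days" 'BEGIN { printf "%.1f", d / 365.25 }')
echo "up roughly $years years"
```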

Is NFS (Network File System) running on the server at all?

$ sudo /etc/init.d/nfs-common status
all daemons running
$ sudo /etc/init.d/nfs-kernel-server status
nfsd running
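On a newer box the same check would go through systemctl, and `rpcinfo -p` is another way to confirm nfsd is actually registered with rpcbind. The snippet below parses a captured `rpcinfo -p` sample (the sample lines are an assumption, not output from this server) to list the registered NFS versions:

```shell
# Sample `rpcinfo -p` lines (assumed, not captured from this server);
# program 100003 is NFS and the second field is the protocol version.
sample='100003  3  tcp  2049  nfs
100003  4  tcp  2049  nfs'
versions=$(echo "$sample" | awk '$5 == "nfs" { print "v" $2 }' | paste -sd, -)
echo "NFS versions registered: $versions"
```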

Checking NFS Version

$ sudo nfsstat --server


Server rpc stats:
calls      badcalls   badclnt    badauth    xdrcall
1290048097   150        0          150        0

Server nfs v2:
null         getattr      setattr      root         lookup       readlink
1       100% 0         0% 0         0% 0         0% 0         0% 0         0%
read         wrcache      write        create       remove       rename
0         0% 0         0% 0         0% 0         0% 0         0% 0         0%
link         symlink      mkdir        rmdir        readdir      fsstat
0         0% 0         0% 0         0% 0         0% 0         0% 0         0%

Server nfs v3:
null         getattr      setattr      lookup       access       readlink
987     100% 0         0% 0         0% 0         0% 0         0% 0         0%
read         write        create       mkdir        symlink      mknod
0         0% 0         0% 0         0% 0         0% 0         0% 0         0%
remove       rmdir        rename       link         readdir      readdirplus
0         0% 0         0% 0         0% 0         0% 0         0% 0         0%
fsstat       fsinfo       pathconf     commit
0         0% 0         0% 0         0% 0         0%

Server nfs v4:
null         compound
32        0% 1290082318 99%

Server nfs v4 operations:
op0-unused   op1-unused   op2-future   access       close        commit
0         0% 0         0% 0         0% 2404150506  7% 2281125070  7% 2893885   0%
create       delegpurge   delegreturn  getattr      getfh        link
40490998  0% 0         0% 440302    0% 610273256  2% 2395966168  7% 7632942   0%
lock         lockt        locku        lookup       lookup_root  nverify
747984524  2% 1         0% 2521471261  8% 2411562061  7% 0         0% 0         0%
open         openattr     open_conf    open_dgrd    putfh        putpubfh
2290625385  7% 0         0% 14284     0% 0         0% 3320675894 10% 0         0%
putrootfh    read         readdir      readlink     remove       rename
10        0% 3262373112 10% 76098643  0% 51527     0% 2315350546  7% 7890083   0%
renew        restorefh    savefh       secinfo      setattr      setcltid
6561857   0% 7632942   0% 15523025  0% 968       0% 128586915  0% 387       0%
setcltidconf verify       write        rellockowner bc_ctl       bind_conn
387       0% 0         0% 3200887008 10% 2272144638  7% 0         0% 0         0%
exchange_id  create_ses   destroy_ses  free_stateid getdirdeleg  getdevinfo
0         0% 0         0% 0         0% 0         0% 0         0% 0         0%
getdevlist   layoutcommit layoutget    layoutreturn secinfononam sequence
0         0% 0         0% 0         0% 0         0% 0         0% 0         0%
set_ssv      test_stateid want_deleg   destroy_clid reclaim_comp
0         0% 0         0% 0         0% 0         0% 0         0%

Essentially all the calls are NFSv4, so that's the version the clients have been using, which is good to know.
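That can be double-checked with a bit of arithmetic on the per-version counts from the nfsstat output above:

```shell
# v2 and v3 only ever saw null pings; work out v4's share of all calls.
share=$(awk 'BEGIN {
  v2 = 1                 # v2 null calls
  v3 = 987               # v3 null calls
  v4 = 32 + 1290082318   # v4 null + compound calls
  printf "%.4f", 100 * v4 / (v2 + v3 + v4)
}')
echo "v4 share: ${share}%"
```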

Check which directories are shared

$ sudo showmount --exports
Export list for BigAl:
/nfs server1,server2,server3

or check the config file

$ sudo cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
#
#/nfs           10.1.25.0/24(rw,fsid=0,insecure,no_subtree_check,async)

/nfs           server1(rw,fsid=0,insecure,no_subtree_check,async)
/nfs           server2(rw,nohide,insecure,no_subtree_check,async,no_root_squash)
/nfs           server3(ro,nohide=0,insecure,no_subtree_check,async)

So the /nfs directory is shared to three specific servers. Interesting that the exports list individual hosts rather than a subnet, as the commented-out line originally did. It could be because of the nohide option, which is only effective on single-host exports.
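For scripting against this, the comma-separated client list from `showmount --exports` can be split apart; a small sketch using the sample line from above:

```shell
# Split the client list from the showmount output into one host per line.
line='/nfs server1,server2,server3'
clients=$(echo "$line" | awk '{ gsub(",", "\n", $2); print $2 }')
echo "$clients"
```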

/nfs isn't a normal directory, so let's check what's mounted under it:

$ mount | grep /nfs
/dev/mapper/raidlvm-lv0 on /nfs/dir1 type ext4 (rw,relatime,user_xattr,barrier=1,stripe=52,data=ordered)
/dev/mapper/raidlvm-lv1 on /nfs/dir2 type ext4 (rw,relatime,user_xattr,barrier=1,stripe=52,data=ordered)
/dev/mapper/raidlvm-lv1 on /nfs/dir3 type ext4 (rw,relatime,user_xattr,barrier=1,stripe=52,data=ordered)
/dev/mapper/raidlvm-lv1 on /nfs/dir4 type ext4 (rw,relatime,user_xattr,barrier=1,stripe=52,data=ordered)

So there are two RAID logical volumes being shared out through four directories.
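Counting the distinct devices in that mount output confirms it (a small sketch run against the captured lines):

```shell
# Count distinct block devices behind the /nfs mount points, using the
# mount output captured above.
mounts='/dev/mapper/raidlvm-lv0 on /nfs/dir1
/dev/mapper/raidlvm-lv1 on /nfs/dir2
/dev/mapper/raidlvm-lv1 on /nfs/dir3
/dev/mapper/raidlvm-lv1 on /nfs/dir4'
devices=$(echo "$mounts" | awk '{ print $1 }' | sort -u)
echo "distinct devices: $(echo "$devices" | wc -l)"
```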

Reviewing the RAID health is a separate task, but if these were plain drives, a quick check of the drive health would be worthwhile.

Check the options being used

rw - The share is exported read-write (rather than ro for read-only).
fsid=0 - For NFSv4, there is a distinguished filesystem which is the root of all exported filesystems. This is specified with fsid=root or fsid=0, both of which mean exactly the same thing.
insecure - Allow connections from any port, not just ports below 1024. Ports below 1024 are normally restricted to the root user.
no_subtree_check - Disable checking that a requested file actually sits inside the exported subtree.
no_root_squash - Don't map requests from root to the 'nobody' user; root on the client gets root access to the share.
nohide - Normally, if a second filesystem is mounted under an exported directory (say /nfs/mydir1 under /nfs), a client that mounts /nfs won't see it. With nohide, that one mount shows both filesystems. The nohide option is currently only effective on single host exports; it does not work reliably with netgroup, subnet, or wildcard exports.
async - Reply to requests before changes are committed to disk. Faster, but could result in corruption or lost data if the server crashes.
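Given that warning about async, if crash safety matters more than speed, the export line could use sync instead. A sketch of what that might look like (an illustration reusing the hosts above, not the server's actual config):

```
# /etc/exports sketch: same share, but with sync so the server commits
# writes to disk before replying. Safer across a crash, slower overall.
/nfs    server1(rw,fsid=0,insecure,no_subtree_check,sync)
```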

Client Checks

There are three known clients: server1, server2, and server3.

Checking the fstab to see if the NFS share is mounted

$ cat /etc/fstab
# /etc/fstab: static file system information.
# <file system>                           <mount point>   <type>  <options>         <dump>  <pass>
UUID=a2944369-86c2-425a-96fd-c56ec64a5619 /               ext4    errors=remount-ro 0       1

nfsserver:/                               /nfs            nfs4    auto              0       0

These fstab entries were the same on all the machines, but some of them run older versions of Ubuntu, so mount shows different options on each:

$ mount | grep nfs
nfsserver:/ on /nfs type nfs4 (rw,addr=10.1.1.232,clientaddr=10.1.1.229)
$ mount | grep nfs
nfsserver:/ on /nfs type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.1.1.230,local_lock=none,addr=10.1.1.232)
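The negotiated protocol version can be pulled straight out of that longer mount line (a sketch against the captured output; on the older client there's no vers= field to match):

```shell
# Extract the vers= option from the newer client's mount line above.
line='nfsserver:/ on /nfs type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.1.1.230,local_lock=none,addr=10.1.1.232)'
vers=$(echo "$line" | grep -o 'vers=[0-9.]*')
echo "$vers"
```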