NFS

"Network File System (NFS) is a network file system protocol originally developed by Sun Microsystems in 1983, allowing a user on a client computer to access files over a network as easily as if the network devices were attached to its local disks. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. The Network File System is an open standard defined in RFCs, allowing anyone to implement the protocol." 

"A network file system is any computer file system that supports sharing of files, printers and other resources as persistent storage over a computer network. The first file servers were developed in the 1970s, and in 1985 Sun Microsystems created the file system called "Network File System" (NFS) which became the first widely used network file system. Other notable network file systems are Andrew File System (AFS), NetWare Core Protocol (NCP), and Server Message Block (SMB) which is also known as Common Internet File System (CIFS)." 

READ THIS
@2011.06

http://www.troubleshooters.com/linux/nfs.htm

good info about the options

NFS Server
CentOS 5.2 Installation: yum install nfs-utils portmap

Start services:
service portmap start
service nfs start

Re-export all exports:
/usr/sbin/exportfs -r

GUI NFS configuration:
yum install system-config-nfs
system-config-nfs

Firewall note: NFS itself operates on a fixed port (2049, TCP and UDP), but several RPC ports are needed as well. The high ports below (784, 787, 652, 655) are assigned dynamically by the portmapper and change between boots, so a static ruleset like this only works if the ports are pinned in /etc/sysconfig/nfs. CentOS 5.2 rules:

-A RH-Firewall-1-INPUT -m tcp -p tcp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -m udp -p udp --dport 2049 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 784 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 787 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 652 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 655 -j ACCEPT

/etc/exports syntax:
[FOLDER]	[IP_ADDR]([PERMISSIONS]) [IP_ADDR2]([PERMISSIONS2])

Multiple clients:
/data		10.10.10.1(ro) 10.10.10.5(rw)

Wildcard options:
/data		10.10.10.*(ro)
/data		10.10.10.0/24(ro)
/data		10.10.10.0/255.255.255.0(ro)

Permissions:
/data		10.10.10.1(ro)
/data		10.10.10.1(rw)
/data		10.10.10.1(rw,no_root_squash,no_all_squash,sync)

Options (FreeBSD syntax):
/cdrom -ro	host1 host2 host3
/home -alldirs	10.0.0.2 10.0.0.3 10.0.0.4
/a -maproot=root	host.example.com box.example.org

For all users (even root) to appear as a specified user:
/iso	*(rw,sync,all_squash,anonuid=500,anongid=500)
/pub	*(rw,sync,all_squash,anonuid=500,anongid=500)

Show export list:
exportfs
exportfs -v
showmount -e
showmount -e localhost

Show clients:
showmount
showmount --all   # with mount point

Note: this is read from '/var/lib/nfs/rmtab', which does not appear to get cleaned out automatically. Truncate it before starting NFS to clear the list.
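The truncation that note describes can be sketched as follows. The path here is a scratch stand-in so the snippet is safe to run anywhere; on a real server you would stop nfs, truncate /var/lib/nfs/rmtab itself, then start nfs again:

```shell
# Demonstrate the truncate-in-place idiom on a scratch file standing in
# for /var/lib/nfs/rmtab (do NOT point this at the real file while nfs runs).
RMTAB=/tmp/rmtab.demo
printf 'oldclient.example.com:/data:0x00000001\n' > "$RMTAB"  # stale entry
: > "$RMTAB"        # truncate to zero length, preserving ownership/perms
wc -c < "$RMTAB"    # prints 0
```

Using `: > file` rather than `rm` keeps the file's ownership and permissions, which matters for files a daemon expects to find in place.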

Show statistics: nfsstat

NFS Client
CentOS 5.2 Installation: yum install nfs-utils portmap

Start services:
service portmap start
service nfslock start   # for NFS locking issues
chkconfig portmap on
chkconfig nfslock on

Show RPC info and versions for server:
rpcinfo -p [SERVER_IP]
rpcinfo -p 10.10.10.3

program vers proto  port
100003    2   tcp   2049  nfs
100003    3   tcp   2049  nfs
100003    4   tcp   2049  nfs

Show export list: showmount -e [SERVER_IP]

Mount NFS share: mount [SERVER]:[PATH] [MOUNTPOINT]
mount -t nfs 10.10.10.3:/data /mnt/data
mount 10.10.10.3:/data /mnt/data

/etc/fstab examples:
192.168.0.1:/home  /home   nfs  rw,bg,hard,intr,tcp,vers=3,wsize=4096,rsize=4096  0 0
newpu:/home/esx    /newpu  nfs  defaults       0 0
newpu:/home/esx    /newpu  nfs  defaults,auto  0 0
newpu:/home/esx    /newpu  nfs  _netdev,auto   0 0

NOTE: If you do not start portmap, you will be unable to connect to the NFS server.

NOTE: If you do not start nfslock you may get these errors when creating files:
Apr 30 16:36:15 kmg kernel: lockd: cannot monitor 10.50.43.186
Apr 30 16:36:15 kmg kernel: lockd: failed to monitor 10.50.43.186

NFS v3 Through Firewall
rpcinfo -p | grep nfs
rpcinfo -p

nfs listens on 2049, udp and tcp.

Port 111 (TCP and UDP) and 2049 (TCP and UDP) for the NFS server. There are also ports for Cluster and client status (Port 1110 TCP for the former, and 1110 UDP for the latter) as well as a port for the NFS lock manager (Port 4045 TCP and UDP). Only you can determine which ports you need to allow depending on which services are needed cross-gateway.

MOUNTD_PORT=port: controls which TCP and UDP port mountd (rpc.mountd) uses.
STATD_PORT=port: controls which TCP and UDP port status (rpc.statd) uses.
LOCKD_TCPPORT=port: controls which TCP port nlockmgr (lockd) uses.
LOCKD_UDPPORT=port: controls which UDP port nlockmgr (lockd) uses.
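Those variables are set in /etc/sysconfig/nfs; a sketch of a pinned configuration follows. The port numbers below are arbitrary example choices, not defaults, so pick ports that are free on your own system:

```
# /etc/sysconfig/nfs -- example static port assignments (illustrative values)
MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
```

After editing, restart nfs (and nfslock) and confirm with rpcinfo -p that mountd, status, and nlockmgr now sit on the pinned ports.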

Configure a firewall to allow NFS:
Allow TCP and UDP port 2049 for NFS.
Allow TCP and UDP port 111 (rpcbind/sunrpc).
Allow the TCP and UDP port specified with MOUNTD_PORT="port".
Allow the TCP and UDP port specified with STATD_PORT="port".
Allow the TCP port specified with LOCKD_TCPPORT="port".
Allow the UDP port specified with LOCKD_UDPPORT="port".

sunrpc         111/tcp   rpcbind  # SUN Remote Procedure Call
sunrpc         111/udp   rpcbind  # SUN Remote Procedure Call
nfsd-status    1110/tcp           # Cluster status info
nfsd-keepalive 1110/udp           # Client status info
nfsd           2049/tcp  nfs      # NFS server daemon
nfsd           2049/udp  nfs      # NFS server daemon
lockd          4045/udp           # NFS lock daemon/manager
lockd          4045/tcp           # NFS lock daemon/manager


 * 9.7.3. Running NFS Behind a Firewall - https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/s2-nfs-nfs-firewall-config.html
 * The FreeBSD Forums • View topic - which ports do i open for nfs? - https://forums.freebsd.org/viewtopic.php?&t=5123
 * ubuntu - Which ports do I need to open in the firewall to use NFS? - Server Fault - http://serverfault.com/questions/377170/which-ports-do-i-need-to-open-in-the-firewall-to-use-nfs

-

You need to open the following ports:

a] TCP/UDP 111 - RPC 4.0 portmapper

b] TCP/UDP 2049 - NFSD (nfs server)

c] Portmap static ports - Various TCP/UDP ports defined in /etc/sysconfig/nfs file.

NFSv3 iptables rules:

portmap:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT

nfsd (both NFSv3 and NFSv4):
-A RH-Firewall-1-INPUT -m tcp -p tcp --dport 2049 -j ACCEPT

mountd, on the port set by MOUNTD_PORT (e.g. 892) in /etc/sysconfig/nfs:
-A RH-Firewall-1-INPUT -m tcp -p tcp --dport [MOUNTD_PORT] -j ACCEPT
-A RH-Firewall-1-INPUT -m udp -p udp --dport [MOUNTD_PORT] -j ACCEPT

MOUNTD_PORT:
To pin mountd, uncomment the setting in /etc/sysconfig/nfs and restart nfs:
#MOUNTD_PORT=892
-A RH-Firewall-1-INPUT -m tcp -p tcp --dport 892 -j ACCEPT
-A RH-Firewall-1-INPUT -m udp -p udp --dport 892 -j ACCEPT

Resource: Linux Iptables Allow NFS Clients to Access the NFS Server

NFSv4
Linux Home Server HOWTO - Network File System - Are You NFS4 Ready

"Version 4 of the NFS brings a new range of advancements to the networking protocol, with access control lists, sophisticated security mechanisms, and better interoperability with firewall and NAT applications to name a few. To the average user the main difference will be in the configuration and its implementation."

Preparing The Server
Find account used for NFS:
$ grep nfs /etc/passwd
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin

The ID to name mapping daemon is a new enhancement in the NFSv4 protocol; it passes usernames between the client and server after mapping them from (and back to) UID and GID.

$ vi /etc/idmapd.conf
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = example.com

[Mapping]
Nobody-User = nfsnobody
Nobody-Group = nfsnobody

[Translation]
Method = nsswitch

Setting Up The Server
For a version 4 server, all of the exports are handled through one export point (the pseudofilesystem), with all other exports grouped underneath the master export. All of the exports must be put into the one master directory, even if the original directories are located elsewhere in the filesystem.

$ mkdir /NFS4exports
$ mkdir /NFS4exports/ftp
$ mkdir /NFS4exports/home
$ mkdir /NFS4exports/filestore

Bind external folders in /etc/fstab:
/var/ftp        /NFS4exports/ftp         none     bind    0 0
/home           /NFS4exports/home        none     bind    0 0
/filestore      /NFS4exports/filestore   none     bind    0 0

Mount bind folders: mount -a -t none

Edit exports:
$ vi /etc/exports
/NFS4exports               192.168.1.0/24(rw,insecure,sync,wdelay,no_subtree_check,no_root_squash,fsid=0)
/NFS4exports/ftp           192.168.1.0/24(ro,insecure,sync,wdelay,no_subtree_check,nohide,all_squash,anonuid=65534,anongid=65534)
/NFS4exports/filestore     192.168.1.0/24(rw,insecure,sync,wdelay,no_subtree_check,nohide,no_root_squash)
/NFS4exports/home          192.168.1.0/24(rw,insecure,sync,wdelay,no_subtree_check,nohide,no_root_squash)

Minimal export:
/NFS4exports               192.168.1.0/24(ro,fsid=0)

Start services:
chkconfig nfs on
chkconfig portmap on
service nfs restart
service portmap restart

Show exports:
exportfs -v
showmount -e localhost

Setup Client
Show exports: showmount -e [server]

Edit idmapd.conf:
$ vi /etc/idmapd.conf
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = example.com

[Mapping]
Nobody-User = nfsnobody
Nobody-Group = nfsnobody

[Translation]
Method = nsswitch

Start idmap service: service rpcidmapd restart

Note: the mount will work without the idmapd service, but you will get an ID mapping warning and user ID mapping issues. This is not a problem if you are only using read-only mode.

Create mount dir: mkdir /mnt/nfs4

Create mounts in /etc/fstab:
$ vi /etc/fstab
[server]:/ /mnt/nfs4 nfs4 auto,rw,nodev,sync,_netdev,proto=tcp,retry=10,rsize=32768,wsize=32768,hard,intr 0 0

Manually mount:
mount -t nfs4 [server]:/ /mnt/nfs4 \
  -o async,auto,exec,_netdev,nodev,rw,retry=5,rsize=32768,wsize=32768,proto=tcp,hard,intr

Minimal mount: mount -t nfs4 [server]:/ /mnt/nfs4 -o async

Show mounts:
$ mount -l
[server]:/ on /mnt/nfs4 type nfs4 (rw,addr=[server])

Firewall
NFSv4 works great with iptables; only port 2049 needs to be opened:

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 2049 -j ACCEPT

Determine Source of Deletes
tcpdump dst port 2049 | egrep "remove|rmdir"

Maybe add -vv

Tutorials
Linux NFS-HOWTO

Performing an NFS Linux Installation - The How's and Why's

Cool Solutions: Setting up a Linux NFS Install Source for Your LAN

Overview
An entry in /etc/exports will typically look like this:

directory machine1(option11,option12) machine2(option21,option22)

where

directory
    the directory that you want to share. It may be an entire volume, though it need not be. If you share a directory, then all directories under it within the same file system will be shared as well.

machine1 and machine2
    client machines that will have access to the directory. The machines may be listed by their DNS name or their IP address (e.g., machine.company.com or 192.168.0.8). Using IP addresses is more reliable and more secure. If you need to use DNS names, and they do not seem to be resolving to the right machine, see Section 7.3.

optionxx
    the option listing for each machine will describe what kind of access that machine will have. Important options are:

ro: The directory is shared read-only; the client machine will not be able to write to it. This is the default.

rw: The client machine will have read and write access to the directory.

no_root_squash: By default, any file request made by user root on the client machine is treated as if it were made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user "nobody" on the server, not the client.) If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications, although it may be necessary if you want to perform administrative work on the client machine that involves the exported directories. You should not specify this option without a good reason.

no_subtree_check: If only part of a volume is exported, a routine called subtree checking verifies that a file requested by the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.

sync: By default, all but the most recent version (1.11) of the exportfs command will use async behavior, telling a client machine that a file write is complete (that is, written to stable storage) when NFS has finished handing the write over to the filesystem. This behavior may cause data corruption if the server reboots; the sync option prevents this. See Section 5.9 for a complete discussion of sync and async behavior.
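To make the sync/async distinction concrete, here is a hedged /etc/exports pair with the trade-off spelled out; the paths and subnet are illustrative, not from the original text:

```
/data     192.168.0.0/24(rw,sync)    # reply only after data reaches disk: safe
/scratch  192.168.0.0/24(rw,async)   # reply before commit: faster, data loss possible on server crash
```

A reasonable rule of thumb: use sync everywhere, and reserve async for data you can afford to regenerate.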

Suppose we have two client machines, slave1 and slave2, that have IP addresses 192.168.0.1 and 192.168.0.2, respectively. We wish to share our software binaries and home directories with these machines. A typical setup for /etc/exports might look like this:

/usr/local  192.168.0.1(ro) 192.168.0.2(ro)
/home       192.168.0.1(rw) 192.168.0.2(rw)

Here we are sharing /usr/local read-only to slave1 and slave2, because it probably contains our software and there may not be benefits to allowing slave1 and slave2 to write to it that outweigh security concerns. On the other hand, home directories need to be exported read-write if users are to save work on them.

If you have a large installation, you may find that you have a bunch of computers all on the same local network that require access to your server. There are a few ways of simplifying references to large numbers of machines. First, you can give access to a range of machines at once by specifying a network and a netmask. For example, if you wanted to allow access to all the machines with IP addresses between 192.168.0.0 and 192.168.0.255 then you could have the entries:

/usr/local 192.168.0.0/255.255.255.0(ro)
/home      192.168.0.0/255.255.255.0(rw)

Source: Linux Online - Setting Up an NFS Server - http://www.linux.org/docs/ldp/howto/NFS-HOWTO/server.html

Example 1
/etc/exports:
/data/export/ 192.168.0.0/255.255.255.0(rw,no_root_squash,no_all_squash,sync)

This means that /data/export will be accessible by all systems from the 192.168.0.x subnet. You can limit access to a single system by using 192.168.0.100/255.255.255.255 instead of 192.168.0.0/255.255.255.0, for example.
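The netmask matching that exports performs can be sketched in plain shell arithmetic; this is my own illustration of the address math, not code from the source article:

```shell
#!/bin/sh
# Convert a dotted-quad IP to an integer, then compare client and
# network under the mask -- the same test an exports netmask expresses.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

in_subnet() {  # usage: in_subnet CLIENT_IP NETWORK PREFIX_LEN
  client=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( client & mask )) -eq $(( net & mask )) ]
}

in_subnet 192.168.0.100 192.168.0.0 24 && echo "192.168.0.100 matches the /24 export"
in_subnet 192.168.0.100 192.168.0.100 32 && echo "a /32 (255.255.255.255) limits it to one host"
```

This also shows why 192.168.0.100/255.255.255.255 admits exactly one client: with a full mask, only the identical address passes the comparison.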

To learn more about this, see: man 5 exports

Source: Setting Up A Highly Available NFS Server - Page 2 | HowtoForge - Linux Howtos and Tutorials

Example 2
To export the /cdrom directory to three example machines that have the same domain name as the server (hence the lack of a domain name for each) or have entries in your /etc/hosts file. The -ro flag makes the exported file system read-only. With this flag, the remote system will not be able to write any changes to the exported file system.
/cdrom -ro host1 host2 host3

The following line exports /home to three hosts by IP address. This is a useful setup if you have a private network without a DNS server configured. Optionally the /etc/hosts file could be configured for internal hostnames; please review hosts(5) for more information. The -alldirs flag allows the subdirectories to be mount points. In other words, it will not mount the subdirectories but permit the client to mount only the directories that are required or needed.
/home -alldirs  10.0.0.2 10.0.0.3 10.0.0.4

The following line exports /a so that two clients from different domains may access the file system. The -maproot=root flag allows the root user on the remote system to write data on the exported file system as root. If the -maproot=root flag is not specified, then even if a user has root access on the remote system, he will not be able to modify files on the exported file system.
/a -maproot=root  host.example.com box.example.org

In /etc/exports, each line represents the export information for one file system to one host. A remote host can only be specified once per file system, and may only have one default entry. For example, assume that /usr is a single file system. The following /etc/exports would be invalid when /usr is one file system:
/usr/src   client
/usr/ports client

One file system, /usr, has two lines specifying exports to the same host, client. The correct format for this situation is:
/usr/src /usr/ports  client

The properties of one file system exported to a given host must all occur on one line. Lines without a client specified are treated as a single host. This limits how you can export file systems, but for most people this is not an issue.

The following is an example of a valid export list, where /usr and /exports are local file systems:
/usr/src /usr/ports -maproot=root    client01
/usr/src /usr/ports                  client02
/exports -alldirs -maproot=root      client01 client02
/exports/obj -ro

Export src and ports to client01 and client02, but only client01 has root privileges on them. The client machines have root and can mount anywhere on /exports; anyone in the world can mount /exports/obj read-only.

The mountd daemon must be forced to recheck the /etc/exports file whenever it has been modified, so the changes can take effect. This can be accomplished either by sending a HUP signal to the running daemon:
kill -HUP `cat /var/run/mountd.pid`

or by invoking the mountd rc(8) script with the appropriate parameter:
/etc/rc.d/mountd onereload

Alternatively, a reboot will make FreeBSD set everything up properly. A reboot is not necessary though. Executing the following commands as root should start everything up.

On the NFS server:
rpcbind
nfsd -u -t -n 4
mountd -r

On the NFS client:
nfsiod -n 4

Now everything should be ready to actually mount a remote file system. In these examples the server's name will be server and the client's name will be client. If you only want to temporarily mount a remote file system or would rather test the configuration, just execute a command like this as root on the client:
mount server:/home /mnt

This will mount the /home directory on the server at /mnt on the client. If everything is set up correctly you should be able to enter /mnt on the client and see all the files that are on the server.

If you want to automatically mount a remote file system each time the computer boots, add the file system to the /etc/fstab file. Here is an example:

server:/home  /mnt    nfs rw  0   0

Source: Network File System (NFS)

Example 3
/            192.168.0.0/24(rw)
/mnt/cdrom   192.168.0.0/24(ro)
/mnt/floppy  192.168.0.0/24(rw)
/backup      192.168.0.0/24(rw)
/mp3s        192.168.0.0/24(rw)

Please note that the exports file is very sensitive to syntax and formatting. If there is a space between the allowed host (or IP) and its options, like this:

/nfs/dir somehost.box.com (rw)

the options are treated as a separate entry applying to the whole world, and the named host falls back to the defaults, so you will not be able to access the resource in read-write mode.
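A quick grep catches that mistake before it bites. This checker is my own sketch, not part of the original guide, and it runs against a demo file so it can be tried safely:

```shell
#!/bin/sh
# Flag exports lines where whitespace separates a client from its
# "(options)" -- that form applies the options to the whole world.
check_exports() {
  grep -nE '[^[:space:]][[:space:]]+\(' "$1"
}

# Demo file: line 1 has the bug, line 2 is correct.
printf '/nfs/dir somehost.box.com (rw)\n/ok 10.0.0.1(rw)\n' > /tmp/exports.demo
check_exports /tmp/exports.demo   # prints only the offending line
```

Point check_exports at your real /etc/exports after every edit; any output means a client has been accidentally separated from its option list.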

Source: Linux Help - NFS Setup Guide

Windows NFS Server
Download details: Windows Services for UNIX Version 3.5
 * "Windows Services for UNIX 3.5 provides a full range of supported and fully integrated cross-platform network services for enterprise customers to use in integrating Windows into their existing UNIX-based environments."

HOW TO: Set Up Server for NFS

HOW TO: Share Windows Folders by Using Server for NFS

Cygwin NFS Server HOWTO

Windows NFS Client
How to install Client for NFS on Windows for a UNIX-to-Windows migration

Not Auto Mounting
Solution: chkconfig netfs on
 * NFS auto mounting is provided by the 'netfs' service

Unable to reshare NFS mount
Error: exportfs: /pub does not support NFS export

Solution: none; NFS does not support re-exporting an NFS-mounted file system.