1. The Data ONTAP 8.1 cluster-mode operating system introduces support for the NFSv4 protocol specification as well as elements of NFSv4.1.
2. Cluster-mode continues to fully support NFSv2 and NFSv3, although NFSv2 should not be used with cluster-mode.
3. NFSv4 support brings the Data ONTAP 8.1 cluster-mode operating system into parity with the Data ONTAP 7.3 operating system.
4. The key feature of NFSv4 is referrals. NFSv4.1 is a minor revision of version 4.0 and is an extension of version 4, not a modification, so it is fully compliant with the NFSv4 specification. NFSv4.1:
-extends delegations beyond files to directories and symlinks
-introduces NFS sessions for enhanced efficiency and reliability
-provides parallel NFS (pNFS)
Remote file access:
-Remote file access occurs when a client connected to a logical interface (LIF) on a physical port of one controller accesses a file that is hosted on a different controller in the same cluster.
-Remote file access has traditionally been a performance concern for clients, a concern that the Data ONTAP cluster-mode operating system addresses.
When a client is mounted to a data LIF hosted on node1 and issues a file operation whose destination is a volume on node4, the request is serviced by the node1 protocol stack. That protocol stack looks up the location of the volume and directs the operation to node4, which hosts the volume. The request traverses the cluster network, and the result is returned to the client along the same path.
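To see whether an access is remote, you can compare the node that hosts the volume with the node that currently hosts the data LIF. A possible check with the clustered Data ONTAP CLI (the Vserver, volume, and LIF names here are illustrative):
cluster1::> volume show -vserver vs1 -volume vol4 -fields node
cluster1::> network interface show -vserver vs1 -lif datalif1 -fields curr-node
If the two nodes differ, file operations through that LIF traverse the cluster network.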
With pNFS, when a file is opened by an NFS client that mounted a data LIF on node1, that LIF serves as the metadata path, because it is the path used to discover the target volume's location. If the data is hosted by node1, the operation is handled locally. In this case, the local node discovers that the data is on node4; based on the pNFS protocol, the client is redirected to a LIF hosted on node4, and that request, as well as subsequent requests to the volume, is serviced locally, bypassing the cluster network.
When a volume is moved to an aggregate on a different node, the pNFS client data path is redirected to a data LIF hosted on the destination node.
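For example, a nondisruptive volume move that would trigger such a redirect can be started with the following command (Vserver, volume, and aggregate names are illustrative):
cluster1::> volume move start -vserver vs1 -volume vol4 -destination-aggregate aggr2
After the cutover, pNFS clients are redirected to a data LIF on the node that owns aggr2, without a remount.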
To enable pNFS:
cluster1::> vserver nfs modify -vserver vs1 -v4.1 enabled -v4.1-pnfs enabled
Note: Clients must support pNFS; it is supported with RHEL 6.2 and Fedora 14.
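On a supported client, NFSv4.1 (and therefore pNFS) is requested at mount time. A possible mount command on RHEL 6.2 (the server name and export path are illustrative; the exact mount option spelling varies by distribution and kernel version):
# mount -t nfs -o vers=4,minorversion=1 cluster1-data:/vol4 /mnt/vol4
With minorversion=1, the client negotiates NFSv4.1 and can then use the pNFS layout to reach the data path directly.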
NFSv4 referrals and pNFS do not work together. By keeping data access local, pNFS reduces the amount of traffic that traverses the cluster network. Unlike NFS referrals, pNFS is seamless to the client: it does not require a file-system remount to ensure an optimized path, because the network redirect does not happen at mount time, and the file handle is not left stale when a volume is moved to an aggregate on a different node.
SMB 2.0 and SMB 2.1
1. In addition to the SMB 1.0 protocol, the Data ONTAP cluster-mode operating system now supports SMB 2.0 and SMB 2.1.
2. SMB 2.0 was a major revision of the SMB 1.0 protocol, including a complete reworking of the packet format.
3. SMB 2.0 also introduces several performance improvements relative to previous versions:
-efficient network utilization, with request compounding stacking multiple SMB requests into a single network packet
-larger read and write sizes to exploit faster networks
-file and directory property caching
-durable file handles, allowing an SMB connection to transparently reconnect to the server if a temporary disconnection occurs, such as over a wireless connection
-improved message signing, with improved configuration and interoperability and with HMAC-SHA256 replacing MD5 as the hashing algorithm
4. SMB 2.1 provides important performance enhancements. These enhancements include the following:
-client opportunistic lock (oplock) leasing model
-large maximum transmission unit (MTU) support
-improved energy efficiency for client computers
-support for previous versions of SMB
5. The SMB 2.1 protocol provides several minor enhancements to the SMB 2.0 specification.
6. Data ONTAP 8.1 cluster-mode supports most, but not all, of the SMB 2.1 features.
7. The following SMB 2.1 features are not supported:
-large MTU
-resilient handles
-branch cache
8. Support for SMB 2.1 is automatically enabled when you enable the SMB 2.0 protocol on a virtual server (Vserver).
Use the following command to enable SMB 2.0 for the Vserver:
cluster1::> vserver cifs options modify -vserver vs1 -smb2-enabled true
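To confirm the setting afterward, you can display the Vserver's CIFS options (the field name here is assumed to match the option set above):
cluster1::> vserver cifs options show -vserver vs1 -fields smb2-enabled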
Features of SMB 2.1 leases:
-File and metadata caching
-Reduced bandwidth consumption
-Retention of cached data after a file is closed
-Full caching with multiple handles, as long as those handles are opened on the same client