When running over a high-bandwidth network, make sure not to set these values too far above their defaults: not only will sequential read performance fail to improve, but the increased memory used by the NFS async threads will ultimately degrade the overall performance of the system.

If nfs3_nra is set to four and two processes are reading two separate files concurrently over NFS Version 3, the system by default generates four read-aheads triggered by the read request of the first process and four more triggered by the read request of the second process, for a total of eight concurrent read-aheads. The maximum number of concurrent read-aheads for the entire system is limited by the number of NFS async threads available. To reduce the amount of read-ahead performed by the client, add the following lines to /etc/system on the NFS client and reboot the system:

set nfs:nfs_nra=2
set nfs:nfs3_nra=1

The kernel tunables nfs_max_threads and nfs3_max_threads control the maximum number of NFS async threads active at once per filesystem. By default, a Solaris client uses eight NFS async threads per NFS filesystem. To drop the number of NFS async threads to two, add the following lines to /etc/system on the NFS client and reboot the system:

set nfs:nfs_max_threads=2
set nfs:nfs3_max_threads=2
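Before rebooting, you may want to confirm the values currently in effect, or experiment with new ones, by reading them from the live kernel with mdb. This is a sketch rather than part of the procedure above: mdb availability and the nfs module symbol names depend on your Solaris release, and changes made this way do not persist across a reboot.

    # Read the current read-ahead and async-thread tunables from the
    # running kernel (root required; the nfs module must be loaded).
    echo 'nfs`nfs3_nra/D' | mdb -k
    echo 'nfs`nfs3_max_threads/D' | mdb -k

    # Temporarily change a value for testing; /W writes a 32-bit word.
    # Settings made this way are lost at reboot, unlike /etc/system.
    echo 'nfs`nfs3_max_threads/W 2' | mdb -kw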
After rebooting, you will have reduced the amount of NFS read-ahead and write-behind performed by the client. Note that simply decreasing the number of kernel threads may produce an effect similar to that of eliminating them completely, so be conservative.

The NFS async threads impose an implicit limit on the number of NFS requests requiring disk I/O that may be outstanding from any client at any time. Each NFS async thread has at most one NFS request outstanding at any time, so if you increase the number of NFS async threads, you allow each client to send more disk-bound requests at once, further loading the network and the servers. Be careful when server performance is a problem, since increasing the number of NFS async threads on client machines beyond their default usually makes the server performance problem worse.

Decreasing the number of NFS async threads doesn't always improve performance either, and usually reduces NFS filesystem throughput. You must have some small degree of NFS request multithreading on the NFS client to maintain the illusion of having filesystems on local disks. Reducing or eliminating the number of NFS async threads effectively throttles the filesystem throughput of the NFS client by diminishing or eliminating the amount of read-ahead and write-behind done. In some cases, you may want to reduce write-behind requests because the network interface of the NFS server cannot handle that many NFS write requests at once, such as when the NFS client and NFS server sit on opposite sides of a 56-Kbps connection. In these extreme cases, adequate performance can be achieved by reducing the number of NFS async threads.

Normally, an NFS async thread does write-behind caching to improve NFS performance, and running multiple NFS async threads allows a single process to have several write requests outstanding at once. If you are running eight NFS async threads on an NFS client, the client will generate eight NFS write requests at once when performing a sequential write to a large file. The eight requests are handled by the NFS async threads. In contrast to the biod mechanism, when a Solaris process issues a new write request while all the NFS async threads are blocked waiting for a reply from the server, the write request is queued in the kernel and the requesting process returns successfully without blocking. The requesting process does not issue an RPC to the NFS server itself; only the NFS async threads do. When an NFS async thread's RPC call completes, it grabs the next request from the queue and sends a new RPC to the server.

It may be necessary to reduce the number of outstanding NFS requests if a server cannot keep pace with incoming NFS write requests. Reducing the number of NFS async threads accomplishes this; the kernel RPC mechanism continues to work without the async threads, albeit less efficiently.
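As a rough check on the effect of these changes, the standard nfsstat tool reports client-side call counts and per-mount parameters; the exact fields shown vary with the Solaris release.

    # Per-mount parameters: read/write transfer sizes, timers, and
    # retransmission counts for each NFS filesystem on this client.
    nfsstat -m

    # Client-side RPC and NFS operation counts; compare the write and
    # read totals (and retransmissions) before and after the tuning.
    nfsstat -c

    # Zero the statistics so the next sample starts fresh (root only).
    nfsstat -z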