By Kyle Hailey

TCP Trace Analysis for NFS


 

How do we determine where latency comes from when there is a disparity between the I/O latency reported by the I/O subsystem and the latency reported on the client box requesting the I/O?

For example, if I have an Oracle database requesting I/O and Oracle says an 8 KB request takes 50 ms, yet the I/O storage subsystem says 8 KB I/Os are taking 1 ms on average, then where do the extra 49 ms come from?

When the I/O subsystem is connected to Oracle via NFS, there are many layers that could be introducing the extra latency.


Where does the difference in latency between the NFS server's measurements and Oracle's timing of pread come from?

One strategy is to take TCP traces on the NFS server and the NFS client (the Oracle host) at the same time, during a period of load that shows the disparity.

The two trace files can then be analyzed to show the latency seen on each side and the delta in latency for packets found in both traces.

Files Required

The parsing script is parsetcp.pl (thanks to Matt Amdur from Delphix for the core code).

The script requires access to the tethereal command, which comes with the wireshark install on Linux:

      yum install wireshark
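For reference, the kind of data the script extracts can be illustrated with tethereal directly. This is only a sketch, not parsetcp.pl itself; the display filter and field names assume the wireshark ONC-RPC and NFSv3 dissectors (NFSv3 procedure 6 is READ), and on newer wireshark installs the binary is called tshark and -R becomes -Y:

      # Sketch: list the RPC transaction id and the request-to-reply time in
      # seconds (rpc.time) for every NFSv3 READ reply in a capture file.
      tethereal -r nfs_server.cap \
          -R 'rpc.time && nfs.procedure_v3 == 6' \
          -T fields -e rpc.xid -e rpc.time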

Example situation

For example, in the following table, stats were collected on the NFS server (running Open Solaris) with DTrace and on the Oracle database using oramon.sh. The stats show the average latency for 8 KB read requests.
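A minimal sketch of the kind of DTrace measurement used on the NFS server side, assuming the Solaris nfsv3 provider and keying requests on the RPC xid (the actual script used may have differed), is:

      # Run as root on the Open Solaris NFS server: average NFS read latency
      # in ms, printed every 5 seconds.
      dtrace -n '
        nfsv3:::op-read-start { start[args[1]->noi_xid] = timestamp; }
        nfsv3:::op-read-done /start[args[1]->noi_xid]/ {
          @["avg read ms"] = avg((timestamp - start[args[1]->noi_xid]) / 1000000);
          start[args[1]->noi_xid] = 0;
        }
        tick-5sec { printa(@); trunc(@); }'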

Avg latency of 8Kb reads measured in ms

NFS Server    Oracle
   30.2       126.15
   24.4       103.91
   21.9       117.33
   23.7        96.46
   32          78.43
   25.5        91.94

There is a substantial difference between the latency seen by NFS server and the latency seen by Oracle. Where is the latency coming from? Is it a problem on the NFS server, the NFS client or the network? How do we find out? One strategy is to collect TCP dumps on the NFS client and NFS server and compare them.

Setting up the problem situation

First, set up a situation where there is a load running on the database and there is a discrepancy between the latency seen on the NFS server and the latency seen by Oracle. During this period of latency discrepancy, collect TCP traces. The traces will be large, so limit their size by running them for a short window, for example 20 seconds (see the sketch after the capture commands below).

Collecting TCP dumps

TCP dumps can be collected on Linux with tcpdump or  on Solaris with snoop:

NFS server (Solaris NFS server example)

snoop -q -d aggr0 -s 512 -o nfs_server.cap 172.16.100.102
  1. -d : capture device (network interface)

  2. -o : output file; give it a “.cap” extension to be recognized automatically by wireshark

  3. -s : number of bytes of each packet to keep (the snap length)

The IP address at the end of the command line is the NFS client host.

NFS client (Linux client example)

tcpdump -s 512 -w nfs_client.cap host 172.16.100.87
  1. -s : number of bytes of each packet to keep (the snap length)

  2. -w : output file; give it a “.cap” extension to be recognized automatically by wireshark

  3. host : IP of the NFS server
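To keep the trace files small, both captures can be limited to roughly the same 20 second window mentioned earlier. A sketch, using the same device name, IP addresses and file names as above:

      # On the Solaris NFS server: start snoop in the background, stop it after ~20s.
      snoop -q -d aggr0 -s 512 -o nfs_server.cap 172.16.100.102 &
      sleep 20
      kill $!

      # On the Linux NFS client: timeout(1) stops tcpdump after 20 seconds.
      timeout 20 tcpdump -s 512 -w nfs_client.cap host 172.16.100.87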

Analyzing the Traces

The trace files can then be analyzed on a Linux machine with wireshark installed by running the script parsetcp.pl against the two traces.

$ ./parsetcp.pl nfs_server.cap nfs_client.cap
 ==================== RAW DATA ============

Parsing NFS server trace: nfs_server.cap
type       avg ms count
   READ : 18.10,  12493  
  WRITE : 30.08,   4918   

Parsing NFS client trace: nfs_client.cap
type       avg ms count  
   READ : 18.74,  25974 
  WRITE : 30.05,  10197   

 ==================== MATCHED DATA  ============

READ
type       avg ms 
nfs server: 18.12
nfs_client: 19.26
     diff :  1.14 
Processed 14010 packets (Matched: 11964 Missed: 2046)

WRITE
type       avg ms  
 server : 30.86    
 client : 31.83
   diff :  0.96 
Processed 5806 packets (Matched: 4391 Missed: 1415)

The script finds the average latency for reads and writes on each end, then tries to match the packets found at both the client and the server and reports the average latency for reads and writes over the packets that match. In the output above, the read time on the NFS client's receiving end is on par with what is seen on the NFS server. If the receive time on the NFS client host is on par with the NFS server, yet the latency reported by Oracle is much greater than that receive time, then there is likely some issue on the NFS client host machine itself, such as heavy memory paging, that would cause the latency seen by Oracle to be much higher than the latency seen in the TCP traces.
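The matching idea itself can be sketched outside the script: dump (xid, request-to-reply time) pairs for READ replies from each capture, join them on the RPC transaction id, and average the difference. This is a simplified illustration of the approach, not the logic parsetcp.pl actually uses, and the filter and field names are the same assumptions as in the earlier tethereal sketch:

      # Dump (xid, rpc.time) for NFSv3 READ replies from each capture.
      for side in nfs_server nfs_client; do
        tethereal -r ${side}.cap -R 'rpc.time && nfs.procedure_v3 == 6' \
            -T fields -e rpc.xid -e rpc.time | sort -k1,1 > ${side}_reads.txt
      done

      # Join on xid: column 2 = server-side service time, column 3 = client-side
      # round trip. Average the difference in milliseconds.
      join nfs_server_reads.txt nfs_client_reads.txt | awk '
        { d += ($3 - $2); n++ }
        END { if (n) printf "matched %d reads, avg client-server diff %.2f ms\n", n, d * 1000 / n }'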

Example problem analysis

In the following two examples a swingbench load was run against the Oracle database on the NFS client. The database files were mounted via NFS from the NFS server. In one case the NFS client and database host was Open Solaris and in the other case it was Linux. In both cases the Oracle database reported much higher latency than the NFS server threads.

Test 1) Open Solaris is the NFS client

The NFS server is Open Solaris, and the NFS server latency is measured with DTrace using the nfs op-read-start/op-read-done probes.

NFS client is a different machine running Open Solaris. Latency is measured from the Oracle perspective with oramon.sh:

NFS Server (ms)    Oracle (ms)
    1.4             29.92
    1.8             32.62
    1.1             50.38
    1.1             56.38
    1.5             65.45
    0.9             46.88
    0.7             45.13
    1.1             44.93
    1               39.48
    0.9             55.8

Test 1 TCP trace analysis

What do the TCP traces say? Tracing TCP on the NFS server side and the NFS client side indicates that the slow latencies are coming from the NFS server.

 ==================== Individual HOST DATA ============
Parsing nfs server trace: nfs_server.cap
type       avg ms count   
   READ : 44.60,   7731    

Parsing client trace: client.cap
type       avg ms count    
   READ : 46.54,  15282    

 ==================== MATCHED DATA  ============
READ
type       avg ms   
 server : 48.39,   
 client : 49.42,   
   diff :  1.03,    
Processed 9647 packets (Matched: 5624 Missed: 4023)

Test 1 summary:

The NFS server says it is responding with 1 ms latency (from DTrace), but the TCP traces show latency more on the order of 48 ms, which is in alignment with what Oracle sees on the NFS client. Thus the NFS server must be introducing the large latency somehow, and not the client.

Test 2) Linux NFS client

The NFS server is Open Solaris, and the NFS server latency is measured with DTrace using the nfs op-read-start/op-read-done probes.

NFS client is a different machine running Linux. Latency is measured from the Oracle perspective with oramon.sh:

NFS Server (ms)    Oracle (ms)
    0.1             35.73
    0.1             58.9
    0.1             44.88
    0.1             54.43
    0.1             54.57
    0.1             63.92

Again we see the same pattern as in the first test case, i.e. the NFS server says its latency is low while the Oracle database on the NFS client says the latency is high.

Test 2 TCP trace analysis

==================== Individual HOST DATA ============
Parsing NFS server trace: nfs_server.cap
type       avg ms count   
   READ :  1.17,   9042            

Parsing client trace: client.cap
type       avg ms count    
   READ :  1.49,  21984   

==================== MATCHED DATA  ============
READ
type       avg ms count    
 server :  1.03  
 client :  1.49
   diff :  0.46

Processed 13815 packets (Matched: 8169 Missed: 5646)

Test 2 summary:

In this case the NFS server TCP traces show fast latency, as do the NFS client TCP traces, so the long latency times must be introduced somewhere on the client machine and not on the NFS server.

Summary of test 1 and test 2

The tests were run with the swingbench benchmark. The Oracle databases are set with the parameter “filesystemio_options=SETALL”, which enables direct I/O, so that all I/O requests from Oracle should become NFS requests to the NFS server. Roughly, the latency stats for 8k reads originating from Oracle are:

                        Oracle on Linux   Oracle on Solaris   Latency data source
NFS Server              0.1 ms            2 ms                dtrace nfs:::op-read-start/op-read-done
TCP trace NFS Server    1 ms              44 ms               snoop
TCP trace NFS Client    1.5 ms            45 ms               tcpdump on Linux and snoop on Open Solaris
Oracle                  58 ms             47 ms               “db file sequential read” wait (basically a timing of pread for 8k random reads)


In the above examples there was nothing outstanding in the vmstat results on either the NFS client or the NFS server.

In both cases the problem arose with a high number of swingbench sessions. The latency disparity appeared above a user load of 16 and increased as the user load increased.

It turned out that in the case where the NFS server was introducing the disparity, the maximum number of NFS server threads was 16. The solution was to increase the NFS server thread maximum: sharectl set -p servers=512 nfs.
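On the Solaris NFS server the setting can be checked and changed with sharectl; a sketch (the servers property governs the maximum number of NFS server threads):

      # Show the current NFS server settings, then raise the thread maximum.
      sharectl get -p servers nfs
      sharectl set -p servers=512 nfs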

In the case where the Linux client introduced the disparity, it turned out that the maximum number of outstanding NFS requests was 16. The solution was to raise sunrpc.tcp_slot_table_entries to 128.
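On the Linux client, checking and raising the limit can be sketched as follows (on older kernels the slot table is a fixed size set by this parameter):

      # Show the current limit on outstanding RPC requests, then raise it.
      sysctl sunrpc.tcp_slot_table_entries
      sysctl -w sunrpc.tcp_slot_table_entries=128

      # To make the change persist and take effect before the NFS mounts are
      # made, it can be set as a module option, e.g. in /etc/modprobe.d/sunrpc.conf:
      #   options sunrpc tcp_slot_table_entries=128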


The parsetcp.pl script didn't find the answers, but it did identify which side of the connection was at fault, and it would also identify whether the network itself was the problem.

One thing to keep in mind is that snoop and tcpdump can be resource intensive and may themselves introduce a significant amount of latency. In the above cases, runs with and without TCP tracing enabled showed no more than 1 ms of difference, and the latency being investigated was an order of magnitude greater. In other cases, where there has been more I/O between the NFS server and client and/or the transfer sizes were larger (such as 1 MB), the impact of TCP tracing has eclipsed the latency disparity being investigated, and TCP tracing would not have been an appropriate investigation path.

 

Other information

The drops column reported by snoop -D is a quick way to check whether the capture itself is dropping packets:

$ sudo snoop -D
Using device aggr0 (promiscuous mode)
kyle -> 172.16.100.92 drops: 0 TCP D=50461 S=22 Push Ack=2424998673 Seq=3990657488 Len=84 Win=49640
172.16.100.92 -> kyle         drops: 0 TCP D=22 S=50461 Ack=3990657572 Seq=2424998673 Len=0 Win=254
172.16.100.147 -> (broadcast)  drops: 0 ARP C Who is 172.16.100.115, 172.16.100.115 ?
172.16.100.68 -> (broadcast)  drops: 0 ARP C Who is 172.16.100.115, 172.16.100.115 ?
172.16.100.68 -> (broadcast)  drops: 0 ARP C Who is 172.16.100.184, 172.16.100.184 ?
172.16.100.199 -> (broadcast)  drops: 0 ARP C Who is 172.16.100.200, 172.16.100.200 ?