
NFS versus dNFS

Finally got to take Kevin Closson's SLOB for a spin. (BTW, can one test dNFS with Orion at all?) Nice write-up on SLOB by Yury Velikanov at Pythian: http://www.pythian.com/news/33299/my-slob-io-testing-index/

Also see Karl Arao’s SLOB cheat sheet.

NEW: see SLOB on steroids at http://www.pythian.com/news/34605/slob-on-steroids-part-1/. I ran into the same issues Pythian came up with (a sketch of applying the fixes follows the list):

  1. Turn off the default AWR snapshots:

     dbms_workload_repository.MODIFY_SNAPSHOT_SETTINGS(51120000, 51120000, 100, null)

  2. Reduce db_cache_size (setting cpu_count didn't seem to work; the only way I got db_cache_size down was by allocating a lot to the shared pool):

     *.sga_max_size=554M *.sga_target=554M *.shared_pool_size=450M *.db_cache_size=40M *.cpu_count=1 *.large_pool_size=50M

  3. Avoid db file parallel reads:

     *._db_block_prefetch_limit=0 *._db_block_prefetch_quota=0 *._db_file_noncontig_mblock_read_count=0
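Put together, here is a minimal sketch of applying all three fixes, assuming a traditional pfile at $ORACLE_HOME/dbs/init$ORACLE_SID.ora (with an spfile you would use ALTER SYSTEM ... SCOPE=SPFILE instead):

# append the SGA and prefetch settings to the pfile (assumed location)
cat >> $ORACLE_HOME/dbs/init$ORACLE_SID.ora << EOF
*.sga_max_size=554M
*.sga_target=554M
*.shared_pool_size=450M
*.db_cache_size=40M
*.cpu_count=1
*.large_pool_size=50M
*._db_block_prefetch_limit=0
*._db_block_prefetch_quota=0
*._db_file_noncontig_mblock_read_count=0
EOF
sqlplus / as sysdba << EOF
startup force
-- push the AWR snapshot interval and retention out to ~97 years, i.e. off
exec dbms_workload_repository.modify_snapshot_settings(51120000, 51120000, 100, null);
exit
EOF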

The goal was to test NFS versus dNFS.

I didn't expect dNFS to have much of an impact in the basic case. I think of dNFS as something that adds flexibility, like multipathing and failover, and takes care of all the mount issues. In some cases, such as on Linux where the maximum number of outstanding RPCs is 128, Oracle dNFS as I understand it can go beyond those limits. I'm also guessing that in the case of RAC, dNFS avoids the overhead of the getattr calls that come with the noac mount option, which is required without dNFS. (On a single instance, noac can be taken off the mount options.)
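For context, the mount options involved look something like this illustrative /etc/fstab entry for Oracle datafiles over kernel NFS (the server name and paths are made up, not my test setup):

# noac disables attribute caching, which is what generates the extra getattr calls
nfsserver:/export/oradata  /oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,noac  0 0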

Setup

DB host: Linux 2.6.18-164.el5

with the following in /etc/sysctl.conf (applied with sysctl -p):
sunrpc.tcp_slot_table_entries = 128
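A quick way to verify the setting took effect (just a sanity check, not part of the original setup):

# confirm the RPC slot table size after sysctl -p
sysctl sunrpc.tcp_slot_table_entries
cat /proc/sys/sunrpc/tcp_slot_table_entries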

Oracle 11.2.0.2.0

Ran SLOB straight through with the default reader.sql, which does 5000 loops. This took over 12 hours to complete 18 tests, 9 on NFS and 9 on dNFS, at varying user loads.

The stats showed dNFS impact to be variable depending on the load.

I then changed reader.sql to do 200 loops in order to run more tests faster and get a handle on the standard deviation between tests.
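The edit itself is a one-liner, something like the following, assuming the loop bound appears as the literal 5000 in reader.sql (check the file first; the exact form varies between SLOB versions):

# back up reader.sql and cut the loop count from 5000 to 200
sed -i.bak 's/5000/200/' reader.sql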

With that change, I then ran the tests alternately between NFS and dNFS, i.e. I ran 1, 2, 3, 4, 8, 16, 32, and 64 user loads on NFS, then switched to dNFS and ran the same tests, and did this 4 times.

The stats were a bit different from the first run, so I then ran the test 8 times (i.e. a full user ramp-up on dNFS then NFS, repeated 8 times).

Here are the stats.

Looks like there is a pretty consistent performance degradation on dNFS at 2 users and a good performance improvement at 8 users. At 16 users the test is just under line speed (1 GbE, so around 105 MB/s). By 32 users the test is hitting line saturation, so I grayed out those lines.
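As a rough cross-check on the saturation point, assuming 8 KB blocks: 105 MB/s of line speed divided by 8 KB per read is about 13,000 single-block reads/s, and 16 users at ~1.2 ms per read issue 16/0.0012 ≈ 13,300 reads/s, right at that limit.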

The NFS server caches a lot, thus the fast I/O times.

% impact = 100 * (NFS time - dNFS time) / NFS time, i.e. % better or worse of dNFS versus NFS

           5000 loops          200 loops x4              200 loops x8
users    avg ms  % impact   avg ms  % impact   Stdev   avg ms  % impact   Stdev
    1     0.44      0.62     0.43     -3.68    10.76    0.45     -7.90     9.24
    2     0.43    -16.62     0.49    -19.95    15.21    0.49    -20.97     3.67
    3     0.57      6.61     0.57    -23.40    13.49    0.56     -8.62     5.63
    4     0.64      5.98     0.58     -0.80     4.97    0.60     -0.86     5.12
    8     0.76     17.17     0.69     14.04    11.33    0.68     11.80     2.25
   16     1.29     11.86     1.20      8.56    15.70    1.20      8.34     2.23
   32     2.23     -2.85     2.31      1.83    16.41    2.33      2.04     2.31
   64     4.23     -8.38     4.62      1.75    12.03    4.67      2.43     0.76
  128     8.18      0.73     1.86
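As a worked example of the formula, take the 2-user numbers from run 1 (visible in the grep output further down): NFS averaged 0.44 ms and dNFS 0.59 ms, so % impact = 100 * (0.44 - 0.59) / 0.44 ≈ -34, i.e. dNFS was about a third slower in that particular run.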

At this point there is a clear degradation at 2 users and an improvement at 8, and the line maxes out just above 16, so I decided to run the tests with 1-16 users, 8 times each. Here is the data:


Again, there is apparent degradation around 2 and 3 users, and the performance improvement peaks around 10 users, with a maximum time saved of about 130 µs (0.13 ms).

Here is the script used to run the tests. NOTE: you have to be in the directory with "runit.sh" from SLOB, and you have to have backed up libodm11.so as libodm11.so.orig, as shown below.
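That one-time backup is just the following (the NFS function in the script restores this copy):

# keep a pristine copy of the stock ODM library before the first run
cp $ORACLE_HOME/lib/libodm11.so $ORACLE_HOME/lib/libodm11.so.orig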

slob.sh:

#!/bin/bash
# see: http://www.pythian.com/news/33299/my-slob-io-testing-index/

# if EVAL=0, then script only outputs commands without running them
EVAL=1
USERS="1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16"
REPEAT="1 2 3 4 5 6 7 8"
TYPES="DNFS NFS "
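# DNFS: bounce the instance and link in Oracle's NFS ODM library (enables dNFS)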
function DNFS
{
   sqlplus / as sysdba << EOF
   shutdown abort
EOF
   cp  $ORACLE_HOME/lib/libnfsodm11.so $ORACLE_HOME/lib/libodm11.so
   sqlplus / as sysdba << EOF
   startup
EOF
}
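# NFS: bounce the instance and restore the stock ODM library (kernel NFS only)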
function NFS
{
  sqlplus / as sysdba << EOF
  shutdown abort
EOF
  cp  $ORACLE_HOME/lib/libodm11.so.orig $ORACLE_HOME/lib/libodm11.so
  sqlplus / as sysdba << EOF
  startup
EOF
}
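# param: (re)generate param.sql, which logs the non-default parameters and the
# v$dnfs_* views so each run records whether dNFS was actually in use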
function param {
for i in 1; do
cat << EOF
select name,value from v\$parameter where ISDEFAULT='FALSE';
select count(*) from v\$dnfs_servers;
select count(*) from v\$dnfs_files;
select count(*) from v\$dnfs_channels;
show sga
exit
EOF
done > param.sql
}
function runcmd {
    echo $cmd
    if   [ $EVAL  -eq 1 ] ; then
       eval  $cmd
    fi
}
param
for type in $TYPES; do
  [[ ! -d $type ]] && mkdir $type
done
for runnumber in $REPEAT;do
  echo "runumber $runnumber"
  for type in $TYPES; do
    if   [ $EVAL  -eq 1 ] ; then
       eval  $type
    fi
    echo "type  $type, run number $runnumber"
    for users in $USERS; do
      echo "Users $users"
      for output in 1; do
        cmd="sqlplus / as sysdba @param"
        runcmd $cmd
        cmd="./runit.sh 0 $users"
        runcmd $cmd
        cmd="mv awr.txt ${type}/awr_${users}_${runnumber}.txt"
        runcmd $cmd
      done > ${type}/${type}_${users}_${runnumber}.out 2>&1
    done
  done
done
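Since a full pass takes hours, I'd kick it off in the background, something like:

# run the whole NFS/dNFS sweep unattended, keeping a driver log
# (run from the SLOB directory containing runit.sh)
nohup ./slob.sh > slob_driver.out 2>&1 &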

To extract the data after the test, you can grep the sequential read performance out of the AWR files that end in "txt":

$ grep 'db file sequential read' *NFS/*txt | grep User

DNFS/awr_1_1.txt:db file sequential read              32,032          14      0   88.1 User I/O
NFS/awr_1_1.txt:db file sequential read               34,633          16      0   88.8 User I/O
DNFS/awr_2_1.txt:db file sequential read              88,826          52      1   95.4 User I/O
NFS/awr_2_1.txt:db file sequential read               92,367          41      0   95.1 User I/O
DNFS/awr_3_1.txt:db file sequential read             141,068          92      1   96.0 User I/O
NFS/awr_3_1.txt:db file sequential read              145,057          81      1   96.2 User I/O
DNFS/awr_4_1.txt:db file sequential read             192,897         111      1   95.2 User I/O
NFS/awr_4_1.txt:db file sequential read              197,535         118      1   96.0 User I/O
DNFS/awr_5_1.txt:db file sequential read             246,322         153      1   95.3 User I/O
NFS/awr_5_1.txt:db file sequential read              249,038         141      1   96.2 User I/O
DNFS/awr_6_1.txt:db file sequential read             297,759         181      1   95.4 User I/O
NFS/awr_6_1.txt:db file sequential read              301,305         199      1   96.5 User I/O
DNFS/awr_7_1.txt:db file sequential read             349,475         216      1   95.5 User I/O
NFS/awr_7_1.txt:db file sequential read              352,788         244      1   96.5 User I/O
DNFS/awr_8_1.txt:db file sequential read             402,262         266      1   95.8 User I/O
NFS/awr_8_1.txt:db file sequential read              405,099         282      1   96.5 User I/O
DNFS/awr_9_1.txt:db file sequential read             453,341         306      1   96.2 User I/O
NFS/awr_9_1.txt:db file sequential read              456,009         345      1   96.7 User I/O

Alternatively, calculate the average single-block read time in ms (wait time in seconds divided by waits, times 1000):

grep 'db file sequential read' *NFS/*txt | grep User | sort -n -k 5 | sed -e 's/,//g' | awk '{ printf("%20s %10.2f %10d %10d\n", $1, $6/$5*1000, $5, $6 ) }'

DNFS/awr_1_1.txt:db       0.44      32032         14
NFS/awr_1_1.txt:db        0.46      34633         16
DNFS/awr_2_1.txt:db       0.59      88826         52
NFS/awr_2_1.txt:db        0.44      92367         41
DNFS/awr_3_1.txt:db       0.65     141068         92
NFS/awr_3_1.txt:db        0.56     145057         81
DNFS/awr_4_1.txt:db       0.58     192897        111
NFS/awr_4_1.txt:db        0.60     197535        118
DNFS/awr_5_1.txt:db       0.62     246322        153
NFS/awr_5_1.txt:db        0.57     249038        141
DNFS/awr_6_1.txt:db       0.61     297759        181
NFS/awr_6_1.txt:db        0.66     301305        199
DNFS/awr_7_1.txt:db       0.62     349475        216
NFS/awr_7_1.txt:db        0.69     352788        244
DNFS/awr_8_1.txt:db       0.66     402262        266
NFS/awr_8_1.txt:db        0.70     405099        282
DNFS/awr_9_1.txt:db       0.67     453341        306
NFS/awr_9_1.txt:db        0.76     456009        345
DNFS/awr_10_1.txt:db      0.74     507722        376
NFS/awr_10_1.txt:db       0.85     508800        433

Or, to get the averages, max, min, and standard deviations per user load:

grep 'db file sequential read' *NFS/*txt | \
grep User | \
# lines look like
# NFS/awr_9_6.txt:db file sequential read   454,048   359    1 97.0 User I/O
# first number is count
# second number is time in seconds
# names are like "NFS/awr_9_6.txt:db file sequential read   "
# break 9_6 out as a number 9.6 and sort by those numbers
sed -e 's/_/ /' | \
sed -e 's/_/./' | \
sed -e 's/.txt/ /' | \
sort -n -k 2 | \
sed -e 's/,//g' | \
# lines now look like
# NFS/awr 9.6 db file sequential read   454048   359    1 97.0 User I/O
# $2 is userload.run_number
# $7 is count
# $8 is time in seconds
# print out test # and avg ms
awk '
       { printf("%10s %10.2f ", $2,  $8/$7*1000 ) }
/^NFS/ { printf("\n"); }
' | \
# print out user load
# print out % impact of dNFS
awk '{printf("%10s %10.2f %10.2f\n",int($1),($4+$2)/2,($4-$2)/$2*100) }
END{printf("%d\n",17) }
' | \
# for each test, each user load
# print avg ms, avg impact, std of impact, min and max impact
awk 'BEGIN{ nrow=0 }
{ if ( row != $1 ) {
    if ( nrow > 0 ) {
      # dump stats for the user load just finished:
      # avg ms, avg impact, std dev of impact, min and max impact
      mean = isum/nrow; sum2 = 0
      for(x=1;x<=nrow;x++){ sum2 += (imp[x]-mean)^2 }
      printf("%10s %10.2f %10.2f %10.2f %10.2f %10.2f\n", row, msum/nrow, mean, sqrt(sum2/nrow), min, max)
    }
    row = $1; nrow = 0; msum = 0; isum = 0; min = $3; max = $3
  }
  nrow++; msum += $2; isum += $3; val = $3; imp[nrow] = val
  if(val>max) {max=val};
  if(val< min) {min=val};
}'

         1       0.46       1.93      14.69     -14.00       35.00
         2       0.47     -13.24       8.14     -25.42        0.00
         3       0.61     -13.88       4.14     -21.13       -6.25
         4       0.61       0.10       5.40     -11.29        8.62
         5       0.60      -1.60       7.94      -9.68       11.67
         6       0.64      12.10       6.33      -1.54       19.30
         7       0.67      10.73       5.53       3.03       21.31
         8       0.70      12.46       4.42       6.06       19.70
         9       0.73      14.77       2.94      11.43       19.70
        10       0.78      17.08       2.91      13.89       23.61
        11       0.85      13.48       2.90       8.64       17.95
        12       0.92      10.34       2.76       6.90       16.28
        13       0.99      11.83       3.92       6.45       18.28
        14       1.04       7.23       4.42       0.00       13.13
        15       1.12       8.03       4.18      -0.93       13.21
        16       1.21       9.37       2.78       5.08       14.04
