EzDevInfo.com

NFS interview questions

Top NFS frequently asked interview questions

Vagrant NFS share doesn't show updated file if size doesn't change

When mounting /vagrant over NFS, a changed file on the host is not refreshed on the guest if its size doesn't change. Quick updates and typo fixes are not reflected until I make enough modifications for the size to be different.

I've tried setting lookupcache=none, but apart from making everything slower, nothing changed.

I'm using OS X Mountain Lion as the host and Arch Linux as the guest. NFS is v3 (because of OS X).
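
Not part of the original question, but for context: the NFS client's attribute cache is what decides when file metadata is re-checked, so one thing worth experimenting with (an assumption, not a confirmed fix) is a very short attribute-cache timeout on the guest-side mount, roughly along these lines:

# Hedged sketch: remount the share with a 1-second attribute cache timeout.
# The host IP and export path are placeholders, not taken from the original setup.
sudo umount /vagrant
sudo mount -t nfs -o vers=3,actimeo=1 192.168.33.1:/path/to/project /vagrant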


Source: (StackOverflow)

Troubleshooting latency spikes on ESXi NFS datastores

I'm experiencing fsync latencies of around five seconds on NFS datastores in ESXi, triggered by certain VMs. I suspect this might be caused by VMs using NCQ/TCQ, as this does not happen with virtual IDE drives.

This can be reproduced using fsync-tester (by Ted Ts'o) and ioping. For example, using a Grml live system with an 8 GB disk:

Linux 2.6.33-grml64:
root@dynip211 /mnt/sda # ./fsync-tester
fsync time: 5.0391
fsync time: 5.0438
fsync time: 5.0300
fsync time: 0.0231
fsync time: 0.0243
fsync time: 5.0382
fsync time: 5.0400
[... goes on like this ...]

That is 5 seconds, not milliseconds. This even creates I/O latencies on a different VM running on the same host and datastore:

root@grml /mnt/sda/ioping-0.5 # ./ioping -i 0.3 -p 20 .
4096 bytes from . (reiserfs /dev/sda): request=1 time=7.2 ms
4096 bytes from . (reiserfs /dev/sda): request=2 time=0.9 ms
4096 bytes from . (reiserfs /dev/sda): request=3 time=0.9 ms
4096 bytes from . (reiserfs /dev/sda): request=4 time=0.9 ms
4096 bytes from . (reiserfs /dev/sda): request=5 time=4809.0 ms
4096 bytes from . (reiserfs /dev/sda): request=6 time=1.0 ms
4096 bytes from . (reiserfs /dev/sda): request=7 time=1.2 ms
4096 bytes from . (reiserfs /dev/sda): request=8 time=1.1 ms
4096 bytes from . (reiserfs /dev/sda): request=9 time=1.3 ms
4096 bytes from . (reiserfs /dev/sda): request=10 time=1.2 ms
4096 bytes from . (reiserfs /dev/sda): request=11 time=1.0 ms
4096 bytes from . (reiserfs /dev/sda): request=12 time=4950.0 ms

When I move the first VM to local storage it looks perfectly normal:

root@dynip211 /mnt/sda # ./fsync-tester
fsync time: 0.0191
fsync time: 0.0201
fsync time: 0.0203
fsync time: 0.0206
fsync time: 0.0192
fsync time: 0.0231
fsync time: 0.0201
[... tried that for one hour: no spike ...]

Things I've tried that made no difference:

  • Tested several ESXi Builds: 381591, 348481, 260247
  • Tested on different hardware, different Intel and AMD boxes
  • Tested with different NFS servers, all show the same behavior:
    • OpenIndiana b147 (ZFS sync always or disabled: no difference)
    • OpenIndiana b148 (ZFS sync always or disabled: no difference)
    • Linux 2.6.32 (sync or async: no difference)
    • It makes no difference if the NFS server is on the same machine (as a virtual storage appliance) or on a different host

Guest OSes tested that show the problem:

  • Windows 7 64 Bit (using CrystalDiskMark, latency spikes happen mostly during preparing phase)
  • Linux 2.6.32 (fsync-tester + ioping)
  • Linux 2.6.38 (fsync-tester + ioping)

I could not reproduce this problem on Linux 2.6.18 VMs.

Another workaround is to use virtual IDE disks (instead of SCSI/SAS), but that limits performance and the number of drives per VM.

Update 2011-06-30:

The latency spikes seem to happen more often if the application writes in multiple small blocks before fsync. For example fsync-tester does this (strace output):

pwrite(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 1048576, 0) = 1048576
fsync(3)                                = 0

ioping does this while preparing the file:

[lots of pwrites]
pwrite(3, "********************************"..., 4096, 1036288) = 4096
pwrite(3, "********************************"..., 4096, 1040384) = 4096
pwrite(3, "********************************"..., 4096, 1044480) = 4096
fsync(3)                                = 0

The setup phase of ioping almost always hangs, while fsync-tester sometimes works fine. Could someone update fsync-tester to write multiple small blocks? My C skills suck ;)
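
(Not part of the original post: as a rough stand-in until fsync-tester can do this itself, one could approximate the "many small writes, then one fsync" pattern with dd, which writes the file in 4 KiB blocks and issues a single fsync at the end via conv=fsync.)

# Hedged sketch: emulate ioping's preparation phase (many 4 KiB writes
# followed by one fsync); the target path is a placeholder.
dd if=/dev/zero of=/mnt/sda/manysmallwrites bs=4k count=256 conv=fsync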

Update 2011-07-02:

This problem does not occur with iSCSI. I tried this with the OpenIndiana COMSTAR iSCSI server. But iSCSI does not give you the easy access to the VMDK files that NFS does, so you can't simply move them between hosts with snapshots and rsync.

Update 2011-07-06:

This is part of a Wireshark capture, taken by a third VM on the same vSwitch. This all happens on the same host; no physical network is involved.

I started ioping around time 20. No packets were sent until the five-second delay was over:

No.  Time        Source                Destination           Protocol Info
1082 16.164096   192.168.250.10        192.168.250.20        NFS      V3 WRITE Call (Reply In 1085), FH:0x3eb56466 Offset:0 Len:84 FILE_SYNC
1083 16.164112   192.168.250.10        192.168.250.20        NFS      V3 WRITE Call (Reply In 1086), FH:0x3eb56f66 Offset:0 Len:84 FILE_SYNC
1084 16.166060   192.168.250.20        192.168.250.10        TCP      nfs > iclcnet-locate [ACK] Seq=445 Ack=1057 Win=32806 Len=0 TSV=432016 TSER=769110
1085 16.167678   192.168.250.20        192.168.250.10        NFS      V3 WRITE Reply (Call In 1082) Len:84 FILE_SYNC
1086 16.168280   192.168.250.20        192.168.250.10        NFS      V3 WRITE Reply (Call In 1083) Len:84 FILE_SYNC
1087 16.168417   192.168.250.10        192.168.250.20        TCP      iclcnet-locate > nfs [ACK] Seq=1057 Ack=773 Win=4163 Len=0 TSV=769110 TSER=432016
1088 23.163028   192.168.250.10        192.168.250.20        NFS      V3 GETATTR Call (Reply In 1089), FH:0x0bb04963
1089 23.164541   192.168.250.20        192.168.250.10        NFS      V3 GETATTR Reply (Call In 1088)  Directory mode:0777 uid:0 gid:0
1090 23.274252   192.168.250.10        192.168.250.20        TCP      iclcnet-locate > nfs [ACK] Seq=1185 Ack=889 Win=4163 Len=0 TSV=769821 TSER=432716
1091 24.924188   192.168.250.10        192.168.250.20        RPC      Continuation
1092 24.924210   192.168.250.10        192.168.250.20        RPC      Continuation
1093 24.924216   192.168.250.10        192.168.250.20        RPC      Continuation
1094 24.924225   192.168.250.10        192.168.250.20        RPC      Continuation
1095 24.924555   192.168.250.20        192.168.250.10        TCP      nfs > iclcnet_svinfo [ACK] Seq=6893 Ack=1118613 Win=32625 Len=0 TSV=432892 TSER=769986
1096 24.924626   192.168.250.10        192.168.250.20        RPC      Continuation
1097 24.924635   192.168.250.10        192.168.250.20        RPC      Continuation
1098 24.924643   192.168.250.10        192.168.250.20        RPC      Continuation
1099 24.924649   192.168.250.10        192.168.250.20        RPC      Continuation
1100 24.924653   192.168.250.10        192.168.250.20        RPC      Continuation

2nd Update 2011-07-06:

There seems to be some influence from TCP window sizes. I was not able to reproduce this problem using FreeNAS (based on FreeBSD) as an NFS server. The Wireshark captures showed TCP window updates to 29127 bytes at regular intervals. I did not see them with OpenIndiana, which uses larger window sizes by default.
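
(Not from the original post: for anyone who wants to look for the same pattern in their own captures, recent Wireshark/tshark builds can isolate the window updates with a display filter along these lines; the capture filename is a placeholder.)

# Hedged sketch: show only the TCP window-update packets in a capture
tshark -r nfs-capture.pcap -Y "tcp.analysis.window_update"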

I can no longer reproduce this problem if I set the following options in OpenIndiana and restart the NFS server:

ndd -set /dev/tcp tcp_recv_hiwat 8192 # default is 128000
ndd -set /dev/tcp tcp_max_buf 1048575 # default is 1048576

But this kills performance: writing from /dev/zero to a file with dd_rescue drops from 170 MB/s to 80 MB/s.

Update 2011-07-07:

I've uploaded this tcpdump capture (it can be analyzed with Wireshark). In this case, 192.168.250.2 is the NFS server (OpenIndiana b148) and 192.168.250.10 is the ESXi host.

Things I've tested during this capture:

Started "ioping -w 5 -i 0.2 ." at time 30, 5 second hang in setup, completed at time 40.

Started "ioping -w 5 -i 0.2 ." at time 60, 5 second hang in setup, completed at time 70.

Started "fsync-tester" at time 90, with the following output, stopped at time 120:

fsync time: 0.0248
fsync time: 5.0197
fsync time: 5.0287
fsync time: 5.0242
fsync time: 5.0225
fsync time: 0.0209

2nd Update 2011-07-07:

Tested another NFS server VM, this time NexentaStor 3.0.5 Community Edition: it shows the same problems.

Update 2011-07-31:

I can also reproduce this problem on the new ESXi build 4.1.0.433742.


Source: (StackOverflow)

Are there any free NFS clients for Windows 7?

I have Windows 7 Professional, but the NFS Client for Windows is only included in the Enterprise and Ultimate editions.

I would like to connect some Windows machines to our NFS server so I can drop the Samba Server we are currently using.

Are there any free NFS clients for Windows 7?

I know there are some tools like ProNFS and axeNFS but they are not free.


Source: (StackOverflow)

Which ports do I need to open in the firewall to use NFS?

I'm running Ubuntu 11.10 and am setting up NFS to share a directory with several other servers. Which ports need to be opened on the firewall?
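
(Not part of the original question, but for orientation: on a stock Ubuntu setup rpcbind listens on port 111 and nfsd on 2049, while mountd, statd, and lockd pick dynamic ports unless they are pinned, e.g. via /etc/default/nfs-kernel-server and /etc/default/nfs-common. A hedged sketch of how one might check and open the fixed ports:)

# Hedged sketch: list the ports the RPC services actually registered,
# then open the fixed ones with ufw (111 for rpcbind, 2049 for nfsd).
rpcinfo -p localhost
sudo ufw allow 111
sudo ufw allow 2049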


Source: (StackOverflow)

How to properly set permissions for an NFS folder? Permission denied on the mounting end.

I'm trying to connect to an NFS folder on my dev server. The folder on the dev server is owned by user darren and group darren.

When I export it and mount it on my Mac using Disk Utility, it mounts, but when I try to open the folder it says I do not have permission. I have set rw, sync, and no_subtree_check. The user on the Mac is darren, with a bunch of groups.

Do I need to have the same group and user set to access the folder?
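
(Not part of the original question: with NFSv3 the server matches on numeric uid/gid values rather than on user names, so a quick sanity check, sketched below, is to compare the numbers on both ends.)

# Hedged sketch: the uid/gid numbers, not the names, have to line up
id darren        # run on the Linux dev server
id darren        # run on the Mac client and compare the uid= and gid= values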


Source: (StackOverflow)

Understanding NFS4 (Linux server)

I've been a bit bothered by NFS4 on Linux. Some information 'out there' seems to conflict with other information, and other information appears hard to find. So here are a couple of things that caught my attention; hopefully someone out there can shed some light on this.

This question focuses exclusively on NFS4 without Kerberos etc.

1. Exports

There is ambiguous information in the exports manpage on the structure of /etc/exports.

To quote from exports(5):

Also, each line may have one or more specifications for default options after the path name, in the form of a dash ("-") followed by an option list.

The option list is used for all subsequent exports on that line only.

What does "subsequent exports on that line only" mean?

1.2 fsid=0 not required anymore?

I was searching for fsid when I found a comment on the linux-nfs list stating that fsid=0 is not required anymore. Now I'm just confused: do I need it with NFSv4 or not?!

2. Non-exported directory still mountable

Say I have the following tree:

/exp
/exp/users
/exp/distr
/exp/distr/archlinux
/exp/distr/debian

And I have the following entries in fstab:

/dev/disk/by-label/users  /mnt/users  ext4  defaults  0  0
/dev/disk/by-label/distr  /mnt/distr  ext4  defaults  0  0
/mnt/users                /exp/users  none  bind      0  0
/mnt/distr                /exp/distr  none  bind      0  0

And my exports is exactly this:

/exp       192.168.1.0/24(fsid=0,rw,async,no_subtree_check,no_root_squash)
/exp/distr 192.168.1.0/24(rw,async,no_subtree_check,no_root_squash)

And exportfs -arv shows:

exporting 192.168.1.0/24:/exp/distr
exporting 192.168.1.0/24:/exp

Then why am I able to do this and get no error on a client:

mount -t nfs4 server:/exp/users /tmp/test

Even though /exp/users is not exported? I didn't export this directory, and while I don't see the contents of /dev/disk/by-label/users unless I specify crossmnt, I am still able to write to the directory. Everything I write there goes to the underlying directory of /exp/users, which can be seen when I run umount /exp/users; ls /exp/users.

3. The odd case of showmount -d server

As stated in rpc.mountd(8), this command should display directories that are either currently mounted by clients or stale entries in /var/lib/nfs/rmtab, as the manpage explains:

The rpc.mountd daemon registers every successful MNT request by adding an entry to the /var/lib/nfs/rmtab file. When receiving a UMNT request from an NFS client, rpc.mountd simply removes the matching entry from /var/lib/nfs/rmtab, as long as the access control list for that export allows that sender to access the export.

(...)

Note, however, that there is little to guarantee that the contents of /var/lib/nfs/rmtab are accurate. A client may continue accessing an export even after invoking UMNT. If the client reboots without sending a UMNT request, stale entries remain for that client in /var/lib/nfs/rmtab.

After reading this I surely wonder:

  1. Isn't it terribly insecure to just expose this type of client information?
  2. Aren't unaware server admins bound to end up with an rmtab full of stale clients?
  3. Is this the reason that clients mounting nfs4 directories with mount -v see output like "nothing was mounted" even though something was mounted?

I have a lot of other questions regarding nfs4, but I'll keep it at this for the moment.. :)


Source: (StackOverflow)

exportfs: Warning: /home/user/share does not support NFS export

'exportfs -r' gives me this error when I try to export /home/user/share (ext4):

exportfs: Warning: /home/user/share does not support NFS export.

/etc/exports:

/home/user/share 192.168.1.3 (rw,no_subtree_check)

The system is Ubuntu 10.04 with the nfs-kernel-server package. Any ideas why this is happening? Is it because of ext4?


Source: (StackOverflow)

"Stale NFS file handle" after reboot

On the server node, it is possible to access an exported folder. However, after reboots (both server and client), the folder is no longer accessible from the clients.

On server

# ls /data
Folder1
Forlder2

and the /etc/exports file contains

/data 192.168.1.0/24(rw,no_subtree_check,async,no_root_squash)

On client

# ls /data
ls: cannot access /data: Stale NFS file handle

I have to say that there was no problem with the shared folder from the client side; however, after the reboots (server and client), I see this message.

Any way to fix that?
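
(Not part of the original question: one hedged guess is that the export's file handle changes across server reboots, which pinning a fixed fsid on the export would confirm or rule out.)

# Hypothetical variant of the export above with a fixed fsid; an untested guess.
/data 192.168.1.0/24(rw,no_subtree_check,async,no_root_squash,fsid=1)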


Source: (StackOverflow)

CentOS 6 + LDAP + NFS. File ownership is stuck on "nobody"

I've been trying to get LDAP authentication and NFS-exported home directories working on CentOS 6 for a few days now. I've gotten to the point where I can log in to the client machine using the username and password in LDAP. On the client, /home and /opt are mounted via fstab over NFS. However, every file in both /opt and /home is owned by nobody:nobody (uid 99, gid 99) on the client.

However, my uid and gid appear to be set properly:

-bash-4.1$ id
uid=3000(myusername) gid=3000(employees) groups=3000(employees)

What else can I check? Here are some of the config files on my client:

/etc/nsswitch.conf

passwd:     files sss
shadow:     files sss
group:      files sss

hosts:      files dns

bootparams: nisplus [NOTFOUND=return] files

ethers:     files
netmasks:   files
networks:   files
protocols:  files
rpc:        files
services:   files

netgroup:   files sss

publickey:  nisplus

automount:  files ldap
aliases:    files nisplus

/etc/sssd/sssd.conf

[sssd]
config_file_version = 2
services = nss, pam

domains = default
[nss]

[pam]


[domain/default]
auth_provider = ldap
ldap_id_use_start_tls = True
chpass_provider = ldap
cache_credentials = True
krb5_realm = EXAMPLE.COM
ldap_search_base = dc=mycompany,dc=com
id_provider = ldap
ldap_uri = ldaps://server.subdomain.mycompany.com
krb5_kdcip = kerberos.example.com
ldap_tls_cacertdir = /etc/openldap/cacerts

# Configure client certificate auth.
ldap_tls_cert = /etc/openldap/cacerts/client.pem
ldap_tls_key = /etc/openldap/cacerts/client.pem
ldap_tls_reqcert = demand

/etc/fstab

/dev/mapper/vg_main-lv_root /                       ext4    defaults        1 1
UUID=4e43a15d-4dc0-4836-8fa6-c3445fde756c /boot                   ext4    defaults        1 2
/dev/mapper/vg_main-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
storage1:/nas/home  /home  nfs   soft,intr,rsize=8192,wsize=8192
storage1:/nas/opt  /opt  nfs   soft,intr,rsize=8192,wsize=8192

authconfig output:

[root@test1 ~]# authconfig --test
caching is disabled
nss_files is always enabled
nss_compat is disabled
nss_db is disabled
nss_hesiod is disabled
 hesiod LHS = ""
 hesiod RHS = ""
nss_ldap is enabled
 LDAP+TLS is enabled
 LDAP server = "ldaps://server.subdomain.mycompany.com"
 LDAP base DN = "dc=mycompany,dc=com"
nss_nis is disabled
 NIS server = ""
 NIS domain = ""
nss_nisplus is disabled
nss_winbind is disabled
 SMB workgroup = ""
 SMB servers = ""
 SMB security = "user"
 SMB realm = ""
 Winbind template shell = "/bin/false"
 SMB idmap uid = "16777216-33554431"
 SMB idmap gid = "16777216-33554431"
nss_sss is disabled by default
nss_wins is disabled
nss_mdns4_minimal is disabled
DNS preference over NSS or WINS is disabled
pam_unix is always enabled
 shadow passwords are enabled
 password hashing algorithm is sha512
pam_krb5 is disabled
 krb5 realm = "EXAMPLE.COM"
 krb5 realm via dns is disabled
 krb5 kdc = "kerberos.example.com"
 krb5 kdc via dns is disabled
 krb5 admin server = "kerberos.example.com"
pam_ldap is enabled
 LDAP+TLS is enabled
 LDAP server = "ldaps://server.subdomain.mycompany.com"
 LDAP base DN = "dc=mycompany,dc=com"
 LDAP schema = "rfc2307"
pam_pkcs11 is disabled
 use only smartcard for login is disabled
 smartcard module = ""
 smartcard removal action = ""
pam_fprintd is enabled
pam_winbind is disabled
 SMB workgroup = ""
 SMB servers = ""
 SMB security = "user"
 SMB realm = ""
pam_sss is disabled by default
 credential caching in SSSD is enabled
 SSSD use instead of legacy services if possible is enabled
pam_cracklib is enabled (try_first_pass retry=3 type=)
pam_passwdqc is disabled ()
pam_access is disabled ()
pam_mkhomedir or pam_oddjob_mkhomedir is enabled ()
Always authorize local users is enabled ()
Authenticate system accounts against network services is disabled
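
(Not part of the original post: since nobody:nobody ownership over NFSv4 usually points at ID mapping rather than at LDAP itself, one hedged extra check is whether the idmap domain matches on client and server and whether the mapping daemon is running.)

# Hedged sketch: compare the NFSv4 idmap domain on client and server,
# and confirm the mapping daemon is up (CentOS 6 service name assumed).
grep -i '^ *Domain' /etc/idmapd.conf
service rpcidmapd status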

Source: (StackOverflow)

chown on a mounted NFS partition gives "Operation not permitted"

I have a remote partition that I have mounted locally using NFS.

'mount' gives

192.168.3.1:/mnt/storage-pools/ on /pools type nfs (rw,addr=192.168.3.1)

On the server I have this in exports:

/mnt/storage-pools   *(rw,insecure,sync,no_subtree_check)

Then I try

 touch /pools/test1
 ls -lah
 -rw-r--r--  1 65534 65534    0 Dec 13 20:56 test1
 chown root.root test1
 chown: changing ownership of `test1': Operation not permitted

What am I missing? I'm pulling my hair out.
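
(Not part of the original post: the 65534 owner suggests root is being squashed to nobody on the server side; the sketch below is one hedged way to test that theory, not a recommended permanent setting.)

# Hypothetical test: add no_root_squash to the export (/etc/exports on the
# server), re-apply the exports, then retry the chown from the client.
/mnt/storage-pools   *(rw,insecure,sync,no_subtree_check,no_root_squash)
exportfs -ra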


Source: (StackOverflow)

NFS v3 versus v4

I am wondering why NFS v4 would be so much faster than NFS v3 and if there are any parameters on v3 that could be tweaked.

I mount a file system

sudo mount  -o  'rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=4'  toto:/test /test

and then run

 dd if=/test/file  of=/dev/null bs=1024k

I can read at 200-400 MB/s, but when I change the version to vers=3, remount, and rerun the dd, I only get 90 MB/s. The file I'm reading is an in-memory file on the NFS server. Both sides of the connection are Solaris and have 10 GbE NICs. I avoid any client-side caching by remounting between all tests. I used DTrace on the server to measure how fast data is being served via NFS. For both v3 and v4 I changed:

 nfs4_bsize
 nfs3_bsize

from the default 32K to 1M (on v4 I maxed out at 150 MB/s with 32K). I've tried tweaking

  • nfs3_max_threads
  • clnt_max_conns
  • nfs3_async_clusters

to improve the v3 performance, but no go.

On v3, if I run four parallel dd's, the throughput goes down from 90 MB/s to 70-80 MB/s, which leads me to believe the problem is some shared resource; if so, I'm wondering what it is and whether I can increase that resource.
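
(For reference, the v3 comparison runs mentioned above used the same mount line with only the version changed, i.e. something like the following.)

# NFSv3 variant of the mount shown earlier; only vers= differs
sudo mount -o 'rw,bg,hard,nointr,rsize=1048576,wsize=1048576,vers=3' toto:/test /test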

DTrace code to get window sizes:

#!/usr/sbin/dtrace -s
#pragma D option quiet
#pragma D option defaultargs

inline string ADDR=$$1;

dtrace:::BEGIN
{
       TITLE = 10;
       title = 0;
       printf("starting up ...\n");
       self->start = 0;
}

tcp:::send, tcp:::receive
/   self->start == 0  /
{
     walltime[args[1]->cs_cid]= timestamp;
     self->start = 1;
}

tcp:::send, tcp:::receive
/   title == 0  &&
     ( ADDR == NULL || args[3]->tcps_raddr == ADDR  ) /
{
      printf("%4s %15s %6s %6s %6s %8s %8s %8s %8s %8s  %8s %8s %8s  %8s %8s\n",
        "cid",
        "ip",
        "usend"    ,
        "urecd" ,
        "delta"  ,
        "send"  ,
        "recd"  ,
        "ssz"  ,
        "sscal"  ,
        "rsz",
        "rscal",
        "congw",
        "conthr",
        "flags",
        "retran"
      );
      title = TITLE ;
}

tcp:::send
/     ( ADDR == NULL || args[3]->tcps_raddr == ADDR ) /
{
    nfs[args[1]->cs_cid]=1; /* this is an NFS thread */
    this->delta= timestamp-walltime[args[1]->cs_cid];
    walltime[args[1]->cs_cid]=timestamp;
    this->flags="";
    this->flags= strjoin((( args[4]->tcp_flags & TH_FIN ) ? "FIN|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_SYN ) ? "SYN|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_RST ) ? "RST|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_PUSH ) ? "PUSH|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_ACK ) ? "ACK|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_URG ) ? "URG|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_ECE ) ? "ECE|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_CWR ) ? "CWR|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags == 0 ) ? "null " : ""),this->flags);
    printf("%5d %14s %6d %6d %6d %8d \ %-8s %8d %6d %8d  %8d %8d %12d %s %d  \n",
        args[1]->cs_cid%1000,
        args[3]->tcps_raddr  ,
        args[3]->tcps_snxt - args[3]->tcps_suna ,
        args[3]->tcps_rnxt - args[3]->tcps_rack,
        this->delta/1000,
        args[2]->ip_plength - args[4]->tcp_offset,
        "",
        args[3]->tcps_swnd,
        args[3]->tcps_snd_ws,
        args[3]->tcps_rwnd,
        args[3]->tcps_rcv_ws,
        args[3]->tcps_cwnd,
        args[3]->tcps_cwnd_ssthresh,
        this->flags,
        args[3]->tcps_retransmit
      );
    this->flags=0;
    title--;
    this->delta=0;
}

tcp:::receive
/ nfs[args[1]->cs_cid] &&  ( ADDR == NULL || args[3]->tcps_raddr == ADDR ) /
{
    this->delta= timestamp-walltime[args[1]->cs_cid];
    walltime[args[1]->cs_cid]=timestamp;
    this->flags="";
    this->flags= strjoin((( args[4]->tcp_flags & TH_FIN ) ? "FIN|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_SYN ) ? "SYN|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_RST ) ? "RST|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_PUSH ) ? "PUSH|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_ACK ) ? "ACK|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_URG ) ? "URG|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_ECE ) ? "ECE|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags & TH_CWR ) ? "CWR|" : ""),this->flags);
    this->flags= strjoin((( args[4]->tcp_flags == 0 ) ? "null " : ""),this->flags);
    printf("%5d %14s %6d %6d %6d %8s / %-8d %8d %6d %8d  %8d %8d %12d %s %d  \n",
        args[1]->cs_cid%1000,
        args[3]->tcps_raddr  ,
        args[3]->tcps_snxt - args[3]->tcps_suna ,
        args[3]->tcps_rnxt - args[3]->tcps_rack,
        this->delta/1000,
        "",
        args[2]->ip_plength - args[4]->tcp_offset,
        args[3]->tcps_swnd,
        args[3]->tcps_snd_ws,
        args[3]->tcps_rwnd,
        args[3]->tcps_rcv_ws,
        args[3]->tcps_cwnd,
        args[3]->tcps_cwnd_ssthresh,
        this->flags,
        args[3]->tcps_retransmit
      );
    this->flags=0;
    title--;
    this->delta=0;
}

Output looks like this (not from this particular situation):

cid              ip  usend  urecd  delta     send     recd      ssz    sscal      rsz     rscal    congw   conthr     flags   retran
  320 192.168.100.186    240      0    272      240 \             49232      0  1049800         5  1049800         2896 ACK|PUSH| 0
  320 192.168.100.186    240      0    196          / 68          49232      0  1049800         5  1049800         2896 ACK|PUSH| 0
  320 192.168.100.186      0      0  27445        0 \             49232      0  1049800         5  1049800         2896 ACK| 0
   24 192.168.100.177      0      0 255562          / 52          64060      0    64240         0    91980         2920 ACK|PUSH| 0
   24 192.168.100.177     52      0    301       52 \             64060      0    64240         0    91980         2920 ACK|PUSH| 0

Some of the column headers:

usend - unacknowledged send bytes
urecd - unacknowledged received bytes
ssz - send window
rsz - receive window
congw - congestion window

I'm planning on taking snoops of the dd's over v3 and v4 and comparing them. I've already done it, but there was too much traffic, and I used a disk file instead of a cached file, which made comparing timings meaningless. I will run other snoops with cached data and no other traffic between the boxes. TBD.

Additionally, the network guys say there are no traffic shapers or bandwidth limiters on the connections.


Source: (StackOverflow)

Unmount an NFS mount where the NFS server has disappeared

Server A used to be an NFS server. Server B was mounting an export from it. Everything was fine. Then A died. Just switched off. Gone. Vanished.

However, that folder is still mounted on B. I obviously can't cd into it or anything, and umount /mnt/myfolder just hangs and won't unmount it. Is there any way to unmount it without restarting B?

Both client and server are Linux machines.
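
(Not part of the original question: the usual hedged first attempts are a forced unmount followed by a lazy unmount, which detaches the mount point even though the dead server can no longer answer.)

# Hedged sketch: force, then lazily detach, the dead NFS mount
umount -f /mnt/myfolder
umount -l /mnt/myfolder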


Source: (StackOverflow)

Files mounted over NFSv4 are owned by 4294967294, UIDs and GIDs match

I have two identical Linux machines (identical images launched in Amazon EC2), and I am trying to mount an exported directory over NFSv4. Here is what the mounted directory looks like on the client machine:

root@server:~# ls -l /websites/
drwxr-xr-x  6 4294967294 4294967294   92 2010-01-01 20:21 logs
drwxr-xr-x  2 4294967294 4294967294   20 2009-12-23 01:14 monit.d
...

I double-checked to make sure that the UIDs match.

Here is the mount command I run from the client:

/sbin/mount.nfs4 $MASTER_DN:/ /websites -o rw,_netdev,async

And here is the /etc/exports entry on the server machine:

/websites 10.0.0.0/8(fsid=0,no_subtree_check,rw,no_root_squash)

Source: (StackOverflow)

mount.nfs: access denied by server while mounting

On my Ubuntu system, I have this line in /etc/fstab:

myserver:/home/me /mnt/me nfs rsize=8192,wsize=8192,timeo=14,intr

When I do

sudo mount -a

I get:

mount.nfs: access denied by server while mounting myserver:/home/me

How can I diagnose this problem? The nfs server is also Ubuntu.

Additional details: I am able to mount this NFS share from other Ubuntu clients on the same network with no problem. However, the problematic client is different in that it is running inside VirtualBox on a Windows system. I can ping "myserver" fine from the problematic client.

EDIT: /etc/exports on "myserver":

/home/me *(rw,all_squash,async,no_subtree_check,anonuid=1000,anongid=1000)

/etc/hosts.allow and /etc/hosts.deny on "myserver" contain only comments. And keep in mind that I can connect fine from other clients on the same network.
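
(Not part of the original question: two hedged diagnostic steps are to ask the server what it exports, as seen from the failing client, and to watch the server's syslog while retrying the mount, since rpc.mountd often logs why it refused a request.)

# Hedged sketch: run from the failing client / on the server respectively
showmount -e myserver                 # client: what does the server export to me?
sudo tail -f /var/log/syslog          # server: watch for mountd messages while retrying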


Source: (StackOverflow)

Best filesystem choices for NFS storing VMware disk images

Currently we use an iSCSI SAN as storage for several VMware ESXi servers. I am investigating the use of an NFS target on a Linux server for additional virtual machines. I am also open to the idea of using an alternative operating system (like OpenSolaris) if it will provide significant advantages.

What Linux-based filesystem favours very large contiguous files (like VMware's disk images)? Alternatively, how have people found ZFS on OpenSolaris for this kind of workload?

(This question was originally asked on SuperUser; feel free to migrate answers here if you know how).


Source: (StackOverflow)