You can run the following commands on a domain controller to help debug it:
ipconfig /all > c:\ipconfig.txt (from each DC/DNS Server)
dcdiag /v /c /d /e /s: > c:\dcdiag.txt
dcdiag /test:dns /s: /DnsBasic > c:\dcdiag-dnsbasic.txt
repadmin /showrepl dc* /verbose /all /intersite > c:\showrepl.txt (dc* is a placeholder for the common starting name of the DCs, if more than one DC exists)
repadmin /replsum > c:\replsum.txt
You can open the "Active Directory Users and Computers" management console against a specific domain with the following shortcut :
You can force the "Active Directory Users and Computers" management console to run in 32-bit mode with the following shortcut :
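The shortcuts themselves are not reproduced above. Assuming the standard dsa.msc snap-in, they would look like the following sketch (contoso.com is a placeholder domain name; /domain= is a standard dsa.msc parameter and /32 is a standard MMC switch):

```batch
rem Open ADUC focused on a specific domain (replace contoso.com with your domain)
dsa.msc /domain=contoso.com

rem Force the ADUC console into the 32-bit MMC on a 64-bit host
mmc /32 %systemroot%\system32\dsa.msc
```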
Check the MTU size configured on the port group designated as the NFS VMkernel port group. If it is set to anything other than 1500 or 9000, test connectivity using the vmkping command:
# vmkping -I vmkN -s nnnn xxx.xxx.xxx.xxx
* vmkN is vmk0, vmk1, etc., depending on which vmknic is assigned to NFS.
* Note: The -I option to select the VMkernel interface is available only in ESXi 5.1 and later. Without this option in 4.x/5.0, the host uses the VMkernel interface associated with the destination network being pinged, according to the host routing table. The host routing table can be viewed using the esxcfg-route -l command.
* nnnn is the MTU size minus 28 bytes. For example, for an MTU size of 9000, use 8972. 8 bytes are used for the ICMP header, and 20 bytes are used for the IP header.
* xxx.xxx.xxx.xxx is the IP address of the target NFS storage.
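The payload-size arithmetic for nnnn can be checked with a quick shell calculation (a sketch; the MTU value below is just an example):

```shell
# vmkping payload size = MTU - 20 bytes (IP header) - 8 bytes (ICMP header)
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"  # prints 8972
```

The same formula gives 1472 for a standard 1500-byte MTU.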
To reveal the vmknics, run the command:
Check the output for the vmkN interface associated with NFS.
Use netcat (nc) to see if you can reach the NFS server nfsd TCP/UDP port (default 2049) on the storage array from the host:
# nc -z array-IP 2049
Connection to 10.1.10.100 2049 port [tcp/http] succeeded!
Note: The netcat command is available with ESX 4.x and ESXi 4.1 and later.
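Where nc is not available, a similar reachability check can be sketched in plain bash using the /dev/tcp pseudo-device (a generic shell fallback, not a VMware tool; the IP and port below are placeholders):

```shell
# Fallback TCP port check without netcat, using bash's /dev/tcp redirection.
# Prints "open" if the TCP connection succeeds, "closed" otherwise.
check_tcp() {
    local host=$1 port=$2
    if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "open"
    else
        echo "closed"
    fi
}

check_tcp 10.1.10.100 2049   # placeholder NFS array IP and nfsd port
```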
* Verify that the ESX host can vmkping the NFS server. For more information, see Testing VMkernel network connectivity with the vmkping command (1003728).
* Verify that the NFS host can ping the VMkernel IP of the ESX host.
* Ensure that you can ping and vmkping the NFS server address.
* Ensure that any firewalls between the ESX host and NFS Server do not block the connection.
* Ensure the access on the NFS server is set to Anonymous user, Root Access (no_root_squash), and Read/Write.
* Try to restore the mount with this command:
# esxcfg-nas -r
* Check to see if the datastore has mounted after trying to restore it with:
# esxcfg-nas -l
* If it is not mounted, try remounting it using this command:
# esxcfg-nas -a datastore_name -o nfs_server_hostname/ip -s mount_point
* Alternatively, try to add the datastore again using the vSphere Client.
* If RPC protocol filtering is enabled for security reasons, the connection is refused. Open the firewall ports required for the RPC protocol on the network, and check the physical switch for any RPC protocol filtering.
* An ESXi/ESX host cannot mount an NFS datastore.
* The /var/log/messages (ESXi) or /var/log/vmkernel (ESX) log files contain errors similar to:
* Jun 15 13:01:39 esx-02 vmkernel: 2:13:51:38.221 cpu2:4287)WARNING: NFS: 201: Got error 13 from mount call
Jun 15 13:01:39 esx-02 vmkernel: 2:13:51:38.221 cpu9:4262)WARNING: NFS: 944: MOUNT failed with MOUNT status 13 (Permission denied) trying to mount Server (192.168.10.10) Path (/opt/esx-mounts)
* The vobd.log file (located at /var/log/) on the ESXi 5.x host contains errors similar to:
* [esx.problem.vmfs.nfs.mount.error.perm.denied] NFS mount ip-address:mountpoint failed: The mount request was denied by the NFS server. Check that the export exists and that the client is permitted to mount it.
Source : http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003728
In the command shell, run the command:
# vmkping x.x.x.x
where x.x.x.x is the hostname or IP address of the server that you want to ping.
If you have Jumbo Frames configured in your environment, run the vmkping command with the -d (do not fragment) and -s (packet size) options:
# vmkping -d -s 8972 x.x.x.x
In ESXi 5.1 and later, you can specify which VMkernel port to use for outgoing ICMP traffic with the -I option:
# vmkping -I vmkX x.x.x.x
Source : http://www.sysadmintutorials.com/vmware-nfs-all-paths-down-apd-update-esxi-5-5-update-1/
Download the VDDK from the VMware site; it’s free. Then run vmware-mount with the following options:
vmware-mount driveletter: "[vmfs_datastore] vmname/diskname.vmdk" /i:"datacentername/vm/vmname" /h:vcname /u:username /s:password
To unmount a vmdk, run:
vmware-mount driveletter: /d
Note, however, that you can mount vmdks only from powered-off VMs. There is a workaround: you can mount a vmdk from a running VM in read-only mode if you first take a VM snapshot. The original vmdk is then no longer locked by the ESXi server, and you can mount it.