4 Replies Latest reply: Aug 22, 2013 7:20 AM by abknoc

FAS3140 CIFS/NFS/ISCSI connection issue

abknoc

Hey guys,

 

We have a problem with our NFS mounts on VMware.  Our current environment:

 

2 x FAS3140 HA cluster with 10 GbE NICs, running Data ONTAP 8.1.2

5 x ESX hosts, all with 1 GbE NICs

 

Intermittently, the NFS storage on the ESX hosts drops out.  It only happens on random hosts, and the outages follow no consistent pattern of host or time.  We had just upgraded from a 4-port 1 GbE NIC to a 2-port 10 GbE NIC.  With the 1 GbE NIC there were no problems, but since the 10 GbE upgrade we have been hitting all sorts of little problems.
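For context, these are the sorts of checks we have been running on the 10 gig side since the swap (standard 7-mode commands; interface names match our rc file):

```
ifgrp status vif     # ifgrp membership and link state of e4a/e4b
ifstat e4a           # per-interface counters (watch for errors and drops)
ifstat e4b
```

Nothing in there has pointed us at an obvious cause so far.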

 

Our /etc/rc file on the NetApp looks like this:

 

# single-mode ifgrp over the 10 gig ports, with VLANs 67 and 80 on top
ifgrp create single vif e4a e4b
vlan create vif 67 80
# base ifgrp serves CIFS, vif-67 serves NFS, vif-80 serves iSCSI
ifconfig vif 172.67.1.35 netmask 255.255.0.0 partner vif
ifconfig vif-67 172.67.1.37 netmask 255.255.0.0 partner vif-67
ifconfig vif-80 172.80.1.21 netmask 255.255.0.0 partner vif-80
route add default 172.67.1.222 1
routed on
options dns.domainname our.domain.net
options dns.enable on
options nis.enable off
savecore

 

Currently:

vif is serving CIFS

vif-67 is serving NFS

vif-80 is serving iSCSI

e4a and e4b are our 10 GbE ports

e0a and e0b are our 1 GbE ports and are not in use right now.

 

We are not using e0a and e0b because we cannot figure out how the rc file needs to be configured to serve CIFS over them while keeping NFS/iSCSI on e4a and e4b.  With every config we have tried, the NetApp will not respond on e0a and e0b.
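For reference, the kind of rc addition we have been trying looks roughly like this (the ifgrp name vif1g and the .40 address are placeholders, and pinning CIFS with interface.blocked is just our understanding of how it should work):

```
# second single-mode ifgrp over the unused 1 gig ports (name and IP are illustrative)
ifgrp create single vif1g e0a e0b
ifconfig vif1g 172.67.1.40 netmask 255.255.0.0 partner vif1g
# block CIFS on the 10 gig interfaces so it is only served over vif1g
options interface.blocked.cifs vif,vif-67,vif-80
```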

 

Can anyone shed some light or tell me what I am missing?  Any help would be greatly appreciated.
