22 Replies Latest reply: Apr 15, 2009 11:56 AM by loliverone

Using NFS over iSCSI for VMware access

rajdeepsengupta

We have recently moved some of our application servers to VMware ESX Server, using a NetApp filer for the datastore. Since I have an NFS license on the NetApp system, I had to decide how we should access the datastore: iSCSI or NFS. The majority of people seem to use iSCSI, and even a VMware engineer suggested it, while NetApp says that NFS performance is as good as iSCSI, and in some cases better. I also knew that NetApp's NFS access is very stable and performance-friendly, so I chose NFS to access the datastore. An additional advantage is that I do not need a SnapRestore or any other license to restore from backup: with NFS, all the snapshot copies are directly accessible under the .snapshot directory. We have created scripts that take a snapshot every 15 minutes for very critical servers, so if I ever have an issue, I can take the most recent 15-minute snapshot copy and make it the production copy in a matter of seconds.

 

Now my question is: has any customer done some real testing to prove that NFS access is at least equivalent to iSCSI in performance, if not better? The load on our application servers will only increase in the time to come, so I want to make sure my decision was correct.
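As an aside, the 15-minute snapshot scripts described above could be as simple as a cron job driving the filer's snap command. This is only a sketch: the filer name, volume, mount point, and snapshot naming are purely illustrative, and snapshot rotation/cleanup is omitted.

  # crontab entry on an admin host: snapshot the datastore volume every 15 minutes
  */15 * * * * rsh filer1 snap create vmvol q15.$(date +\%H\%M)

  # to restore, the copies are visible under the volume's .snapshot directory,
  # e.g. copy a vmdk back from the 08:30 snapshot:
  cp /mnt/vmvol/.snapshot/q15.0830/vm1/vm1-flat.vmdk /mnt/vmvol/vm1/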

  • Re: Using NFS over iSCSI for VMware access
    karl.pottie

    We have done testing and are using NFS in a production VMWare environment.

    See my presentation for details. Unfortunately, the VMWare EULA doesn't allow me to publish the actual benchmark results, but you can be sure that NFS performance is at least as good as iSCSI.
  • Re: Using NFS over iSCSI for VMware access
    danpancamo

    We have over 1000 VMs on 35 ESX hosts on two FAS3070s, all over NFS, running in production since late 2006.

    • Re: Using NFS over iSCSI for VMware access
      charlesgillanders

      Hi,

       

      You mentioned that you are running VMware and using NetApp and NFS for storage. I've been doing some benchmarks over the last week or so, trying to pin down how we'll manage a migration from Microsoft Virtual Server to VMware ESXi.

       

      I've been unable to replicate the stated result that NFS performs at least as well as iSCSI. I'm using a simple hard disk tuning test (http://www.hdtune.com/); with small block sizes (4k) I can get around 40Mbps using iSCSI, while with NFS on the same filer I can only get 10% of that throughput. The situation is even worse with larger block sizes (512k): there iSCSI can hit 200-300 Mbps while NFS struggles to reach 14Mbps.

       

      I was wondering if you had any suggestions as to where I might look. I've searched around on the net and adjusted a "hidden" NFS option for the TCP receive window size (which had no effect), but other than that all I can find from NetApp is lots of white papers saying, in effect, "look, it works".
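      (For reference, the "hidden" option usually meant here is presumably Data ONTAP's nfs.tcp.recvwindowsize; the option name and value below are from memory, so verify them against your ONTAP release:

        options nfs.tcp.recvwindowsize 65536

      Mismatched flow control or jumbo-frame settings on the storage network are another common cause of a gap this large.)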

       

      Thanks.

       

      Charles

    • Re: Using NFS over iSCSI for VMware access
      loliverone

      Could you share your filer configuration with me? How many volumes do you use, what sizes, and how many aggregates/RAID groups and disks within the aggregates?

  • Re: Using NFS over iSCSI for VMware access
    strattonfinance

    We're using NFS for our ESX datastores and it's fantastic - so much easier than iSCSI, and faster as well.

    In the early stages I ran some basic performance tests to compare several different flavours of iSCSI (MS initiator in the VM, ESX VMFS, ESX RDM) and NFS. These were performed using IOMeter with some of the test configurations supplied in the SAN performance thread on the VMWare forums.

    The results showed that one of the flavours of iSCSI (ESX RDM, I think) was marginally faster than NFS in sequential reads, but for everything else (random read, random write, mixed, etc.) NFS was in all cases faster than iSCSI, sometimes by as much as 20%.

    All this was performed from a Windows 2008 x64 VM running on an un-loaded dual quad-core Intel box against a FAS2050C.

    Hope that helps.

    • Re: Using NFS over iSCSI for VMware access
      mheimberg@bnc.ch

      Hi Mathew

      We are just starting a migration from VMFS datastores over FCP on a competitor's storage system to an NFS datastore on NetApp.

      Now there are concerns about performance, so we have to run tests before and after.

      Could you provide me with some more links or IOMeter configs showing what exactly you tested and how?

       

      Thank you very much for your support.

       

      regards

      markus

      • Re: Using NFS over iSCSI for VMware access
        radek.kubka

        Markus,

         

        There is another side to this.

        Any LUN with a VMFS datastore on it can suffer from one problem: LUN-wide SCSI reservations. They occur when, for example, a VM is started or stopped. What this means is that at that particular moment no other ESX host can access the LUN. With a handful of ESX hosts and a fairly static environment (e.g. every VM always running) this does not necessarily impact performance significantly, yet in other scenarios it can pose a problem.

        And guess what? NFS doesn't do any LUN-wide SCSI locks, as there is no LUN! (All locking is done at the file level.)

        The issue described above cannot be measured by a simple disk throughput test; only something close to a real environment with all its characteristics (i.e. number of ESX hosts and VMs, usage patterns, etc.) can deliver meaningful results.

         

        Regards,

        Radek

        • Re: Using NFS over iSCSI for VMware access
          mheimberg@bnc.ch

          So I am really glad that we are moving to NFS now....

          Markus

          • Re: Using NFS over iSCSI for VMware access
            igeeksystems

            Think about cost from NetApp's perspective: why is iSCSI free while NFS is licensed? That tells you something from both the performance and the administrative standpoint; the two should be within about 10-15% of each other in performance. So if you are implementing a low-budget storage solution, why not use iSCSI? We, for our part, are using FAS3070-series systems for high-end enterprise applications, and they rock with NFS, using SnapMirror for DR replication. Thanks for the benchmark details and the good guide on the simulator as well.

            • Re: Using NFS over iSCSI for VMware access
              nicholas4704

              Agreed. For low-end systems iSCSI is better (cheaper).

              Actually, both approaches are good.

              Unfortunately the NFS license costs money, and for a Tier 4 or Tier 5 system it is pretty expensive (especially if you use a cluster, and you usually do).

              There are pros and cons to everything.

              Some of the NFS provisioning benefits can be had through LUN provisioning and dedup in the iSCSI world. Easy restores from snapshots (NFS) vs. LUN cloning (iSCSI).

              iSCSI has multipathing; NFS doesn't.

              You store data in a vmdk on NFS, so you'll need VMWare to get at your data.

              IMHO VMWare as a company likes iSCSI better, and I heard from a VMware guy that iSCSI is better.

              The NetApp NFS stack is really good, so I think if you have a large number of VMs you can go for it.

              P.S.

              VM snapshots did not work on NFS, but I think that was fixed recently in VMware.

              • Re: Using NFS over iSCSI for VMware access
                igeeksystems

                You're absolutely right: there are pros and cons to both protocols, and it really depends on how large your environment is and what types of features you want. From reading many blogs, iSCSI is not a bad solution at all, especially on NetApp gear, as we all know. As a matter of fact, my dev/test VMware cluster uses iSCSI, and it runs great and saves some money.

              • Re: Using NFS over iSCSI for VMware access
                mheimberg@bnc.ch

                On some points I cannot follow you:

                >Some NFS provisioning benefits could be get by LUN provisioning and dedup in iSCSI world.

                With NFS I get my space back transparently and immediately; the view from the storage side is the same as from ESX, isn't it?

                >Easy restores from snapshots (NFS) vs lun cloning (iSCSI)

                The point is that I can enter the snapshot directory directly and copy out the files I need, e.g. to compare two different *.vmx files. With iSCSI one must clone the LUN, map it, and rescan on ESX to get the new store... so NFS is much more admin-friendly.
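                As a sketch, that snapshot-directory workflow looks like this from any NFS client that has the datastore volume mounted (mount point and snapshot names purely illustrative):

                  ls /mnt/vmds/.snapshot                          # list the available snapshot copies
                  cp /mnt/vmds/.snapshot/hourly.0/vm1/vm1.vmx /tmp/old.vmx
                  diff /tmp/old.vmx /mnt/vmds/vm1/vm1.vmx         # compare the old and the live *.vmx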

                 

                >iSCSI does have multipathing, NFS doesn't.

                Right; use IP aliases to spread different datastores across different links.
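                A sketch of that approach: give the filer an extra IP alias and point each ESX NFS datastore at a different address (interface names and addresses purely illustrative; esxcfg-nas is the ESX 3.x service-console command):

                  ifconfig e0a alias 10.0.1.2                     # on the filer: a second address on the NFS interface
                  esxcfg-nas -a -o 10.0.1.1 -s /vol/ds1 ds1       # on each ESX host: one datastore per address
                  esxcfg-nas -a -o 10.0.1.2 -s /vol/ds2 ds2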

                 

                >You store data in vmdk on NFS, so you'll need VMWare to get your data.

                Huh? And with an iSCSI LUN datastore you don't?

                In fact there are more ways to get at the data inside a vmdk when it is stored on NFS: you can mount the volume from a third machine (I am not saying everyone should do that all the time, don't get me wrong) and copy the needed files out of the active file system or a snapshot; and there are tools to mount and interpret vmdk files, even with NTFS inside.

                With iSCSI, in contrast, there is at least one more step: you need a tool to interpret VMFS before you can get at your vmdk.

                So my favourite is still NFS, because it is so much simpler to handle and does not lack performance in the small to medium businesses where I have seen it and applied it.

                 

                regards

                Markus

                • Re: Using NFS over iSCSI for VMware access

                  For a pretty exhaustive list of NFS & VMware benefits, see this post.

                   

                  http://viroptics.pancamo.com/2007/11/why-vmware-over-netapp-nfs.html

                   

                  To me the top ones are:

                  • easier administration (you can have bigger NFS datastores, due to no locking issues and the ease of FlexVol grow/shrink)
                  • deduplication integration: this is HUGE. You can use dedup with FC or iSCSI, but the whole thin provisioning/fractional reservation business makes it a pain; with NFS the freed-up space just shows up in VMware
                  • snapshot integration with or without SMVI (you get crash-consistent vmdk snapshots without SMVI, and even better ones with SMVI; it's incredibly cool to be able to roll a VM forwards and backwards without VMware-level snapshots and their resulting overhead)
                • Re: Using NFS over iSCSI for VMware access
                  nicholas4704

                  Markus, Andrew, I'm neither on the iSCSI side nor on the NFS side. I'm on the NetApp side, and I understand it is not perfect.

                  Let's say iSCSI is more common on the systems I install.

                  But iSCSI is not so bad, especially on NetApp. NetApp technologies work for iSCSI too.

                  >Some NFS provisioning benefits could be get by LUN provisioning and dedup in iSCSI world.
                  >With NFS I get my space transparently and immediately back, the view from the storage is the same as from ESX, isn't it?

                  ++++ Agreed. The storage view is not as transparent, but you get nearly the same result!

                  The worst thing with iSCSI is that a LUN never decreases in size. That's really bad. But does a vmdk decrease in size if I delete data from the guest OS?

                  >Easy restores from snapshots (NFS) vs lun cloning (iSCSI)
                  >The point is, that I can enter the snapshot-directory directly and copy out the needed files... so NFS is much more admin-friendly.

                  ++++ Yes, but usually the vmdk itself is pretty large (gigabytes), so that copying takes plenty of time. LUN cloning can be done in seconds.

                  A FlexClone license (or perhaps SnapRestore single-file restore) solves the issue and you just need to export the clone, but, again, it costs money.

                  >You store data in vmdk on NFS, so you'll need VMWare to get your data.
                  >In fact there are more ways to get to the data that is inside a vmdk when it is stored on NFS...

                  ++++ I meant keeping data on NTFS LUNs, not VMFS, so that I can mount them on any other Windows machine and get the data running immediately.

                  Andrew, thank you for a good link.

                  But I think you still need SMVI to get consistent VMs, and SMVI just hangs with NFS (as I mentioned, this should be fixed in a recent VMWare update).

                  Increasing a datastore's size can sometimes be a pain with iSCSI, i.e. when the extent size is less than the desired vmdk size. In other situations it works OK.

                  So the main disadvantage of NFS is price: the NFS license is one of the most expensive, and in Windows environments it will be used for VMware only.

                  It is not a problem for big businesses, and with a large number of VMs I would go for NFS. With low-end systems and mostly Windows hosts I'd go for iSCSI.

                  • Re: Using NFS over iSCSI for VMware access
                    mheimberg@bnc.ch

                    Hi Nikolajs

                     

                    >But iSCSI is not so bad, especially on Netapp. Netapp technologies work for iSCSI.

                    Of course it does; after all, NetApp is one of the inventors.

                    >I mean to keep data on NTFS LUNs, not VMFS, to be able mount it to any other Windows and get data running immediately.

                    I still don't get the point, but I am sure you are doing a good job.

                    >So the main disadvantage of NFS is price

                    That's for sure; sometimes I don't understand the marketing/pricing guys at NetApp....

                     

                    Markus

                    • Re: Using NFS over iSCSI for VMware access
                      nicholas4704

                      >I mean to keep data on NTFS LUNs, not VMFS, to be able mount it to any other Windows and get data running immediately.
                      >I still don't get the point, but I am sure you are doing a good job.

                      Hi Markus!

                      How do you supply storage for application data to your VMs?

                      I mean, where is, for example, your MS SQL database? Is it in another vmdk on NFS, supplied as a virtual disk to the guest OS?

                      Your opinion is important to me, as my VMware+NFS experience is not that big yet. What is the best practice for data disks?

                      Actually we can use a mixed environment: NFS for the guest OS disks and iSCSI RDMs for data.

                      I usually use RDMs or guest-OS iSCSI initiators to store data; that way I have NTFS (not VMFS) LUNs, and I can access the data from anywhere if I need to. For example, I once reconfigured my laptop as the SQL server and connected the database via iSCSI.

                       

                      Nikolajs

                      • Re: Using NFS over iSCSI for VMware access
                        mheimberg@bnc.ch

                        Hi Nikolajs

                         

                        >How do you supply storage for application data to your VMs?

                        First of all, a little background: we supply NetApp and VMWare ESX to small and medium businesses, meaning some hundreds of users and mailboxes, and just a handful of SQL databases of some dozens of GB, so not the very big stuff.

                        In those environments we have had very good experiences with this setup:

                        - ESX datastores connected by NFS, for the sake of simplicity

                        - virtualized SQL or Exchange servers use Microsoft's software iSCSI initiator and connect through the vSwitch to dedicated LUNs on the NetApp

                        - we avoid the use of RDMs, again because of simpler manageability and greater flexibility

                        So we use the ESX datastore only to store the "system disk"; everything else lives on dedicated volumes and LUNs.

                        At first sight this may sound a bit weird, but it has some advantages:

                        - once you get the principle, it is very simple to build and manage

                        - use of the SnapManagers (OK, also possible with RDMs)

                        - transparent use of all the components: a volume with a SQL LUN is attached to the SQL server, with no other components in between, such as the mapping disk an RDM needs

                        - the LUNs can easily be connected to another server when needed, or, with FlexClone, used for something else (migration, test, development, etc.)
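                        The in-guest iSCSI connection in this setup can be sketched with the command-line front end of Microsoft's software initiator (portal address and target IQN purely illustrative):

                          iscsicli QAddTargetPortal 10.0.1.10     # register the filer's iSCSI portal
                          iscsicli ListTargets                    # discover the target IQN
                          iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345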

                         

                        Again: this is our best practice, established in small to midsize environments (btw, I am Swiss, and in our small country a company with 500 employees is already a "medium" company, just to give a sense of scale).

                         

                        Regards

                        Markus

                        • Re: Using NFS over iSCSI for VMware access
                          radek.kubka
                          >First of all a little background info: we supply NetApp and VMWare ESX to small and medium business...

                          >In those environments we made very good experiences with this setup:
                          >- ESX datastores connected by NFS for the sake of simplicity
                          >- virtualized SQL or Exchange servers use Microsofts software iSCSI initiator and connect through the vSwitch to dedicated LUNs on Netapp
                          >- we avoid the use of RDM, again because of simpler manageability and greater flexibility

                           

                          Hi Markus,

                           

                          I assume from what you wrote that you deal with the FAS2050A quite frequently (it is aimed at SMBs). I have a constant design struggle with this box and was wondering what your take on it is.

                           

                          My concerns are as follows:

                          - the FAS2050A has onboard 4x 1Gbit IP ports and 4x 4Gbit FC ports, plus a couple of expansion slots

                          - if we use NFS and/or iSCSI for storage connectivity and CIFS for flat files, these two types of traffic should be separated, so 4 IP ports (the minimum for fully redundant connectivity) are not enough

                          - if we add a couple of dual-port NICs to cure the problem above, no expansion slots are left, and:

                          * adding external disk shelves (up front or in the future) means consuming onboard FC ports for back-end cabling, leaving no option for FC host connectivity should it be required at some point

                          * even if we are not bothered by the lack of front-end FC ports, no multipath (back-end) cabling is possible when mixing SATA and FC shelves (too few back-end ports)

                           

                          Any thoughts? Implementation examples? Ingenious workarounds? ;-)

                           

                          One fairly obvious approach would be not to use expansion NICs and to install a back-end FC HBA instead, but that would mean mixing all IP traffic on the same physical ports.

                          One more thought: even if the hosts are not using FC, some FC ports come in handy for connecting a tape library for NDMP backup.

                           

                          Regards,

                          Radek

                          • Re: Using NFS over iSCSI for VMware access
                            nicholas4704

                            I'm not Markus, but... IMHO, of course:

                            I don't like the 2050; it is almost the same as the FAS2020 (same CPU, more memory, and one expansion slot), but more expensive.

                            The FAS3140 is a piece of hardware that really kicks ass. If you can afford it, go for it. With a 3140 you can cover almost every business need.

                            Talking about the 2050A, I would plan for what I will need in the future: SATA or FC, iSCSI or FCP, etc.

                            You can mix internal SATA with external multipathed FC without an add-on card, or vice versa. So if 20 SATA disks are not enough, then...

                            NetApp equipment is pretty reliable, so Ethernet vifs (trunks) are not obligatory. Usually the whole switch dies rather than a single port or NIC on the NetApp.

                            Anyway, in case of disaster you can fail over to the cluster partner.

                            Another way to go could be to trunk 2 interfaces and run 2 VLANs on top. You'd get redundancy and separate traffic on the 2 interfaces. But I've never tried this configuration.
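                            For what it's worth, that configuration would look roughly like this on a 7-mode filer (interface names, VLAN IDs, and addresses purely illustrative):

                              vif create lacp vif0 -b ip e0a e0b            # trunk two interfaces (switch must do LACP)
                              vlan create vif0 10 20                        # two VLANs on the trunk
                              ifconfig vif0-10 10.0.10.5 netmask 255.255.255.0
                              ifconfig vif0-20 10.0.20.5 netmask 255.255.255.0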

                             

                            So, I would go for the 3140. For the 2050: if a trunk is a business need, then go for the additional NIC; and for SATA...

                            For SATA: if you need an additional SATA shelf, buy another FAS2020 bundle with SATA; it should be available soon! Damn cost-effective.

                             

                            Nick

                          • Re: Using NFS over iSCSI for VMware access
                            mheimberg@bnc.ch

                            Hi Radek

                             

                            You point directly at the weak points of the FAS2050!

                            In fact it is even worse: there are only 2 onboard GbE ports! So you have to make a careful decision whether you would like

                            - more IP connectivity, for redundancy or separation of traffic

                            - an additional FC HBA, for an FC SAN or a tape device

                            - or a SCSI HBA, for tape

                            We bypass the limitations of the system by spreading services across more systems: e.g. one 2050HA with one head serving NFS and the other serving iSCSI, then a second 2050HA with one head for CIFS and one on standby. That is still cheaper than a 3140, but it gives you more systems to maintain. There is no golden rule, because every customer has his own needs and budget, sorry.

                            BTW: this is getting off-topic; maybe you could open another thread?

                             

                            Regards

                            Markus
