9 Replies Latest reply: May 9, 2012 12:05 PM by Francois Egger

OnCommand with isolated vFilers

DAIRYMILK

OK, I’m having my first foray into multi-tenancy using vFilers and non-default ipspaces.  Everything is going well so far except for one thing.  I shall explain…

 

We have a FAS3240 HA pair with Data ONTAP 8.0.2 running several vFilers.  Each vFiler currently has two interfaces, one on an iSCSI network and one on an NFS network.  There are separate iSCSI/NFS networks for each tenancy/vFiler.
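
For context, each tenancy is built roughly like this (a sketch from memory; the IPspace, VLAN and address names below are invented for illustration):

  ipspace create ips-tenant1                            # non-default IPspace for tenancy 1
  vlan create e0a 101                                   # tagged VLAN for the tenancy's iSCSI network
  vlan create e0b 102                                   # tagged VLAN for the tenancy's NFS network
  ipspace assign ips-tenant1 e0a-101                    # move both VLAN interfaces into the IPspace
  ipspace assign ips-tenant1 e0b-102
  vfiler create vf-tenant1 -s ips-tenant1 -i 10.1.101.10 -i 10.1.102.10 /vol/vf_tenant1_root
  ifconfig e0a-101 10.1.101.10 netmask 255.255.255.0    # iSCSI interface
  ifconfig e0b-102 10.1.102.10 netmask 255.255.255.0    # NFS interface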

 

The only things that can communicate with the vFilers directly are servers that have interfaces on any of those storage networks, in our case either Windows servers running SQL (iSCSI) or ESXi hosts (NFS) in the same tenancy as the vFiler.

 

Nothing else can access the vFilers.  As  you’d expect, each vFiler is only accessible to those servers mentioned above that are in the same tenancy.  So, a SQL server from tenancy 1 cannot access the iSCSI network on tenancy 2.

 

The issue I’m having is with SMVI, whether that’s standalone SMVI, VSC or the OnCommand host package.  We have a management infrastructure from which you can manage the physical FAS controllers, but you cannot directly access the vFilers, which sit in customer tenancies.  We have network restrictions in place so you cannot access one tenancy from another.  However, I need to be able to use OnCommand/SMVI/VSC to back up VMs residing on those vFilers.

 

As far as I know, the only way I can do this is if the OnCommand server can directly access the vFilers.  I have tested this and also had it confirmed through technical documentation and NetApp support.  If you want to back up the VMs, the vFiler has to be accessible to the backup server.  Seems pretty obvious really.

 

However, this is not possible with our current configuration.  The OnCommand server has no access to the vFiler that hosts the volumes and LUNs for a customer tenancy, because the only interfaces that vFiler has are on the storage networks in that tenancy.  (This has also caused a deviation from SnapDrive best practice: SnapDrive communication happens over the iSCSI network, since that is the only path the server has to the vFiler, as there is not even a management network within the customer tenancy.  But that's a side issue.)

 

If I add an additional interface to each vFiler and then allow those interfaces access to our management network, does this compromise the security of the vFilers?  The management server will have access into each vFiler and will therefore be breaching the current inter-tenancy restrictions, so our security guy really wants to understand the implications here.  Would the management server have access to the NFS/iSCSI data on the “private” vFiler?  I know you can block protocols at the interface level on the physical filer, but does that extend to the vFilers as well?
All we want to be able to do is initiate SMVI-level backups of the VMs on those vFilers.  Technically we already have access to the vFiler through the physical filer, but not to the actual data.

 

To add to the confusion, our support partner has told us that OnCommand should be able to back up the VMs even if it only has access to the physical filer and not the vFilers, which is not what I believe to be the case.

 

So many questions…  Any advice gratefully received.

 

P.S. I've added a (very) crude diagram of how it would look...

  • Re: OnCommand with isolated vFilers
    scottgelb

    Are each of the vFilers in their own IPspace?  If they share the same IPspace, then an easy solution is to IP alias the e0M interface so that each vFiler has an IP assigned on e0M... then use the interface.blocked options to block protocols.  If they are in different IPspaces, then you need a management interface in each IPspace, which could add more ports.  You could also create a separate VLAN for management and block iSCSI/NFS on those ports so they carry management traffic only.
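
    For the shared-IPspace case, I think the alias approach looks roughly like this (the addresses and vFiler names are made up, and interface.blocked takes a comma-separated list of interfaces):

      ifconfig e0M alias 10.0.0.11 netmask 255.255.255.0   # management IP for the first vFiler
      ifconfig e0M alias 10.0.0.12 netmask 255.255.255.0   # management IP for the second vFiler
      vfiler add vf-tenant1 -i 10.0.0.11                    # bind each alias to its vFiler
      vfiler add vf-tenant2 -i 10.0.0.12
      options interface.blocked.iscsi e0M                   # keep iSCSI off the management port
      options interface.blocked.nfs e0M                     # keep NFS off it as well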

    • Re: OnCommand with isolated vFilers
      DAIRYMILK

      Hi Scott,

       

      Each vFiler is in its own IPspace, so I guess we will need to add another interface for each vFiler.  We already have a separate management VLAN, but the thing I was worried about was exposing customer tenancy data between tenancies via the management network.  I've slightly updated my (very) crude diagram to show how it will look logically.

      vFiler.png
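
      Roughly what I think that extra interface looks like for one tenancy, based on Scott's suggestion (the VLAN, IPspace and address names here are invented):

        vlan create e0c 201                               # per-tenant management VLAN
        ipspace assign ips-tenant1 e0c-201                # the interface has to live in that tenancy's IPspace
        ifconfig e0c-201 172.16.201.10 netmask 255.255.255.0
        vfiler add vf-tenant1 -i 172.16.201.10            # third interface, management traffic only
        options interface.blocked.iscsi e0c-201           # block iSCSI here (comma-separated interface list)
        options interface.blocked.nfs e0c-201             # block NFS too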

       

      As long as I block the iSCSI/NFS protocols on the interfaces at the physical filer level, are my customers still kept totally separate?  We are subject to very tight security restrictions due to the nature of our customers and our security must be guaranteed.  I know that this configuration should be OK but I really need to understand whether it will stand up in a security assessment and penetration testing.  Having this joint management network creates a bridge between tenancies and I need to know if it can be exploited in any way.

       

      Sorry to sound paranoid!

       

      Btw, I've had a look and it seems you cannot block protocols on the vFiler interfaces, only at the physical controller level.  Is that the correct/only way to do it?
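
      i.e. from what I can see it has to be done from the physical controller (vfiler0) and you just name the interfaces, something like this (interface names invented):

        options interface.blocked.iscsi e0M,e0c-201
        options interface.blocked.nfs e0M,e0c-201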

      • Re: OnCommand with isolated vFilers
        abuchmann

        Hi Guys

         

        First of all I have to apologize for hijacking DAIRYMILK's thread, but I think I have to find a solution for exactly the same challenge. Do you also plan to use the Protection Manager integration in SnapDrive?

         

        We also have different networks (separated via MPLS/VPN) for each customer. In some VPNs there are SQL Servers with NetApp iSCSI storage and SMSQL installed. I would like to archive (vault) the SMSQL-created backups using Protection Manager, installed on the OnCommand server.

         

        The following picture shows the architecture:

        StorageVRFSchema.png

         

        As the networks are logically split and the security policy forbids open ports between them, I'm still looking for a way for SMSQL/SnapDrive to initiate an update of the SnapVault relationship or, better, of the PM dataset.
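
        Just to illustrate what PM drives underneath, it is basically a plain SnapVault relationship, roughly like this on the vault controller (system, volume and qtree names invented):

          snapvault start -S vf-tenant1:/vol/sqldata/qt_sql /vol/vault/qt_sql   # one-off baseline
          snapvault update /vol/vault/qt_sql                                    # what each archive update boils down to

        The catch is that SMSQL/SnapDrive still has to reach the DFM server to register the backup before PM triggers that update, and that is exactly the connection the security policy blocks.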

        So if there's a solution for DAIRYMILK's problem, I would also greatly appreciate it.

      • Re: OnCommand with isolated vFilers
        scottgelb

        The ICSA and Matasano security papers do a good job showing vFiler security.  But if a network is shared, that may be an issue, although you could block all iSCSI and NFS on the ports to that network.

         


        • Re: OnCommand with isolated vFilers
          DAIRYMILK

          The Matasano paper makes for interesting reading, but I think you're right that the shared network is going to be the sticking point.  I've put this to our security guy to see what he thinks, but I feel it's going to be a problem.  If it is, I really don't know what to do about it.

           

          Adrian - yes, we use the PM/SnapDrive integration and will therefore have the same issue as you.  I've only set it up for the SQL cluster in the management tenancy so far, and that works because SnapDrive has access to the DFM server as they're in the same VLAN.  However, when we come to the customer SQL servers, they will not have access to DFM.

           

          Secure multi-tenancy is great but does introduce some challenges!
