
NetApp for Microsoft Environments


Data ONTAP SMI-S Agent 5.1 can be used to provision and manage storage with Windows Server 2012 and above. To enable this capability in Windows Server 2012 R2, the "Windows Standards-Based Storage Management" feature needs to be added from Server Manager. Storage provisioning and management can then be done using Server Manager as well as PowerShell cmdlets.

 

In this blog post I will show you the steps to configure Data ONTAP SMI-S Agent 5.1 with Windows Server 2012 R2 so you can effectively manage your Windows Server storage environment.

 

Note: The prerequisites require the installation and configuration of Data ONTAP SMI-S Agent 5.1 on another Windows server.

 

OK, so let's start by installing the Windows feature "Windows Standards-Based Storage Management"; this can be done via PowerShell or using Server Manager.

 

Launch the "Add roles and features wizard" and select the "Windows Standards Based Storage Management" feature and confirm the installation, wait for the installation to complete.

 

1.png

 

You can also install the feature using the PowerShell cmdlet below.

 

Add-WindowsFeature WindowsStorageManagementService

 

Once the feature installation is complete we also get a new PowerShell module, "smisconfig", which can be used to search for, register, and unregister SMI-S providers.
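You can confirm what the module gives you by listing its cmdlets; a minimal check, assuming the module name is smisconfig as shown above:

# List the cmdlets shipped with the smisconfig module
Get-Command -Module smisconfig

# Expected output includes Register-SmisProvider, Unregister-SmisProvider and Search-SmisProvider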

 

1.png

 

Next, let's register the SMI-S provider to the server using the Register-SmisProvider cmdlet; for more details on this cmdlet use

 

Get-Help Register-SmisProvider -Full

 

smis windows.png

 

Register-SmisProvider -ConnectionUri https://cloud2012dc.virtualcloud.com:5989

 

At this point, if the registration process completes successfully, only basic information (provider and array details) is collected; this is known as level 0 discovery.

 

Verify that the storage provider is registered using the Get-StorageProvider cmdlet.

 

Also verify that the storage subsystems show up correctly using the Get-StorageSubSystem cmdlet.
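A quick verification sketch; FriendlyName and HealthStatus are standard properties on the storage subsystem objects, though your output columns may vary:

# Confirm the SMI-S provider registration
Get-StorageProvider

# Confirm the storage subsystems are discovered and healthy
Get-StorageSubSystem | Select-Object FriendlyName, HealthStatus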

 

1.png

 

Now, if we try to list the storage pool information, we get an error stating "The provider cache is empty".

 

1.png

 

To list the storage pool and volume information we need to initiate a deeper level of discovery using the Update-StorageProviderCache cmdlet and set the discovery level to level 2. Deeper discoveries can take a lot of time. Each discovery level is also cumulative, so running a level 3 discovery includes everything from level 0 through level 2. A level 3 discovery is required to create a new storage pool from the list of free physical disks.

 

Get-StorageSubSystem | % { Update-StorageProviderCache -StorageSubSystem $_ -DiscoveryLevel Level2 }

 

1.png

 

Wait for the update process to complete and you will see that the storage pools are now successfully listed.

 

1.png

 

We can also search for the SMI-S provider servers in our environment using the Search-SmisProvider cmdlet.

 

The Search-SmisProvider cmdlet searches for Storage Management Initiative - Specification (SMI-S) providers available on the same subnet as your server as long as they are advertised through the Service Location Protocol v2 (SLPv2). The cmdlet returns a list of Uniform Resource Identifiers (URIs) that you can use with other cmdlets to register or remove a provider. If the cmdlet finds no SMI-S providers, it returns nothing.

 

1.png

Apart from searching and registering, we can also unregister an SMI-S provider using the Unregister-SmisProvider cmdlet; you can pipe the output of Get-StorageProvider to this cmdlet to unregister a provider.
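A minimal sketch of removing a provider; the URI below is the same example URI registered earlier, so substitute your own:

# Unregister the SMI-S provider that was registered above
Unregister-SmisProvider -ConnectionUri https://cloud2012dc.virtualcloud.com:5989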

 

1.png

 

I hope that you have enjoyed this blog entry and have found this information helpful. In my next series of blog posts I will take a deep dive into the PowerShell cmdlets that come with the "Storage" module, which we will use to configure and manage NetApp storage through SMI-S. We will also cover Server Manager based management later in the series.

 

Good Luck!

 

Thanks,

Vinith

When we create a CIFS file share on a NetApp clustered Data ONTAP 8.2.1 controller using SCVMM 2012 R2, no permissions are assigned to the newly created CIFS/SMB 3.0 share. Because of this we are unable to copy any files to the share, as it lacks read/write access.

 

Let me illustrate this with an example. Below you can see a CIFS share (fileshare_smbsharehyperv) which I created on Hyper_V_PR_Quorum using SCVMM 2012 R2 and SMI-S integration.

 

Untitled.png

 

When I try accessing the share via its network path, the share is inaccessible and I receive an access denied error.

 

Untitled.png

 

Now let's examine the permissions of the share from the ONTAP perspective using NetApp OnCommand System Manager. Here you can see that there are no permissions assigned to the share, which is why it is inaccessible.

 

Untitled.png

 

Now let me try assigning this share to my Hyper-V cluster for my Hyper-V over SMB environment. As you can see below, the job fails and I receive an access denied error.

 

Untitled.png

Now let's look at the share again from the ONTAP perspective using NetApp OnCommand System Manager. You can see that a new set of permissions has been assigned to the share, including access rights for the owner account that initiated the job and both computer accounts of the Hyper-V cluster; the account that is not listed here is the VMM service account.

 

Untitled.png

 

So let me grant the VMM service account share access control. You can do this using the PowerShell Toolkit cmdlets or via System Manager; let's set the access level to Full Control.
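For reference, a minimal sketch of the Toolkit route, assuming the Data ONTAP PowerShell Toolkit is installed; the cluster, SVM, share, and account names are placeholders, and parameter names can differ slightly between Toolkit versions:

# Connect to the cluster and grant the VMM service account full control on the share
# (share, account and SVM names below are examples - replace with your own)
Import-Module DataONTAP
Connect-NcController cluster.virtualcloud.com
Add-NcCifsShareAcl -Share fileshare_smbsharehyperv -UserOrGroup "VIRTUALCLOUD\svc-scvmm" -Permission full_control -VserverContext svm_cifs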

 

Untitled.png

 

Next, let's restart the failed job in SCVMM. You will see that the job and its sub-jobs complete successfully and SCVMM now has full access to the file share. Similarly, you can configure and add a CIFS SMB 3.0 share created in SCVMM as a library share directly in SCVMM 2012 R2.

 

Untitled.png

 

Untitled.png

 

Note: Make sure that the file share resolves both by IP and by name from all hosts that are part of the Hyper-V cluster; DNS resolution issues account for a variety of problems faced with Hyper-V over SMB deployments.

 

Next, try creating a VM from a VHDX file placed on a CIFS library share and enjoy the magic of ODX-based provisioning.

 

Untitled.png

 

 

I hope that you have enjoyed this blog entry and have found this information helpful.

 

Good Luck!

 

Thanks,

Vinith

I hope you all enjoyed my series of blog posts on how to leverage SCVMM - SMI-S integration to effectively manage your Hyper-V storage environment. In this blog post I will demonstrate how you can use a PowerShell script/function that leverages the SCVMM PowerShell cmdlets to rapidly provision virtual machines using SCVMM SAN-Copy based VMM templates.

 

Prerequisites for this PowerShell script require that you set up SCVMM SAN-COPY based VMM templates.

 

For more details on how to set it up,  refer to "Data ONTAP SMI-S Agent 5.1 - Use NetApp SMI-S Provider for SAN-Copy based VM Rapid provisioning using SCVMM 2012 R2".

 

In the below figure, you can see the vhdx file used by the VMM template, which shows that the "SAN Copy capable" property is set to true.

 

1.png

 

The below figure shows the VMM template, which contains the above vhdx file.

 

1.png

 

Next, let me show you the PowerShell script that works this magic.

 

# Extract the VMM template that contains the SAN Copy capable vhdx file
$VMTemplate = Get-SCVMTemplate | ?{$_.Name -match "VM-SAN-COPY-Template"}

# Extract the Hyper-V host that will be used to place the VMs
$VMHost = Get-SCVMHost -ComputerName "ms-w2k12-1.virtualcloud.com"

# Rapid provision 100 VMs with the same virtual machine configuration; once provisioned, each VM is also powered on.
1..100 | % {

    $virtualMachineConfiguration = New-SCVMConfiguration -VMTemplate $VMTemplate -Name "VM-SANCOPY-$_"
    Write-Output $virtualMachineConfiguration
    Set-SCVMConfiguration -VMConfiguration $virtualMachineConfiguration -VMHost $VMHost
    Update-SCVMConfiguration -VMConfiguration $virtualMachineConfiguration
    Set-SCVMConfiguration -VMConfiguration $virtualMachineConfiguration -VMLocation "E:\" -PinVMLocation $true
    Update-SCVMConfiguration -VMConfiguration $virtualMachineConfiguration
    # Capture the new VM object so it can be passed to Start-SCVirtualMachine
    $newVM = New-SCVirtualMachine -Name "VM-SANCOPY-$_" -VMConfiguration $virtualMachineConfiguration
    Start-SCVirtualMachine -VM $newVM

}

 

My current setup consists of a 35GB vhdx file. As you can see in the below figure, the time taken to provision a single VM is about 16 seconds.

 

1.png

 

Once the 100 VMs are provisioned and powered on, let's check the time it took to provision them.

 

1.png

 

As you can see below, I have wrapped the Measure-Command cmdlet around my script to measure the time taken to provision 100 VMs. It took only 38 minutes and 31 seconds to complete.
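If you want to time the run yourself, wrap the provisioning loop (or the script file that contains it) in Measure-Command; the script name below is just an example:

# Measure the total wall-clock time for the 100-VM provisioning run
Measure-Command { & .\Provision-SanCopyVMs.ps1 } | Select-Object Minutes, Seconds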

 

1.png

 

So think of the possibilities: if I had created these 100 VMs using a BITS copy of the vhdx files, it would have taken about one hour per VM.

 

This technology can be leveraged for a large-scale VDI deployment. You can also invoke a set of PowerShell scripts that carry out configuration activities when the VM boots up, for example to configure IP addresses from an SCVMM IP address pool.

 

I hope that you have enjoyed this blog entry and have found this information helpful.

 

Good Luck!

 

Thanks,

Vinith

Simply put, RDMA allows a device to use the network pipe fully.

This is done in a few ways:

  • The first trick RDMA uses is to allow the HBA to access memory directly: the processor pins an address range, hands that range to the remote side, and lets the card retrieve the memory directly. This greatly lowers CPU utilization for large transfers. Once the transfer is complete the memory can be unpinned.
  • The second trick is to let the card use custom silicon to handle the entire stack, and optimize that stack for these RDMA transfers. There are three layers of offload: TCP/IP over Ethernet, custom headers over Ethernet, and custom headers over InfiniBand.
  • The third trick is to use low latency, high bandwidth switches, which in their own right improve any traffic flow.

 

This concept and these methods are not new; consider the following quote:

"RDMA implements a Transport Protocol in the NIC hardware and supports Zero-Copy Networking, which makes it possible to read data directly
from the main memory of one computer and write that data directly to the main memory of another computer. RDMA has proven useful in apps..."

-This quote was written in 2005 (http://searchstorage.techtarget.com/definition/Remote-Direct-Memory-Access), and has been repeated in every tech session since as if it were a revelation.

 

There are three main variants of RDMA:
  • iWARP is the IETF RDMA standard. The switch can be any high speed Ethernet switch. This basically offloads the TCP stack directly onto the card and runs across an Ethernet network.
  • RoCE requires Converged Enhanced (CE) switches, also called Data Center Bridging (DCB) switches, so that lossless operation, pause, and priority flow control (PFC) are supported. This protocol runs a simpler stack that is NOT TCP based, but still runs on an Ethernet network, much like FCoE does.
  • InfiniBand requires custom host adapters as well as custom switches.

And the problem is that misinformation prevails, because the RDMA vendors that sell hardware have a vested interest in their own bet and take shots at each other publicly.

One of the few voices out there that we would consider independent is Microsoft, since they support all three, but consider that Microsoft has a few different positions here as well. The developer side of Microsoft wants to support all three with the least common denominator of features, in which case they write to iWARP. The production and operations side of the house wants some of those advanced features like lossless operation and PFC, while the group that builds impressive demos and marketing material loves to tout maximum performance without regard to actual datacenter needs.

Let's look at some realistic numbers. These are averages from CDW and NewEgg:

Type / Protocol     Speeds       Switch Requirements                      Routable   Cost (NIC + Switch) per port
RDMA iWARP          10g          No requirements, but will work on DCB    Yes        $900 (+$275)
RDMA RoCE           10g-->40g    DCB                                      No         $1000 (+$500)
RDMA Infiniband     12g-->54g    Infiniband                               No         $1500 (+$300)
10g Ethernet        10g          No requirements, but will work on DCB    Yes        $400 (+$275)
FCoE                10g          DCB                                      Yes        $500 (+$500)
FC                  8g-->16g     Fibre Channel                            Yes        $800-->$1600 (+$500)

You probably also have a few requirements when it comes to your servers, the first of which is reliability. You likely require multiple connections from the server to your production network, multiple connections to your storage, and multiple connections to your peer servers.

  • Sample demo rig (high performance, no cost constraints, no consideration for simplicity/reliability/scalability)
    • 2 x 10g for the production network, 2 x 54g IB for peers, and 2 x 16g FC for storage; but this is a lot of wires and networks to manage.
  • Highly scalable/deployable setup (good performance, flexibility if needs change, less to manage, software defined QoS to support needs)
    • 2 x 10g FCoE with QoS for the production network and storage access, plus 1 x 10g RoCE for peer servers; since they can all live on DCB-type switches there is only one set of networks to manage. I would only need a single RoCE adapter, since SMB would shed traffic to the non-RDMA adapter if a failure happened on the RDMA network.

You may find that the difference in performance between the two sample designs above can favor the simpler design simply by using the next faster step of processor, or slightly more system memory. People generally underestimate what well designed 10g Ethernet on DCB switching can do.

Do you want to deploy Converged Ethernet switching knowing that it will support both iWARP and RoCE, or purchase non-CE Ethernet switches as well as InfiniBand switches to support? If you are currently deploying iSCSI using software based initiators, you may find a significant CPU reduction by moving to an FCoE type connection, since all of that work is then moved off to the FCoE card.

 

I really want to hear your experiences with RDMA; have you had good/neutral/bad experiences with it in a production environment?

I hope you all enjoyed my blog on how to "Use NetApp SMI-S provider for Rapid provisioning VM's using SAN Copy". In this blog post , i would demonstrate how can use NetApp SMI-S Provider for Migrating virtual machine storage using SAN copy.

 

For VMs running on a cluster, SCVMM uses live migration to move VMs within the cluster; SAN migration is used to move VMs across different clusters or between standalone hosts.

Let us walk through an example that shows how we can migrate VMs across standalone hosts using the VM migration and SAN migration process.

 

Open the SCVMM console, select a VM placed on a standalone host, right-click it, and select Migrate Virtual Machine.

 

Untitled.png

Select the appropriate standalone host and you will see the Transfer Type listed as SAN; click Next.

 

Untitled.png

Click Browse and select the desired destination folder, click OK, and then click Next.

 

You will see that the destination drive shows (SAN Migration Capable) on the description page.

 

Untitled.png

Review the summary page and click Move to migrate the VM. Monitor the status of the job and verify that it completes successfully; in the "deploy file" job status you will see the transfer type listed as SAN transfer.

 

Untitled.png

 

I hope that you have enjoyed this blog entry and have found this information helpful.

 

Good Luck!

 

Thanks,

Vinith

I hope you all enjoyed my blog on how to "Provision and configure a CIFS SMB 3.0 share using SMI-S - SCVMM 2012 R2 integration"

 

In this blog post I will demonstrate how you can use the NetApp SMI-S Provider for rapid provisioning of virtual machines using SAN copy.

 

SCVMM 2012 administrators now have the ability to deploy VMs via rapid provisioning using NetApp SMI-S based SAN transfers.

 

Rapid provisioning of VMs requires that you decide whether you want to use the default snapshot copy or clone method built into SCVMM, or whether you want to issue commands through Windows PowerShell for more granular control. In both methods, the SCVMM console can be used to expose those LUNs to the Hyper-V cluster or standalone Hyper-V server.

 

Two methods to quickly provision a LUN with a VM for SCVMM 2012 are the use of Snapshot copies and the use of clones. Note that the generic terms snapshot and clone are not the same as the NetApp Snapshot and FlexClone technologies. Snapshot copies can be created on most storage controllers almost instantaneously, and they use virtually no additional hard drive space beyond the deltas.

 

SCVMM 2012 SP1 does not support ODX, hence we need to leverage SAN copy based templates for VM rapid provisioning. When we provision VMs using SAN copy based templates we are limited to one VM per LUN; with the ODX support integrated in SCVMM 2012 R2, however, we can provision multiple VMs per LUN.

 

For more details on how to leverage ODX in SCVMM 2012 R2 for VM provisioning refer to my blogpost "Leverage SCVMM 2012 R2 Offload Data Transfer (ODX) for VM Provisioning and Migration Using Data ONTAP SMI-S Agent 5.1".

 

Administrators have the option to create SAN copy capable templates from new or existing VMs. The VM templates are stored in the SCVMM 2012 library to simplify administration. For implementing SMI-S based rapid provisioning, NetApp Snapshot copies are used rather than clones.

 

So let's start by configuring our storage array for rapid provisioning.

 

In the SCVMM console, navigate to Fabric > Arrays, right-click one of the storage arrays and select Properties.

 

Untitled.png

 

Click Settings, select the SAN transfer type "Use snapshots" and click OK.

 

Untitled.png

Once we are done with these steps, let's move on to the creation of SAN copy capable templates for VM rapid provisioning using SCVMM.

 

Create SAN Copy Capable Templates for VM Rapid Provisioning Using SCVMM


Create two logical units of 50GB each using the SCVMM console; before you attempt this step make sure that the iSCSI initiator is logged onto the targets.

 

We will consider two scenarios: highly available VMs and VMs placed on a standalone host. For this we will create two LUNs.

 

Navigate to Fabric > Arrays and click Create Logical Unit. Enter the name HA-LUN.

 

Untitled.png

 

Repeat the above step to create a 50GB LUN named NON-HA-LUN.

 

Untitled.png

 

After the LUNs are created they will show up under Classifications and Pools.

 

Untitled.png

 

Next we need to allocate these LUNs to the All Hosts group. Navigate to Fabric > Servers, right-click All Hosts, select Properties and click Storage.

 

Untitled.png

 

Click Allocate Logical Units, move the Available Logical Units to Allocated Logical Units, and click OK.

 

Untitled.png

Next, select your library host from Fabric > Servers. Right-click it, select Properties, and then click Storage.

 

Untitled.png

Click Add > Add Disk, select the new LUN (HA-LUN) from the drop-down list under Logical unit, select Format as NTFS, give it a volume label of HA-Template, and choose to mount it in an empty NTFS folder.

 

Untitled.png

Select the HA Template folder under your library share (click the Explore link if you need to create an empty directory) and click OK.

 

Untitled.png

 

Next, head over to your library server's D:\ drive and confirm that the mount point is listed in the HATemplate directory.

 

Similarly, perform these steps to create a NON-HA-Template and verify that the standalone (SA) template mount point also gets listed.

 

Untitled.png

 

Next, copy the VHD (in my case a Windows Server 2012 VHDX file) that will be used for SAN provisioning into both folders.

 

Next, head over to your SCVMM console. Navigate to Library > Library Servers, right-click the library share "VMM LibraryShare", and select Refresh. After the library share refresh completes, go back to the library share, navigate down to the folder we created and mounted the LUN in, and select the VHD; you will see that SAN Copy Capable is set to Yes.

 

Untitled.png

Next, create two VM templates based on the two VHDs, one for high availability and another for the standalone host, and provision VMs from them.

 

Next, create virtual machines from the template; on the Select Destination Host screen you will see the transfer type listed as SAN.

 

Untitled.png

 

What happens underneath is a LUN clone. As you can see in the figure below, I had my base 50GB HA-LUN into which I copied the WIN2K12 vhdx file; when I provision a VM using SAN copy, it effectively creates a LUN clone, which is a writable, space-efficient clone of the parent LUN. We get immediate read-write access to both the parent and the cloned LUN.

 

Untitled11111.png

 

I hope that you have enjoyed this blog entry and have found this information helpful.

 

In my next blog I will show you how you can use the SMI-S SAN copy technique for VM SAN migration.

 

Good Luck!

 

Thanks,

Vinith

Copy offload provides a mechanism to perform full-file or subfile copies between two directories residing on remote servers, where the server can be the same or different. Here, the copy is created by copying data between the servers (or the same server if both source and destination files are on the same server) without the client reading the data from the source and writing to the destination.

 

This reduces the client/server processor/memory utilization and minimizes network I/O bandwidth. With Windows Server 2012, Microsoft introduces a copy offload mechanism, which allows you to offload the activity of copying files between two servers to the storage system.

 

Data ONTAP SMI-S 5.1 supports ODX with System Center 2012 R2, Windows Server 2012, and clustered Data ONTAP 8.2.1.

 

In the scenario below, we deploy VMs from virtual hard disks (VHD and VHDX) placed on an SMB share to a Hyper-V over SMB infrastructure. Source and destination shares must be on the same volume for the fastest ODX performance (this uses the sis-clone engine). You create the template on the source share and provision VMs on the destination share. The source (library) and destination (LUN/share) should be in the same pool (FlexVol volumes in our case) for better efficiency.

 

Untitled.png

 

In the SCVMM console, create two file shares; attach one to the SCVMM library server and the other to the Hyper-V host. For more details on how to create a CIFS file share, refer to one of my previous blog posts: Data ONTAP SMI-S Agent 5.1 - Use NetApp SMI-S Provider to Provision a CIFS Share for SMB 3.0 Environments (Windows Server 2012) using SCVMM 2012 R2.

 

Untitled.png

 

Right-click the library server and add one of the CIFS shares listed in the preceding section as a library share, copy the desired VHD to the library share and click Refresh; the VHDX file appears.

 

Untitled.png

Next, in the SCVMM console, click VMs and Services, then right-click the cluster name and select Properties.

 

Untitled.png

Click File Share Storage and add the desired CIFS share to be used as file share storage.

 

Untitled.png

 

Next, provision VMs from the VHD residing on the CIFS library share to the Hyper-V cluster's file share storage. During the provisioning process, the Deploy file step is highlighted as "Using Fast File Copy", which means an ODX-based copy is being used.

 

Fast file copy in System Center 2012 R2 Virtual Machine Manager greatly improves file transfer and virtual machine deployment times, mostly by leveraging the Windows ODX feature.

 

Untitled.png

Similarly, we can migrate VM storage across CIFS SMB 3.0 shares and CSVs hosted on clustered Data ONTAP 8.2.1 and leverage ODX for a fast copy process. Also note that SCVMM 2012 R2 currently does not show that ODX is being used in the VMM job details; it appears as a normal migration process.

 

As you can see, I initiated a live migration of VMs between SMB 3.0 shares and the job does not list the migration type as "fast file copy", although it does show this during VM provisioning.

 

Untitled.png

 

 

 

I hope that you have enjoyed this blog entry and have found this information helpful.

 

Good Luck!

 

Thanks,

Vinith

 


I hope you all enjoyed my tutorial on how to "Provision a CIFS Share for SMB 3.0 Environments (Windows Server 2012) using SCVMM 2012 R2".

 

In this blog post I will demonstrate how you can leverage the NetApp SMI-S Provider for SCVMM 2012 R2 based storage live migration using ODX-based copy.

 

Copy offload provides a mechanism to perform full-file or subfile copies between two directories residing on remote servers, where the server can be the same or different. Here, the copy is created by copying data between the servers (or the same server if both source and destination files are on the same server) without the client reading the data from the source and writing to the destination.

 

This reduces the client/server processor/memory utilization and minimizes network I/O bandwidth. With Windows Server 2012, Microsoft introduces a copy offload mechanism, which allows you to offload the activity of copying files between two servers to the storage system.

 

Data ONTAP SMI-S 5.1 supports ODX with System Center 2012 R2, Windows Server 2012, and clustered Data ONTAP 8.2.1.

 

We start by creating two cluster shared volumes (csvlun1, csvlun2) on a storage pool/volume residing on clustered Data ONTAP 8.2.1 using the SCVMM 2012 R2 console and mapping them to the Hyper-V cluster. For more details on how to create CSVs and map them to Hyper-V hosts, please refer to my previous blog, "Use NetApp SMI-S Provider for Hyper-V Hosts Storage Management using SCVMM 2012 R2 (Create and Map LUN's to Hyper-V hosts in SCVMM)".

 

1.png

 

Once these LUNs are mapped to the Hyper-V cluster, they show up under Disks in Failover Cluster Manager.

 

1.png

 

Next, create a highly available VM using the SCVMM 2012 R2 console and place it on csvlun1 (C:\ClusterStorage\Volume3). Create the VM from an HA template or a vhdx placed in the VMM library.

 

1.png

 

Once the VM is created we will proceed with its migration; we will storage live migrate the VM from csvlun1 (C:\ClusterStorage\Volume3) to csvlun2 (C:\ClusterStorage\Volume4).

 

In the SCVMM console, right-click the VM and click "Migrate Storage", select "C:\ClusterStorage\Volume4" as the storage location for the VM configuration, click Next and then Move.

 

1.png

 

..continued..

 

1.png 1.png

 

Wait for the job to complete and verify that the storage migration completed successfully. SCVMM offloads the live migration and live storage migration job to the platform (in this case Hyper-V), which uses ODX transfers if the underlying storage supports them.

 

SCVMM 2012 R2 does not yet have the capability to show that in the VMM job details.

 

1.png

 

Once the migration completes, verify that the storage migration completed successfully and that the new storage location for the vhdx files shows up as csvlun2 (C:\ClusterStorage\Volume4).

 

1.png

 

Similarly, you can storage live migrate VMs across SMB shares and take advantage of the same ODX benefits.

 

I hope that you have enjoyed this blog entry  and have found this information helpful.

 

In my next blog I will show you how you can use SMI-S for rapid provisioning of VMs using library server based SAN copy with SCVMM 2012 R2.

 

Good Luck!

 

Thanks,

Vinith

I hope you all enjoyed part 6 of my micro blog tutorial on how to "Create a LUN on the NetApp Storage and Allocate it from SCVMM 2012 R2".

 

In this blog post I will demonstrate how you can use the NetApp SMI-S Provider to provision a CIFS share for SMB 3.0 environments (Windows Server 2012) using SCVMM 2012 R2. I will also cover the steps to increase the CIFS share size, as the SCVMM 2012 R2 GUI console currently does not support this.

 

Clustered Data ONTAP 8.2/8.2.1 supports the Hyper-V over SMB 3.0 feature in Windows Server 2012. Users can now create virtual machines and host them on CIFS shares that are continuously available on the NetApp storage systems. Note that a CIFS Storage Virtual Machine (SVM, formerly Vserver) must first be created on the NetApp clustered Data ONTAP storage system. The CIFS SVM is then listed as a file server and is available for file share creation.

 

Note: If SCVMM discovery fails, if SCVMM cannot create shares or clones, or if the CIFS server does not show up in SCVMM even though it is configured in the SVM, check the license status and make sure the licenses have not expired.

 

CIFS shares created by SnapDrive for Windows (SDW) are not compatible with SMI-S: SDW creates shares on volumes, while SMI-S creates and manages shares on qtrees with quotas assigned to them. The "Everyone" ACL should be removed from such shares to avoid security issues. The access status of shares may not turn green for a few minutes after the register operation, because the CIFS server takes a few seconds to apply those ACLs; SCVMM is quick enough to verify whether the share can be accessed on the host where it is registered and may initially find that it cannot. Wait five minutes and try again; on the next verification step SCVMM discovers that the share can be accessed on the host and refreshes the status of the registered shares.

 

To create a file share using SCVMM 2012 R2, use the steps below.

 

In the SCVMM 2012 R2 console, navigate to Fabric > Storage > File server, right-click File Server, and then click Create File Share.

 

Untitled.png

 

Enter a file share name and its size and click Add.

 

Untitled.png

The file share gets successfully created.

Untitled.png

 

Note: All SMB 3.0 workflows are supported only on clustered Data ONTAP 8.2 and SCVMM 2012 R2.

 

Note: SMI-S Agent does not expose root volumes, file shares in root volumes, or shares that are mounted to an entire volume.

 

To create a file share and use it in Windows, NetApp recommends creating the file share in an NTFS-only volume. This avoids problems with the credentials that access the file share.

 

To create the volume with NTFS, enter the following command:

 

vol create -vserver <vserver_name> -volume <volume_name> -aggregate <aggr_name> -size <volume_size> -security-style ntfs

 

Note: SCVMM cannot discover storage objects if a domain user is used for communication between SMI-S and SCVMM; for this communication, a CIM server user that has local administrative rights on the SMI-S Provider host should be used. Always add a local user to the SMI-S Provider and use that user to communicate with SCVMM.

 

Note: CIFS shares created by SnapDrive 7.0 for Windows are not compatible with SMI-S. The SMI-S Provider supports modifying the share size, but this cannot be done through SCVMM; we can accomplish it using the Data ONTAP PowerShell Toolkit or the Data ONTAP CLI. So, assume that we have two shares listed in SCVMM as shown in the following screenshot.

 

Untitled.png

Let us assume we need to increase the share size of cifsshare111 from 40GB to 50GB. First, let us retrieve the quota details.

 

Untitled.png

 

Update the quota with whatever size you want for the share and then wait for 5 minutes. Let us increase it to 50GB.
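A minimal sketch of those two steps with the Data ONTAP PowerShell Toolkit; the volume, SVM, and quota target below are placeholders, and the exact parameter names may differ in your Toolkit version, so check Get-Help before running:

# Retrieve the current tree quota rule that backs the share (names are examples)
Get-NcQuota -Volume cifs_vol -VserverContext svm_cifs

# Raise the disk limit on the quota rule for the share's qtree to 50GB
Set-NcQuota -Volume cifs_vol -Target cifsshare111 -Type tree -DiskLimit 50g -VserverContext svm_cifs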

 

Untitled.png

The SMI-S Provider updates its cache with any out-of-band changes on the filer/SVM every 5 minutes.

 

Then, do a rescan at SCVMM for the new size to show up.

 

Untitled.png

You can perform similar steps using the Data ONTAP CLI. Assume we want to increase a share size from 10MB to 1GB.

 

On the SVM console, enter the following:

 

quota policy rule modify -volume mount_test -target testshare -disk-limit 1024MB -vserver share_name -policy-name default -type tree -qtree

 

Untitled.png

Wait 5 minutes for the SMI-S Agent cache to refresh. To view the change, re-scan on SCVMM.

 

I hope that you have enjoyed this blog entry  and have found this information helpful.

 

In my next blog I will show you how you can use SMI-S for rapid provisioning of VMs using SCVMM 2012 R2.

 

Good Luck!

 

Thanks,

Vinith

I hope you all enjoyed part 5 of my micro blog tutorial on how to "Create and Map LUN's on Hyper-V Hosts using SCVMM". In part 6, I will demonstrate the steps to create a LUN on the NetApp storage and allocate it from SCVMM 2012 R2.

 

SCVMM, integrated with SMI-S, has the ability to allocate and connect a LUN created on NetApp storage to the Hyper-V servers. For the scenario below I will be using System Manager for my storage provisioning activities; you can also use the PowerShell Toolkit to create LUNs.
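For reference, a minimal Toolkit sketch for creating an unmapped LUN; the cluster name, path, size, and OS type are placeholders, and parameter names may vary by Toolkit version:

# Create a 50GB thin-provisioned LUN on an existing volume without mapping it to an igroup
Import-Module DataONTAP
Connect-NcController cluster.virtualcloud.com
New-NcLun -Path /vol/hyperv_vol/manual_lun1 -Size 50g -OsType windows_2008 -Unreserved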

 

Launch System Manager, connect to your desired storage system and create a new LUN (thick or thin provisioned); select an existing volume and do not map the LUN to any of the targets.

 

Untitled.png

(continued)

 

Untitled.png

(continued)

 

Untitled.png

 

Log in to the SCVMM 2012 R2 console, navigate to Fabric > Servers, right-click a host group, and select Properties. On the Storage page, click the NetApp SMI-S provider, then right-click it and rescan the SMI-S provider; this causes the SMI-S agent to rescan the storage and detect any changes.

 

Untitled.png

 

Navigate to Fabric, select a Hyper-V host in a host group, right-click Properties, click Storage, and then click Allocate Logical Units.

 

Untitled.png

 

As you can see, the LUN created via System Manager is listed below; select the LUN and click Add.

 

Untitled.png

 

Hence we can see that a LUN created manually on the NetApp storage can be used by the Hyper-V host members of the host group.

 

I hope that you have enjoyed this blog entry  and have found this information helpful.

 

In my next blog on Hyper-V host storage management I will talk about how we can "Provision a CIFS Share for SMB 3.0 Environments (Windows Server 2012) using SMI-S - SCVMM integration".

 

Good Luck !

 

Thanks,

Vinith

snagesh

UNIX User Auditing with SLAG

Posted by snagesh Feb 17, 2014

Recently we came across a customer who had to audit user access to NFS exports for compliance purposes. He was using Data ONTAP 7-Mode and was an exclusive UNIX shop. Since it was for compliance, audit and legal teams were pushing for its implementation. The ask was for simple file access auditing with minimal additional expenditure on licenses and hardware/system configuration. The account team reached out to us, and thus began an investigation into our solution space.

 

To start with, we had two parallel technologies to rely upon:

 

  • FPolicy partner products that provide extensive reporting and data governance capabilities
  • A native auditing based solution that logs audit trails in Windows EVT format
    • EVT files can be viewed through Event Viewer, or
    • converted into a plain text XML file with a NetApp tool and viewed through any text editor

 

Under the circumstances we chose the native auditing solution, as there was no additional expenditure on partner licenses or hardware configuration.

 

Native Auditing for UNIX User Auditing

 

Native auditing is an implementation on top of Microsoft NTFS (New Technology File System) and hence requires configuring a filter file on an NTFS volume. The filter file acts as a placeholder for ACEs (Access Control Entries). Using a filter file has two limitations:

  • It has controller scope, and the same set of SACLs (System Access Control Lists) is applied to every object on the controller
  • Since the filter file has to be an NTFS object, you need an NTFS volume to host it even when you are a pure UNIX shop

When you don't want to enable auditing at the controller level and don't want to depend on NTFS volumes, using SLAG (Storage-Level Access Guard) makes ideal sense. Details about SLAG are covered in TR-3596. SLAG supports configuring NTFS ACEs even on UNIX volumes and qtrees, which provides more granularity than auditing based on a filter file. Additionally, we don't need an NTFS based volume just to host the filter file.

 

Let us see how SLAG can be configured for auditing.

 

Configuring SLAG for UNIX User Auditing

 

Configuring the controller for NFS auditing is covered in TR-3595.

 

Enable NFS auditing with the option:

 

options cifs.audit.nfs.enable on

 

NFS file access auditing can be enabled through:

options cifs.audit.file_access_events.enable on

 

Once the controller is configured for NFS auditing you need to set SLAG on the UNIX volume or qtree. Before applying SLAG, verify the current SDDL (Security Descriptor Definition Language) configured on the volume or qtree.

 

Checking the existing permissions on a UNIX volume

 

snagesh-vsim-3> fsecurity show /vol/vol2
[/vol/vol2 - Directory (inum 64)]
 Security style: Unix
 Effective style: Unix
 DOS attributes: 0x0010 (----D---)
 Unix security:
 uid: 0 (root)
 gid: 0
 mode: 0755 (rwxr-xr-x)

Enabling SLAG is done through the ONTAP command fsecurity. The fsecurity command requires an SDDL string in a file as input. The SDDL can be written manually or created with the tool secedit.exe, and it can be edited after it is created.

 

Creating the SDDL with secedit.exe

 

When adding the SDDL, select the Storage option to configure SLAG.

SecEditExe1.png

Configure the SACL as required by your organization's needs.

SecEditExe2.png

After creating the SDDL, I edited it to set the SACL to Everyone and to all operations. This is the content of the file:

cb56f6f4
1,1,"/vol/vol2",0,"S:P(AU;CIOISA;0xf01ff;;;Everyone)D:P"

 

I can copy that to a file on the ONTAP controller and apply the SDDL using the fsecurity apply command. After applying the SLAG, this is what fsecurity show displays:

 

snagesh-vsim-3*> fsecurity show /vol/vol2
[/vol/vol2 - Directory (inum 64)]
  Security style: Unix
  Effective style: Unix
 
  DOS attributes: 0x0010 (----D---)
 
  Unix security:
    uid: 0 (root)
    gid: 0
    mode: 0755 (rwxr-xr-x)
 
  Storage-Level Access Guard security:
    DACL (Applies to Directories):
      No entries.
    SACL (Applies to Directories):
      Success - Everyone - 0x000f01ff (Full Control)
    DACL (Applies to Files):
      No entries.
    SACL (Applies to Files):
      Success - Everyone - 0x000f01ff (Full Control)

 

Considerations while deploying an auditing solution using SLAG


UNIX User Mapping

SLAG supports only NTFS style security permissions. This mandates mapping UNIX users to Windows users. Let us understand this with an example.

Two UNIX users, Linux\user1 and Linux\user2, are mapped to the same CIFS user, Windows\user3. Say a file gets deleted by one of the UNIX users; we can't conclusively say who deleted it by looking at the log. The log will say Windows\user3 deleted the file.

To alleviate the problem, diligent mapping of UNIX users to Windows users is required.

 

User access Denials 

Some UNIX user access may get denied after enabling SLAG. This can happen even though we haven't configured a DACL (Discretionary Access Control List), because of missing mapping information for those UNIX users: SLAG considers them CIFS users with missing credentials and fails to evaluate them, leading to denial of file access requests. Ensure a mapping exists for all UNIX users.

 

Clustered Data ONTAP Implementation

In clustered Data ONTAP, UNIX user auditing is based on NFSv4.x ACLs and has no dependency on NTFS ACLs. A similar customer scenario can therefore be supported by setting appropriate NFSv4.x ACLs.

I hope you all enjoyed part 4 of my micro blog tutorial on how to "Allocate Storage pools to Hyper-V host groups". In part 5, I will demonstrate the steps to create a LUN for a standalone Hyper-V host and for a Hyper-V cluster, and how you can use a cluster disk as a Cluster Shared Volume for high availability.


To use the storage exposed via SMI-S in SCVMM, you must allocate storage pools to the Hyper-V host groups. When this procedure is performed the Hyper-V host members of that host group will be suitable for LUN provisioning and LUN creation, only for the selected storage pools.

 

Create a LUN for a Standalone Hyper-V Host

 

Before getting started, make sure that at least one storage pool has been allocated to the host group; for more details on how to allocate storage pools to Hyper-V host groups, refer to part 4 of my micro blog.

In the SCVMM 2012 console, navigate to Fabric; under Servers, select the desired standalone Hyper-V host.

 

Right-click the host and select Properties; on the Storage page, click Add and then "Add Disk".


Untitled.png

Click Create Logical Unit; the LUN creation wizard pops up.

 

Select the storage pool where you want to create the LUN (only storage pools that have been allocated to the host group will be visible), enter a name for the LUN, and select its size. Select either "Create thin storage logical unit with capacity committed on demand" or "Create a fixed size storage logical unit with capacity fully committed", and click OK.

 

The LUN will be created and the wizard will display the LUN information. Select the partition type, volume name, format option, and drive letter that will be used on the Hyper-V host (drive E in this example).

 

Untitled.png

The new disk has been created; the LUN can be seen under Fabric > Storage > Classifications and Pools:

 

Untitled.png

Log in remotely to the Hyper-V host and verify that the disk has been created and is available.

 

Untitled.png

Next, I will demonstrate the steps to create a LUN for a Hyper-V cluster.

 

 

Create a LUN for the Hyper-V Cluster

 

In the SCVMM console, click Fabric > Servers and select the cluster where you want to create a new LUN. Right-click the cluster, select Properties, click Available Storage, and then click Add.

 

Untitled.png

Click Create Logical Unit.

 

Untitled.png

Select the storage pool, enter a name, select the size of the LUN, select either "Create thin storage logical unit with capacity committed on demand" or "Create a fixed size storage logical unit with capacity fully committed", and click OK.

 

After the LUN gets created for the cluster, a new volume is available under Available Storage in the cluster.

 

After the SCVMM job completes, the Hyper-V host members of the cluster will have a new shared volume that can be used to store VMs. The result can be seen in the cluster Properties, in the shared storage section:

 

Untitled.png

 

Next, let's take this available storage and convert it to a CSV, which can later be used to place highly available VMs.

 

Use a Cluster Disk as a Cluster Shared Volume for High Availability

 

Using a cluster disk as a CSV is fairly simple: select the highlighted disk and click Convert to CSV.

 

Untitled.png

Next head over to the "Shared Volumes" tab and verify that the disk shows up as a CSV, you can also verify this by logging into a clustered Hyper-V host -> Failover Cluster Manager -> Shared disks.

 

I hope that you have enjoyed this blog entry  and have found this information helpful.

 

In my next micro-blog on Hyper-V Host storage management i would be talking about how we can  "Create a LUN on the NetApp Storage and Allocate It from SCVMM 2012"

 

Good Luck !

 

Thanks,

Vinith

I hope you all enjoyed part 3 of my micro blog tutorial on how to connect the Hyper-V hosts to NetApp storage. In part 4, I will demonstrate the steps to allocate storage pools to Hyper-V host groups using SCVMM.

 

To use the storage exposed via SMI-S in SCVMM, you must allocate storage pools to the Hyper-V host groups. When this procedure is performed the Hyper-V host members of that host group will be suitable for LUN provisioning and LUN creation, only for the selected storage pools.

 

Under Fabric, select the host group on which you want to allocate the storage pool; in my current scenario it is the "All Hosts" group.

 

Right-click it and select Properties.

 

Untitled.png

 

On the Storage page, click Storage, and then click Allocate Storage Pools.

 

Untitled.png

Click the available storage pool and click Add. This allocates the storage pool to the All Hosts host group so that the Hyper-V hosts can use it. Click OK.

 

Untitled.png

The storage pool is now allocated to the host group; SCVMM will now be able to create and allocate a LUN in this storage pool for the host members of the host group.

 

Untitled.png

 

I hope that you have enjoyed this blog entry  and have found this information helpful.

 

In my next micro-blog on Hyper-V host storage management I will talk about the steps needed to create a LUN for a "Standalone Hyper-V Host" and a "Hyper-V Cluster", and how you can "Use a Cluster Disk as a Cluster Shared Volume for High Availability".

 

Good Luck !

 

Thanks,

Vinith

We will look at how to deploy virtual machines faster using copy offload (ODX) with NetApp storage.

 

With SCVMM 2012 R2, we can now take advantage of copy offload, also known as Fast File Copy, while deploying a virtual machine from the SCVMM library.

 

LAB SETUP

=========

NetApp storage controller running clustered Data ONTAP 8.2.1, with copy offload enabled.

2-node Microsoft Hyper-V 2012 R2 cluster

System Center Virtual Machine Manager 2012 R2

Active Directory domain controller for authentication.

 

Configuration of the NetApp Storage Controller

===================================================

We have two LUNs configured here; they are part of the Vserver LAB_Vserver:

  1. lun_Cluster_ODX is presented to the Hyper-V cluster and is used as a Cluster Shared Volume.
  2. Lun_VMMLib is presented to the SCVMM server and hosts the VMM library.

1.bmp

 

Now let's make sure we have ODX enabled for the Vserver "LAB_Vserver".

We will log in to the cluster CLI and enter diagnostic mode. Once in diag mode, use the command "copy-offload show -vserver LAB_Vserver" to show the status of copy offload.

 

-----------------------------------------------------------------------------------------------------------

 

cluster::*> copy-offload show -vserver LAB_Vserver

 

                     Vserver Name: LAB_Vserver

                 NFS Copy-Offload: -

                SCSI Copy-Offload: enabled

                     QoS Throttle: 0B

Copy-Offload Via Sub-File Cloning: enabled

 

-----------------------------------------------------------------------------------------------------------

Now let's look at SCVMM and the Hyper-V cluster.

 

So we have a Hyper-V cluster with two nodes.

 

2.bmp

 

In the VMM library we have an "ODX Template" which will be used for the demo.

 

This template has Windows Server 2012 R2 Datacenter edition.

 

3.bmp

 

We will create a VM from this template, named "ODX-test".

 

4.bmp

 

Now use the default settings that you specified in your template; the only thing you have to make sure of is that the virtual machine destination is on the LUN that we presented from the NetApp storage above.

 

5.bmp

 

 

In this lab setup, "C:\ClusterStorage\Volume2" is on "lun_Cluster_ODX", as mentioned in the storage layout above.

 

It took 39 seconds to deploy the VM, a full Windows Server 2012 R2 Datacenter edition.

 

6.bmp

 

It will take a few more minutes for customization once this VM boots up before you can use it.

 

==================================================================================================

Dhaval Bhadeshiya

Reference architect - Microsoft Solutions

NetApp


I hope you all enjoyed part 1 and part 2 of my tutorial on how to configure Data ONTAP SMI-S Agent 5.1 - SCVMM integration with the VMM PowerShell cmdlets and GUI.

 

In part 3 of my blog, I will demonstrate the steps that will help you use SMI-S for your day-to-day Hyper-V host storage management activities. I will continue this series with a set of six micro blogs that cover all the scenarios for using SMI-S for Hyper-V storage management.

 

So let's start with the first one.

 

Connecting the Hyper-V hosts to the NetApp Storage using SMI-S

 

Now that the NetApp storage has been added to the SMI-S provider, it can be directly managed from SCVMM 2012. The next step is to give the Hyper-V hosts access to the storage array by creating an iSCSI session on each Hyper-V host.

 

Note: For more details on how to configure Data ONTAP SMI-S Agent 5.1, please refer to part 1 and part 2 of my tutorial.

 

- Open the SCVMM console and go to the Fabric workspace

- Select the first Hyper-V host in Servers > All Hosts

 

Right-click the host and select Properties; under Storage, click "Add" and then select "Add iSCSI Array".

 

1.png

The list of storage arrays added to the SMI-S provider will be displayed; select the desired storage and click "Create".

 

1.png

 

Next you will see the iSCSI array show up; the Hyper-V host now has access to the storage array, and storage allocation and provisioning can be done through SCVMM.

 

1.png

 

Repeat these settings for each Hyper-V host.

 

For more details about the iSCSI Session creation, you can refer to Microsoft’s documentation: How to Configure Storage on a Hyper-V Host in VMM: http://technet.microsoft.com/en-us/library/gg610696

 

I hope that you have enjoyed this blog entry  and have found this information helpful.

 

In my next micro-blog on Hyper-V Host storage management i would be talking about the steps which need to be followed to "Allocate Storage Pool to Hyper-V Host Groups".

 

Good Luck !

 

Thanks,

Vinith

