NetApp for Microsoft Environments - NetApp Blogs


Data ONTAP SMI-S Agent 5.1 can be used to provision and manage storage with Windows Server 2012 and above. To enable this capability in Windows Server 2012 R2, the "Windows Standards-Based Storage Management" feature needs to be added from Server Manager. Storage provisioning and management can then be done using Server Manager as well as PowerShell cmdlets.

 

In this blog post I will show you the steps to configure Data ONTAP SMI-S Agent 5.1 with Windows Server 2012 R2 to effectively manage your Windows Server storage environment.

 

Note: As a prerequisite, Data ONTAP SMI-S Agent 5.1 must already be installed and configured on another Windows server.

 

OK, so let's start by installing the Windows feature "Windows Standards-Based Storage Management". This can be done via PowerShell or using Server Manager.

 

Launch the "Add roles and features wizard" and select the "Windows Standards Based Storage Management" feature and confirm the installation, wait for the installation to complete.

 

1.png

 

You can also install the feature from PowerShell using the cmdlet below.

 

Add-WindowsFeature WindowsStorageManagementService

 

Once the feature installation completes, we also get a new PowerShell module, "SmisConfig", which can be used to search for, register, and unregister SMI-S providers.
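As a quick check, you can list the cmdlets the module provides (a minimal sketch, assuming the module name is SmisConfig as reported by the feature installation):

# List the SMI-S configuration cmdlets added by the Windows Standards-Based Storage Management feature
Get-Command -Module SmisConfig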

 

1.png

 

Next, let's register the SMI-S provider with the server using the Register-SmisProvider cmdlet. For more details on this cmdlet, use:

 

Get-Help Register-SmisProvider -Full

 

smis windows.png

 

Register-SmisProvider -ConnectionUri https://cloud2012dc.virtualcloud.com:5989

 

At this point, if the registration process completes successfully, only basic information (provider and array details) is collected; this is known as level 0 discovery.

 

Verify that the storage provider is registered using the Get-StorageProvider cmdlet.

 

Also verify that the storage subsystems show up correctly using the Get-StorageSubSystem cmdlet.
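Both checks can be run from the same PowerShell session; a minimal sketch:

# Confirm the SMI-S provider registration
Get-StorageProvider

# Confirm the storage subsystems exposed by the provider are visible and healthy
Get-StorageSubSystem | Select-Object FriendlyName, HealthStatus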

 

1.png

 

Now, if we try to list the storage pool information, we get an error that states "The provider cache is empty".

 

1.png

 

To list the storage pool and volume information, we need to initiate a deeper level of discovery using the Update-StorageProviderCache cmdlet and set the discovery level to Level2. Deeper discoveries can take a lot of time, and each discovery level is cumulative, so a Level3 discovery includes everything from Level0 through Level2. A Level3 discovery is required to create a new storage pool from the list of free physical disks.

 

Get-StorageSubSystem | ForEach-Object { Update-StorageProviderCache -StorageSubSystem $_ -DiscoveryLevel Level2 }

 

1.png

 

Wait for the update to complete, and you will see that the storage pools are now listed successfully.

 

1.png
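As noted above, a Level3 discovery is required before a new storage pool can be created from free physical disks. A minimal sketch, keeping in mind that Level3 can take considerably longer than Level2:

# Run a full (Level3) discovery against every registered storage subsystem
Get-StorageSubSystem | ForEach-Object { Update-StorageProviderCache -StorageSubSystem $_ -DiscoveryLevel Level3 }

# The physical disks that are free to be pooled should now be listed
Get-PhysicalDisk -CanPool $true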

 

You can also search for the SMI-S provider servers in your environment using the Search-SmisProvider cmdlet.

 

The Search-SmisProvider cmdlet searches for Storage Management Initiative - Specification (SMI-S) providers available on the same subnet as your server as long as they are advertised through the Service Location Protocol v2 (SLPv2). The cmdlet returns a list of Uniform Resource Identifiers (URIs) that you can use with other cmdlets to register or remove a provider. If the cmdlet finds no SMI-S providers, it returns nothing.
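The discovered URIs can be fed straight back into registration; a sketch, assuming each URI returned by Search-SmisProvider is acceptable to the -ConnectionUri parameter:

# Discover SMI-S providers advertised via SLPv2 on the local subnet and register each one
Search-SmisProvider | ForEach-Object { Register-SmisProvider -ConnectionUri $_ }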

 

1.png

Apart from searching and registering, we can also unregister an SMI-S provider using the Unregister-SmisProvider cmdlet; you can pipe the output of Get-StorageProvider to this cmdlet to unregister a provider.
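A minimal sketch of the unregistration, using the same connection URI we registered earlier (alternatively, pipe the provider object from Get-StorageProvider as described above):

# Remove the previously registered SMI-S provider
Unregister-SmisProvider -ConnectionUri https://cloud2012dc.virtualcloud.com:5989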

 

1.png

 

I hope that you have enjoyed this blog entry and have found this information helpful. In my next series of blog posts, I will take a deep dive into the PowerShell cmdlets that come with the "Storage" module, which we will use to configure and manage NetApp storage via SMI-S. We will also cover Server Manager based management later in the series.

 

Good Luck!

 

Thanks,

Vinith

When we create a CIFS file share on a NetApp clustered Data ONTAP 8.2.1 controller using SCVMM 2012 R2, no permissions are assigned to the newly created CIFS / SMB 3.0 share. Because of this, we are unable to copy any files to the share, as it lacks read/write access.

 

Let me illustrate this with an example. Below you can see a CIFS share (fileshare_smbsharehyperv) that I created on my file share Hyper_V_PR_Quorum using SCVMM 2012 R2 and SMI-S integration.

 

Untitled.png

 

When I try accessing the share via its network path, the share is inaccessible and I receive an access denied error.

 

Untitled.png

 

Now let's examine the permissions of the share from the ONTAP perspective using NetApp OnCommand System Manager. Here you can see that there are no permissions assigned to the share, which is why it is inaccessible.

 

Untitled.png

 

Now let me try assigning this share to my Hyper-V cluster for my Hyper-V over SMB environment. As you can see below, the job fails and I receive an access denied error.

 

Untitled.png

Now let's look at the share again from the ONTAP perspective using NetApp OnCommand System Manager. Here you can see that a new set of permissions has been assigned to the share, including access rights for the owner account that initiated the job and for both computer accounts of the Hyper-V cluster. The account that is not listed here is the VMM service account.

 

Untitled.png

 

So let me grant the VMM service account access to the share. You can do this using the NetApp PowerShell Toolkit cmdlets or via System Manager; let's set the access level to Full Control.
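With the NetApp PowerShell Toolkit, this could look like the sketch below; the cluster address, SVM name, and VMM service account are placeholders for this example, and Add-NcCifsShareAcl is assumed to be available once you are connected with Connect-NcController:

# Connect to the cluster management LIF (placeholder address; credentials will be prompted)
Connect-NcController cluster-mgmt.virtualcloud.com

# Grant the VMM service account Full Control on the share (SVM and account names are placeholders)
Add-NcCifsShareAcl -Share fileshare_smbsharehyperv -UserOrGroup "VIRTUALCLOUD\svc-scvmm" -Permission full_control -VserverContext svm_fileshare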

 

Untitled.png

 

Next, let's restart the failed job in SCVMM. You will see that the job and its sub-jobs complete successfully and SCVMM now has full access to the file share. Similarly, you can add a CIFS / SMB 3.0 share created in SCVMM as a library share directly in SCVMM 2012 R2.

 

Untitled.png

 

Untitled.png

 

Note: Make sure that the file share resolves both by IP and by name from all the hosts that are part of the Hyper-V cluster; DNS resolution problems account for a variety of issues faced with Hyper-V over SMB deployments.
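A quick way to check this from each Hyper-V host is sketched below using in-box cmdlets; the file server name is a placeholder for the CIFS server hosting the share:

# Verify name resolution for the CIFS server (placeholder name)
Resolve-DnsName smbserver.virtualcloud.com

# Verify SMB (TCP 445) connectivity from this Hyper-V host
Test-NetConnection -ComputerName smbserver.virtualcloud.com -Port 445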

 

Next, try creating a VM from a vhdx file placed on a CIFS library share, and enjoy the magic of ODX-based provisioning.

 

Untitled.png

 

 

I hope that you have enjoyed this blog entry and have found this information helpful.

 

Good Luck!

 

Thanks,

Vinith

I hope you all enjoyed my series of blog posts on how to leverage SCVMM and SMI-S integration to effectively manage your Hyper-V storage environment. In this blog post I will demonstrate how you can use a PowerShell script / function that leverages the SCVMM PowerShell cmdlets to rapidly provision virtual machines using SCVMM SAN Copy based VMM templates.

 

Prerequisites for this PowerShell script require that you set up SCVMM SAN-COPY based VMM templates.

 

For more details on how to set it up,  refer to "Data ONTAP SMI-S Agent 5.1 - Use NetApp SMI-S Provider for SAN-Copy based VM Rapid provisioning using SCVMM 2012 R2".

 

In the figure below, you can see the vhdx file used by the VMM template; its "SAN Copy capable" property is set to true.

 

1.png
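You can also check this from the SCVMM PowerShell module; a sketch, assuming the property is exposed on the virtual hard disk object as SANCopyCapable:

# List library vhdx files along with their SAN Copy capability (property name assumed)
Get-SCVirtualHardDisk | Select-Object Name, SANCopyCapable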

 

The figure below shows the VMM template that contains the above vhdx file.

 

1.png

 

Next, let me show you the PowerShell script that works this magic.

 

# Extract the VMM template that contains the SAN Copy capable vhdx file
$VMTemplate = Get-SCVMTemplate | Where-Object { $_.Name -match "VM-SAN-COPY-Template" }

# Extract the Hyper-V host that will be used to place the VMs
$VMHost = Get-SCVMHost -ComputerName "ms-w2k12-1.virtualcloud.com"

# Rapid provision 100 VMs with the same virtual machine configuration and power each one on once it is provisioned
1..100 | ForEach-Object {

    $virtualMachineConfiguration = New-SCVMConfiguration -VMTemplate $VMTemplate -Name "VM-SANCOPY-$_"
    Write-Output $virtualMachineConfiguration
    Set-SCVMConfiguration -VMConfiguration $virtualMachineConfiguration -VMHost $VMHost
    Update-SCVMConfiguration -VMConfiguration $virtualMachineConfiguration
    Set-SCVMConfiguration -VMConfiguration $virtualMachineConfiguration -VMLocation "E:\" -PinVMLocation $true
    Update-SCVMConfiguration -VMConfiguration $virtualMachineConfiguration
    $vm = New-SCVirtualMachine -Name "VM-SANCOPY-$_" -VMConfiguration $virtualMachineConfiguration
    Start-SCVirtualMachine -VM $vm

}

 

My current setup uses a 35GB vhdx file. As you can see in the figure below, the time taken to provision a single VM is about 16 seconds.

 

1.png

 

Once the 100 VMs are provisioned and powered on, let's check the time it took to provision them.

 

1.png

 

As you can see below, I wrapped the Measure-Command cmdlet around my script to measure the time taken to provision the 100 VMs. It took only 38 minutes and 31 seconds to complete.
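The wrapping itself is straightforward; a sketch, where the body is the provisioning loop shown earlier:

# Time the whole provisioning run; TotalMinutes reports the elapsed wall-clock time
$elapsed = Measure-Command {
    1..100 | ForEach-Object {
        # ... the SAN Copy rapid provisioning steps from the script above ...
    }
}
$elapsed.TotalMinutes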

 

1.png

 

So think of the possibilities: if I had created these 100 VMs using a BITS copy of the vhdx files, it would have taken about an hour per VM.

 

This technology can be leveraged for a large scale VDI deployment. You can also invoke a set of PowerShell scripts that carry out configuration activities when the VM boots up, for example to configure IP addresses from an SCVMM IP address pool.
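For the IP address piece, SCVMM can hand out addresses from a static IP pool; a sketch, where the pool name "VDI-Pool" is a placeholder for whatever pool exists in your environment:

# Grant the next available address from an SCVMM static IP pool to a VM's network adapter (pool name is illustrative)
$pool = Get-SCStaticIPAddressPool -Name "VDI-Pool"
$nic  = Get-SCVirtualNetworkAdapter -VM (Get-SCVirtualMachine -Name "VM-SANCOPY-1")
Grant-SCIPAddress -GrantToObjectType "VirtualNetworkAdapter" -GrantToObjectID $nic.ID -StaticIPAddressPool $pool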

 

I hope that you have enjoyed this blog entry and have found this information helpful.

 

Good Luck!

 

Thanks,

Vinith

Simply put, RDMA allows a device to use the network pipe fully.

This is done in a few ways:

  • The first trick RDMA uses is to allow the HBA to access memory directly: the processor pins an address range, hands that range to the remote side, and lets the card retrieve the memory directly. This greatly lowers CPU utilization for large transfers. Once the transfer is complete, the memory can be unpinned.
  • The second trick is to let the card use custom silicon to handle the entire stack, and to optimize that stack for these RDMA transfers. There are three flavors of offload: TCP/IP over Ethernet, custom headers over Ethernet, and custom headers over InfiniBand.
  • The third trick is to use low latency, high bandwidth switches, which in their own right improve any traffic flow.

 

This concept and these methods are not new; consider the following quote:

"RDMA implements a Transport Protocol in the NIC hardware and supports Zero-Copy Networking, which makes it possible to read data directly
from the main memory of one computer and write that data directly to the main memory of another computer. RDMA has proven useful in apps..."

-This quote was written in 2005 (http://searchstorage.techtarget.com/definition/Remote-Direct-Memory-Access), and has been repeated at every tech session since as if it were a revelation.

 

There are three main variants of RDMA:
  • iWARP is the IETF RDMA standard. The switch can be any high speed Ethernet switch. It essentially offloads the TCP stack onto the card and runs across an Ethernet network.
  • RoCE requires Converged Enhanced (CE) switches, also called Data Center Bridging (DCB) switches, so that lossless operation, pause, and priority flow control (PFC) are supported. This protocol runs a simpler stack that is NOT TCP based but still runs on an Ethernet network, much like FCoE does.
  • InfiniBand requires custom host adapters as well as custom switches.

The problem is that misinformation prevails, because the RDMA vendors that sell the hardware have a vested interest in their own bet and take shots at each other publicly.

One of the few voices out there that we would consider independent is Microsoft, since they support all three, but consider that Microsoft has a few different positions here as well. The developer side of Microsoft wants to support all three with the least common denominator of features, in which case they write to iWARP. The production and operations side of the house, however, wants some of those advanced features like lossless operation and PFC, while the group that builds impressive demos and marketing material loves to tout maximum performance without regard to actual datacenter needs.

Let's look at some realistic numbers. These are averages from CDW and NewEgg.

Type   Protocol      Speeds      Switch Requirements                    Routable   Cost (NIC + Switch) per port
RDMA   iWARP         10g         No requirements, but will work on DCB  Yes        $900 (+$275)
RDMA   RoCE          10g-->40g   DCB                                    No         $1000 (+$500)
RDMA   Infiniband    12g-->54g   Infiniband                             No         $1500 (+$300)
       10g Ethernet  10g         No requirements, but will work on DCB  Yes        $400 (+$275)
       FCoE          10g         DCB                                    Yes        $500 (+$500)
       FC            8g-->16g    Fibre Channel                          Yes        $800-->$1600 (+$500)

You probably also have a few requirements when it comes to your servers, the first of which is reliability. You likely require multiple connections from the server to your production network, multiple connections to your storage, and multiple connections to your peer servers.

  • Sample demo rig (high performance, no cost constraints, no consideration for simplicity/reliability/scalability)
    • 2 x 10g for the production network, 2 x 54g IB for peer traffic, and 2 x 16g FC for storage, but this is a lot of wires and networks to manage.
  • Highly scalable/deployable setup (good performance, flexibility if needs change, less to manage, software defined QoS to support needs)
    • 2 x 10g FCoE with QoS for the production network and storage access, plus 1 x 10g RoCE for peer servers. Since these can all live on DCB-type switches, there is only one set of networks to manage. I would only need a single RoCE adapter, since SMB would shed traffic to the non-RDMA adapter if a failure happened on the RDMA network (see the sketch after this list for a quick way to verify what SMB is actually using).
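Here is a quick way to confirm whether your adapters are RDMA capable and whether SMB is actually using them; a sketch using the in-box Windows Server 2012 R2 cmdlets:

# Show which network adapters report RDMA capability and whether it is enabled
Get-NetAdapterRdma

# Show the SMB client interfaces and whether they are RDMA (and RSS) capable
Get-SmbClientNetworkInterface

# Show active SMB multichannel connections and their RDMA capability
Get-SmbMultichannelConnection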

You may find that the performance difference between the two sample designs above can be made up in favor of the simpler design by simply using the next faster processor step, or slightly more system memory. People generally underestimate what well designed 10g Ethernet on DCB switching can do.

Do you want to deploy Converged Ethernet switching, knowing that it will support both iWARP and RoCE, or purchase non-CE Ethernet switches as well as InfiniBand switches to support? If you are currently deploying iSCSI using software-based initiators, you may find a significant CPU reduction by moving to an FCoE type connection, since all of that work is then moved off to the FCoE card.

 

I really want to hear your experiences with RDMA. Have you had good, neutral, or bad experiences with it in a production environment?

I hope you all enjoyed my blog on how to "Use NetApp SMI-S provider for Rapid provisioning VM's using SAN Copy". In this blog post, I will demonstrate how you can use the NetApp SMI-S provider to migrate virtual machine storage using SAN copy.

 

For VMs running on a cluster, SCVMM will use live migration to move VMs within the cluster; SAN migration is used to move VMs across different clusters or between standalone hosts.

Let us look at an example that shows how we can migrate VMs across standalone hosts using the VM migration and SAN migration process.

 

Open the SCVMM console, select the VM placed on a standalone host, right-click it, and select Migrate Virtual Machine.

 

Untitled.png

Select the appropriate standalone host; you will see the Transfer Type listed as SAN. Click Next.

 

Untitled.png

Click Browse and select the desired destination folder, click OK, and then click Next.

 

You will see that the destination drive has (SAN Migration Capable) listed on the description page.

 

Untitled.png

Review the summary page and click Move to migrate the VM. Monitor the status of the job and confirm that it completes successfully; in the job status you will see that the "Deploy file" step lists the transfer type as SAN transfer.

 

Untitled.png

 

I hope that you have enjoyed this blog entry and have found this information helpful.

 

Good Luck!

 

Thanks,

Vinith
