This topic comes up from time to time, and yesterday it was similar to the others, except with a bit more humour injected.  One of my peers was working with a customer who didn’t believe large volumes/datastores (over 16TB) were possible, supported, worked properly, could be resized, and so on.  I commented that it sounded like a double-dog dare, and the conversation degraded from there, eventually including “guess the movie” references and a link for the winner.


Anyway, I had been doing some work with VAAI, and one of the tests was with small-to-medium LUNs on large aggregates.  So I still had the gear mostly set up: a FAS3140 with 6 shelves of 14x 450GB disks.  I loaded it up with Data ONTAP 8.0.1 (this works the same on 8.0) and created one big aggregate.


The things I’m doing here are pretty simple, as you will see, and can be done through CLI, the separate GUIs of vSphere and NetApp, or using the NetApp Virtual Storage Console plug-in for vCenter.


So, I have my aggregate.  I create a 10TB volume and export it.  I eliminate the snapshot reserve and schedule, since they don’t apply to this test, and that allows me to present even more space to VMware.
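For reference, the controller side looks something like this in 7-mode ONTAP 8.0.x.  The aggregate and volume names here are placeholders for illustration, not necessarily what I used:

```shell
# Hypothetical names; adjust the aggregate, volume and export options to taste
vol create bigvol aggr1 10t         # 10TB flexible volume on the big aggregate
snap reserve bigvol 0               # remove the snapshot reserve
snap sched bigvol 0 0 0             # disable the snapshot schedule
exportfs -p rw,anon=0 /vol/bigvol   # export it over NFS
```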



Next, connect the ESX server to it.  The wizard in vCenter is super-easy, but so is the command line.
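From the ESX 4.x service console, mounting the export takes a couple of commands.  The filer hostname and datastore name below are made up for illustration:

```shell
# Hypothetical hostname and names
esxcfg-nas -a -o filer1 -s /vol/bigvol bigdatastore   # add the NFS datastore
esxcfg-nas -l                                         # list NFS mounts to confirm
vmkfstools -P /vmfs/volumes/bigdatastore              # show capacity and free space
```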




Note that vmkfstools, which includes the equivalent of df (the display-free-space command), shows a 10TB datastore.  Let’s have a look at vCenter.  The datastore may not show up at first, but trust me, it’s there.  Just click Refresh.




So, let’s make it bigger.




That’s it – one command.
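That one command, on the controller, is just a vol size; the names and sizes here are placeholders:

```shell
vol size bigvol 20t    # grow to an absolute size...
vol size bigvol +10t   # ...or grow by a relative amount
```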




ESX already sees the space.  In vCenter, you need to refresh again.




For grins, let’s go to the max I can do with the current aggregate.








(No, you can’t quite make it the size df -Ah shows, but pretty close.)
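On the controller, that amounts to checking the aggregate’s free space and setting the volume to just under it.  The size below is a placeholder, not the actual figure from my aggregate:

```shell
df -Ah                 # aggregate-level space, human-readable
vol size bigvol 25t    # placeholder: set this just under what df -Ah reports free
```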







and vCenter




So, let’s shrink it.  But first, I’m going to put some VMs out there.  I won’t use NetApp Rapid Cloning Utility (RCU – now part of VSC 2.0) since I DON’T want space-efficient clones – I want the space actually used up.  Let me do that and I’ll be back after it’s done.



OK, I’m back.  I copied back some VMs I had archived, registered and started 3 of them, and copied in some other junk because the thin-provisioned VMs in NFS didn’t occupy much space.  So now there is 66GB of stuff in the datastore.



Let’s try shrinking.  In these two screens I have the commands and results of a series of shrink commands.  First, chop off 15TB, then 10TB.  I have a VM console window open with the system clock running so I can see if the VM stops in any way.  It’s hard to show that in a blog without recording a video, but trust me on this, the second hand ticks along at one tick per second throughout.
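The shrink commands are the same vol size syntax, with negative deltas (placeholder names again):

```shell
vol size bigvol -15t   # chop off 15TB
vol size bigvol -10t   # then another 10TB
```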



After removing 25TB, I try to resize it to less than the space used by the VMs and junk – and as you see it won’t let me.
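The failing attempt looks something like this: ONTAP refuses because the new size would be below the space already in use (the exact error wording varies by release):

```shell
vol size bigvol 50g    # less than the ~66GB in use; ONTAP rejects this
```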






vCenter is happy with all this, too, once you refresh. 



Note that you will need to refresh before creating VMs that use the added capacity; otherwise you will get warnings that the datastore isn’t big enough, even though it really is.  vCenter will still let you create the VM, though, since it’s thin provisioned.


I can go on, shrinking and growing over and over, but I’m sure you get the picture.


So, to answer the original question, ESX use of very large NFS datastores is not limited by anything in ESX – it will use whatever the storage controller can present.


However, a single VMDK may not be larger than 2TB - 512B.  This is actually a VMFS limit, but it is applied to NFS datastores as well.
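For the curious, that limit works out as follows (a quick sanity check in shell arithmetic):

```shell
# 2TB minus one 512-byte sector, in bytes
echo $(( 2*1024*1024*1024*1024 - 512 ))
# prints 2199023255040
```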


Share and enjoy!


On Wednesday last week, Cisco and NetApp announced their end-to-end FCoE solution and VMware’s validation of that solution.  This checks off another validation of the partnership between VMware, Cisco and NetApp.

NetApp has had FCoE target adapters for about 2 years now, and since that time has expanded the offering to include unified target cards (that support both FCoE and other IP protocols at the same time), FCoE-capable 10 GbE switches, and Converged Network Adapters (CNAs) to connect the host.  That means you can buy an end-to-end, validated FCoE solution from NetApp or your favourite NetApp VAR.

Starting from the host, NetApp carries the Brocade BR1022, Emulex OCe10102, and QLogic QLE8152 all with or without SFP+ optical modules.

For switches, while we support and sell the Brocade 8000, this article is about Cisco, and from them NetApp offers the Nexus 5010 and 5020 DCB switches (along with the Nexus 2148T Fabric Extender, which, with mainly 1 GbE ports, is not FCoE but is part of the overall Data Center Ethernet solution).


On the storage end, NetApp supports the dual-port FCoE Unified Target Adapter (the storage equivalent of a CNA – remember: initiator on the host and target on the storage) on all FAS models with PCIe slots (FAS2050, FAS3040/3070, FAS3100 series and FAS6000 series) and their V-Series equivalents.  So far, the FAS3100 series and FAS6000 series are supported by VMware, while the team cranks through the rest of the certifications.


In fact, as of today, the only systems from any vendor listed on VMware’s Compatibility Guide under FCoE are the NetApp FAS3100 and FAS6000.  (If that link doesn’t work, click this link, select “Storage/SAN”, then next to Array Type select “Fiber Channel over Ethernet (FCoE)”, then click Search.)  Note also that the listing includes the brand-new ESX 4.1 (and therefore vSphere 4.1).

This validation demonstrates the partnership between Cisco, NetApp and VMware. 


Consider this:  You can’t really test any of these components by themselves.  They really only work as an end-to-end solution together.  You need an HBA in the server, target ports in the storage and a switch to connect them.  So, it makes perfect sense to validate an end-to-end solution using the only storage platform with an FCoE target in the storage controller.

Share and enjoy!