This topic comes up regularly, and yesterday’s instance was like the others, except with a bit more humour injected. One of my peers was working with a customer who didn’t believe large volumes/datastores (over 16TB) were possible, supported, worked properly, could be resized, and so on. I commented that it sounded like a double-dog dare, and the conversation degenerated from there, eventually including “guess the movie” references and a link for the winner.
Anyway, I had been doing some work for VAAI, and one of the tests used small-to-medium LUNs on large aggregates, so I still had the gear mostly set up: a FAS3140 with 6 shelves of 14x 450GB disks. I loaded it up with ONTAP 8.0.1 (this works the same on 8.0) and created one big aggregate.
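For reference, creating an aggregate like that is a one-liner. The aggregate name and disk count below are my placeholders (six shelves of 14 disks, less spares and the root aggregate); the `-B 64` flag requests a 64-bit aggregate, which is what lets ONTAP 8.0 go past the old 16TB limit:

```
# Illustrative: a 64-bit RAID-DP aggregate named aggr1 using 80 of the 84 disks
aggr create aggr1 -B 64 -t raid_dp 80
aggr status aggr1
```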
The things I’m doing here are pretty simple, as you will see, and can be done through CLI, the separate GUIs of vSphere and NetApp, or using the NetApp Virtual Storage Console plug-in for vCenter.
So, I have my aggregate. I create a 10TB volume and export it. I eliminate the snapshot reserve and schedule since they don’t apply to this test, which also lets me present even more space to VMware.
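On the controller side, those steps look something like this (the volume and aggregate names are illustrative, not the ones from my setup):

```
vol create bigvol aggr1 10t      # 10TB flexible volume on the big aggregate
snap reserve bigvol 0            # drop the default 20% snapshot reserve
snap sched bigvol 0 0 0          # no scheduled snapshots for this test
exportfs -p rw /vol/bigvol       # export it over NFS
```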
Next, connect the ESX server to it. The wizard in vCenter is super-easy, but so is the command line.
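From the ESX service console, the command-line version is a single `esxcfg-nas` call; the filer hostname, export path, and datastore label here are placeholders:

```
# Mount /vol/bigvol from the filer as a datastore named "bigdatastore"
esxcfg-nas -a -o filer01 -s /vol/bigvol bigdatastore
esxcfg-nas -l                    # list NAS datastores to confirm the mount
```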
Note that vmkfstools, which includes an equivalent of df (the display-free-space command), shows a 10TB datastore. Let’s have a look at vCenter. The datastore may not show up at first, but trust me – it’s there. Just click Refresh.
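The quick way to check from the service console (the datastore name is a placeholder):

```
vmkfstools -P /vmfs/volumes/bigdatastore   # reports capacity and free space
```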
So, let’s make it bigger.
That’s it – one command.
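The one command in question is ONTAP’s `vol size`, run while the datastore stays online; the volume name and target size are my placeholders:

```
vol size bigvol 16t       # grow the volume to 16TB in one step
# or relative:
# vol size bigvol +6t
```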
ESX already sees the space. In vCenter, you need to refresh again.
For grins, let’s go to the max I can do with the current aggregate.
(No, you can’t quite make it the size df -Ah shows, but pretty close.)
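Sketched out as commands, with the numbers illustrative rather than the exact figures from this aggregate:

```
df -Ah                    # see how much the aggregate has left
vol size bigvol 25t       # grow to (nearly) everything the aggregate can hold
```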
So, let’s shrink it. But first, I’m going to put some VMs out there. I won’t use NetApp Rapid Cloning Utility (RCU – now part of VSC 2.0) since I DON’T want space-efficient clones – I want the space actually used up. Let me do that and I’ll be back after it’s done.
OK, I’m back. I copied back some VMs I had archived, registered and started 3 of them, and copied in some other junk because the thin-provisioned VMs in NFS didn’t occupy much space. So now there is 66GB of stuff in the datastore.
Let’s try shrinking. These two screens show the commands and results of a series of shrink operations: first I chop off 15TB, then another 10TB. I have a VM console window open with the system clock running so I can see if the VM stalls in any way. It’s hard to show that in a blog without recording a video, but trust me on this, the second hand ticks along at one tick per second throughout.
After removing 25TB, I try to resize it to less than the space used by the VMs and junk – and as you see it won’t let me.
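The shrink sequence, as ONTAP commands (volume name and sizes are illustrative, and I’m omitting the exact error text):

```
vol size bigvol -15t      # chop off 15TB while the VMs keep running
vol size bigvol -10t      # another 10TB
vol size bigvol 50g       # below the ~66GB actually in use: ONTAP refuses
```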
vCenter is happy with all this, too, once you refresh.
Note that you will need to refresh before creating VMs that use the added capacity; otherwise you will get warnings that the datastore isn’t big enough even though it really is. (vCenter will still let you create the VM, since it’s thin provisioned.)
I can go on, shrinking and growing over and over, but I’m sure you get the picture.
So, to answer the original question, ESX use of very large NFS datastores is not limited by anything in ESX – it will use whatever the storage controller can present.
However, a single VMDK may not be larger than 2TB minus 512 bytes. This is actually a VMFS limit, but it is applied to NFS datastores as well.
Share and enjoy!