This is my first discussion and I want to ask a smart question. I have searched this forum but couldn't find any related discussion.
It's clear that there is a limitation where an ESX host can only see 255 LUNs. We work with Raw Device Mapping (RDM) capabilities.
In our first approach, we have a NetApp storage device with a single igroup, so it seems normal that we started to have problems when we created more than 255 LUNs.
There is still something pending to test, but it seems to me that the better strategy is to separate the LUNs into different igroups.
Is that correct?
Is there a clear best practice for how to organize igroups and LUNs to handle this limitation? (I don't want to lose HA or vMotion capabilities.)
Thanks for your quick response
I don't want to connect more than 255 LUNs to a single ESX host. I want to use more than 255 in an ESX cluster.
Let me explain my environment with an example; it's simple (maybe that's where my problems come from):
I have a NetApp where I have created a single igroup called iGroupExample.
I have a vCenter with 10 vSphere servers.
All 10 vSphere servers are included in iGroupExample.
I have created 300 LUNs mapped to iGroupExample.
Then... the problem
When I want to assign LUN number 270 as an RDM to a VM on vSphere server 1, I can't see the volume, because ESX can't present more than 255 volumes to the host.
I thought of an alternative to work around this problem:
- I can create 2 igroups, iGroupExample1 and iGroupExample2
- 5 vSphere servers are allowed on iGroupExample1 and the other 5 on iGroupExample2
Then I map 150 LUNs to iGroupExample1 and another 150 LUNs to iGroupExample2.
It seems that with this model the first 5 servers can manage 150 LUNs and the other 5 servers can manage the other 150 LUNs.
Is that assumption correct?
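The split can be sketched in a few lines of Python (hypothetical host and LUN names; this only models which host sees which LUN, it does not call any real NetApp or VMware API):

```python
# Model the two-igroup split: each host sees only the LUNs
# mapped to the igroup(s) it belongs to (example data, assumed).
igroups = {
    "iGroupExample1": {"hosts": [f"vsphere{i}" for i in range(1, 6)],
                       "luns": set(range(0, 150))},
    "iGroupExample2": {"hosts": [f"vsphere{i}" for i in range(6, 11)],
                       "luns": set(range(150, 300))},
}

def visible_luns(host):
    """Return the set of LUN IDs a given host can see."""
    seen = set()
    for group in igroups.values():
        if host in group["hosts"]:
            seen |= group["luns"]
    return seen

# Each host now sees only 150 LUNs, safely under the 255 limit...
print(len(visible_luns("vsphere1")))    # 150
# ...but a VM whose RDM is LUN 270 can only run on hosts 6-10,
# so a vMotion to vsphere1 would fail:
print(270 in visible_luns("vsphere1"))  # False
print(270 in visible_luns("vsphere7"))  # True
```

The last two lines are exactly the vMotion problem described below: the split keeps each host under the limit, but it also partitions which hosts can run which VMs.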
The problem is that all 10 of these servers are in the same cluster, so my operators should be able to vMotion a VM to any of the 10 servers. But they can't, because of this restriction.
Is there any way to handle this problem? Or maybe I am doing something wrong in my infrastructure design?
You can only ever have 255 LUNs mounted on any given ESX server; that's an ESX limit. However, you really should have all LUNs connected to all ESX servers in a cluster (a VMware best practice); otherwise it becomes very confusing to move VMs around with vMotion, as you will have hosts that certain VMs can't reach.
I think you have 2 options.
First, use fewer but larger LUNs. Why so many LUNs? With VMFS-5 you can have very large LUNs holding hundreds of VMs with no performance loss, so why not reduce the number of LUNs? Besides, you will find host reboots to be very slow with 255 LUNs, due to long HBA rescans.
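As a rough illustration of the consolidation math (the VM count and VMs-per-datastore figure are assumed example numbers, not a sizing recommendation):

```python
# Back-of-the-envelope: replacing many small LUNs with fewer
# large VMFS-5 datastores (assumed example numbers).
import math

total_vms = 300          # assumed: VMs currently using one LUN each
vms_per_datastore = 50   # assumed: VMs consolidated per large VMFS-5 LUN

luns_needed = math.ceil(total_vms / vms_per_datastore)
print(luns_needed)       # 6 datastore LUNs instead of 300
```

Even a much more conservative consolidation ratio would keep the cluster far below the per-host LUN limit.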
Or you could break your 10 hosts into multiple smaller clusters and then zone the LUNs as you described. There's not really any point in having a large cluster if all the LUNs are not presented to all hosts.
We're having similar concerns with the number of LUNs per ESX cluster. To answer the question about the number of LUNs, one case in point is SAP (on Windows). In order to meet best practice (SAP's and NetApp's for SnapManager for SAP) we need 7 RDM LUNs per SAP instance (sapdata1-4, origlogA-B, oraarch). We've moved as much as we can onto VMDKs, but 7 seems to be the fewest we can have in our environment. As a result we are limited to around 37 SAP instances per cluster, which is not many once you account for dev/test and the clones used for verifications, etc.
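The per-cluster ceiling described above follows directly from the two numbers in the post:

```python
lun_limit = 256        # LUNs visible per ESX cluster (VMware limit)
rdms_per_instance = 7  # sapdata1-4, origlogA-B, oraarch

max_instances = lun_limit // rdms_per_instance
print(max_instances)   # 36, i.e. "around 37" instances per cluster
```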
The 256 LUN limit is one imposed by VMware, and I'm not sure if they plan on increasing it soon. You should relay this back to your VMware account teams.
One other thing you could do is leverage NFS for your general-purpose VM datastores (thus saving you a few LUNs), or map some of the LUNs directly to the VMs using in-guest iSCSI, again saving some of the 256 LUNs the ESX cluster can see. You will of course have to see whether these would work in your environment, but 256 is a hard limit that we can't affect.