Just curious if anybody has ever used a NetApp V-Series system to virtualize a 3rd party SSD array? We are seeing some issues during times of high IO on the data being hosted on the 3rd party SSD LUNs.
We have two aggregates hosted on a v3240:
aggr0 - 44 NetApp SAS disks
aggr1_ssd - made up of SSD-backed LUNs from a Clariion CX4.
Protocol: Fibre Channel between host and NetApp. FCP is also used between the NetApp and the Clariion.
It appears that NVRAM is flushing data to aggr1_ssd too quickly during times of high load. We know the write cache on the Clariion is smaller than NetApp's NVRAM, so we suspect that NVRAM flushes are filling the Clariion's write cache faster than the Clariion can destage it to disk. As a result, we see latency across all workloads on the v3240 (including those hosted on aggr0), because NVRAM is shared by both aggregates.
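If the theory above is right, it should show up as back-to-back consistency points on the controller. On 7-Mode this is visible with sysstat (the 1-second interval is just our choice):

```shell
# Print extended stats every 1 second. The "CP ty" column shows the
# consistency point type -- a sustained run of "B" (back-to-back CPs)
# means WAFL cannot flush NVRAM to the back-end LUNs fast enough,
# which would explain latency bleeding into aggr0 workloads.
sysstat -x 1
```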
We are entertaining the thought of disabling the write cache on the Clariion but don't know if that would make the problem worse. It would be great if we could configure all writes to aggr1_ssd to bypass NVRAM, but as far as I know that is not possible.
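Before changing anything on the array side, the current cache state on the CX4 can at least be confirmed with Navisphere CLI (the SP hostname below is a placeholder for one of your storage processors):

```shell
# Query the storage processor's cache configuration on the CX4:
# shows whether read/write cache is enabled and the configured sizes.
# "spa-hostname" is a placeholder -- substitute your SP's address
# and whatever authentication options your environment requires.
naviseccli -h spa-hostname getcache
```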
We just opened a case with EMC to analyze our NAR files, but I thought I'd see if anybody else has experience virtualizing 3rd party SSD on a V-Series machine. We also have IO/throughput maximums for the CX4 but are still waiting on the NAR file analysis to determine whether we are hitting any bottlenecks there.