Currently I have a FAS 3170 in an active/active configuration with 4 trays of disks; each tray has 14 disks that make up a RAID-DP group, so there are 4 RAID groups in total. 2 trays of disks have been assigned to each controller, and each controller's disks have been combined into an aggregate, making a total of 2 aggregates in the array of 28 disks each. Each aggregate contains 3 volumes, for a total of 6 volumes in the array.
What I would like to achieve here is to combine all 56 disks (the 4 RAID groups/trays) into a single aggregate. The aim of this is to improve the read/write performance of the array by essentially doubling the number of spindles available for I/O operations.
The array is soon to be taken out of production and available for a teardown/rebuild so it would be good to know if this setup is:
a) worthwhile doing
b) actually technically possible and if so, the best way to do this
c) a supported and/or recommended practice by NetApp (this isn't a big thing but good to know regardless)
Some other information that may be of use:
- The array is used only for VMs in a VMware environment.
- Each controller has 2 physical NICs.
- Current disk I/O is very low and is usually 60-70% read, 70-80% random.
Any feedback on the above would be great, thanks!
Thanks for the reply. I am aware that 2 controllers cannot share an aggregate, and there is no additional 5th shelf.
I believe that in my current active/active setup, if a controller fails, the other controller can take over its aggregate/volumes so that everything remains accessible. What I'd like to do is keep this setup, but assign all the disks to one controller to form a single large aggregate; if that controller fails, the other controller can take over the aggregate and ensure storage continuity. So technically the active/active setup will remain, except that one of the controllers just won't have much to do during normal operation. Is this possible?
Understood. Yes, this is possible. You need a root aggregate on the mostly-passive node, so 3-5 drives there: 3 for the root aggregate and 2 spares, though you could possibly get away with fewer. Then reassign all the other disks to the other node. This would require zeroing the disks coming from the one node, but you can assign disks asymmetrically like this. We have some customers who don't require the controller performance and do this for failover. I like leveraging both controllers, but if spindles are the bottleneck, I can see doing this.
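A rough sketch of the reassignment in 7-Mode, assuming disk autoassign is switched off first; the disk name (0a.16) and aggregate name are placeholders, and you should verify the exact syntax against the documentation for your ONTAP release:

```
# On both nodes: stop automatic ownership assignment first
options disk.auto_assign off

# On the node giving up a disk: release ownership
disk assign 0a.16 -s unowned -f

# On the node taking the disks: claim them, then zero the new spares
disk assign 0a.16
disk zero spares

# Build the large aggregate from the reassigned spares, e.g. RAID-DP
# with a raidsize of 14 to match the existing trays (disk count is
# illustrative - leave some spares)
aggr create bigaggr -t raid_dp -r 14 50
```

Repeat the unassign/assign pair per disk (or per range). Zeroing can take hours on large disks, so plan it into the rebuild window.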
Thanks for the responses guys, always happy to get constructive feedback. I would assign 3 disks to one controller in a RAID4 group to comply with NetApp best practices.
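For the mostly-passive node, a minimal 3-disk RAID4 root aggregate might look like the sketch below; the aggregate/volume names and the root volume size are placeholders, not values from the thread:

```
# 3-disk RAID4 aggregate for the root volume (2 data + 1 parity)
aggr create aggr0 -t raid4 3

# Create the root volume on it and mark it as root;
# a reboot is needed for the new root volume to take effect
vol create vol0 aggr0 160g
vol options vol0 root
```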
I look at this two ways:
1. I make use of both controllers with 28 disks assigned to each. I get 15TB of capacity and decent disk I/O capability. 28 disks is not a lot for one of these controllers to manage, so both controllers sit around with minimal CPU activity or any signs of stress.
2. I assign 53 disks to one controller and 3 disks to the other. I render one controller virtually unused and keep it purely for HA purposes (still active/active, though). I still get close to 15TB of capacity but much greater disk I/O capability! A single controller is capable of managing 420 disks, so 53 should not push a controller to the point of becoming a bottleneck.
If there were in excess of 10 shelves and the controllers were being moderately pushed by storage activity, then of course I would stay with the current setup (option 1 above). But I don't want to actively use two controllers just "because they are there" if there are no real gains to be had.
I do wonder if I'm missing some key piece of information as to why this setup wouldn't be feasible but at the moment I'm leaning toward option 2.
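The spindle math behind the two options can be sketched quickly. This assumes the default RAID-DP group size of 14 (matching the trays) and two parity disks per group; it only counts data spindles and ignores per-disk usable size, which varies by disk model:

```python
import math

def data_disks(total_disks, raid_group_size=14, parity_per_group=2):
    """Disks actually serving data once total_disks are split into
    RAID groups of at most raid_group_size, each losing parity disks."""
    groups = math.ceil(total_disks / raid_group_size)
    return total_disks - groups * parity_per_group

# Option 1: two aggregates of 28 disks each -> 24 data spindles apiece
# Option 2: one aggregate of 53 disks       -> 45 data spindles
opt1 = data_disks(28)
opt2 = data_disks(53)
print(opt1, opt2, opt2 / opt1)  # 24 45 1.875
```

So under these assumptions a volume on the big aggregate sees roughly 1.9x the spindles it would on one of the current 28-disk aggregates.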
I have seen it in the upgrade guide for 8.0.3 (https://library.netapp.com/ecmdocs/ECMM1253884/html/upgrade/GUID-1A70BD32-D54D-443F-9E5E-C97D8E420189.html):
In Data ONTAP 8.0.2 and later releases, automatic background disk firmware updates are available for non-mirrored RAID4 aggregates, in addition to all other RAID types.
Something like this must be mentioned in the release notes, but I could not find it there either.
Addition: Automatic background disk firmware updates not enabled for non-mirrored RAID4 aggregates
This bug is marked as fixed in 7.3.7P1, which implies that it is also supported in 7.3.7? Nothing in the release notes that I can find...
Background disk firmware update (BDFU) was turned on for non-mirrored RAID-4 aggregates starting with the 8.0.2 release. Unfortunately, upgrades from 7.3.x did not have the option turned on; new installations do have it on. The upgrade issue is fixed in 8.0.4.
Up to that point, BDFU is on only for RAID-DP and mirrored RAID-4 aggregates.
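If you hit an upgraded system where the flag was left off, the nodeshell option involved is, to my knowledge, the one below; verify the exact option name against the documentation for your ONTAP release before changing it:

```
# Check whether background disk firmware updates are enabled
options raid.background_disk_fw_update.enable

# Turn it on if the upgrade left it off (assumed option name;
# confirm against the release notes for your version)
options raid.background_disk_fw_update.enable on
```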