If you have read any of the previous articles on this site, you know the intent is to dig through all of the marketecture and slideware (my terms) and tell you what things are, how they work, and how you might effectively use them. The articles focus exclusively on Ethernet Storage topics. My history in the industry has been primarily with Cisco, but I have worked with other networking products as well. Most of the posts on this site will focus on configuration with Cisco solutions, but if a configuration example I have provided is intended to be used with another product, I hope to give you enough information to understand how that might work. If you still have questions, fire off an email to me and I will be happy to respond.
Back to our topic today. There are several terms out there for building an EtherChannel which spans multiple switches. New terms pop up and immediately everyone wonders if they are supported by NetApp. The very short answer is that all EtherChannels which are compliant with 802.3ad LACP and/or Static EtherChannels are 100% supported by NetApp. We write our technologies to work with the standard, not with any one company's packaged product. In all of the examples today we are going to focus on multiple switch EtherChannel technologies from Cisco, but any EtherChannel technology from any company which conforms to the statement above is supported by NetApp for multimode VIF data-serving workloads.
NetApp only certifies Ethernet switches for its ONTAP GX and ONTAP 8 Cluster-Mode solutions. If you happen to be running either of these ONTAP releases, then the statement of support is narrowed to the switching platforms which have been certified for those releases.
If you read The MultiMode VIF Survival Guide you learned about how EtherChannels work and really how the technology started out as a switch to switch or router connectivity solution.
The technology evolved to support other devices like servers and appliances. The early days of EtherChannel were great but still left us wanting more. The primary areas of needed improvement were the following:
1.) Better load-balancing algorithms
2.) Switch diversity, or switch spanning support, for active load-balancing and failover purposes.
Load-balancing algorithms have evolved over the years to the point where, if you use EtherChannels, you can leverage many different features to best meet your performance requirements. In an upcoming post we will discuss a new load-balancing algorithm available in Data ONTAP 7.3.2. Stay tuned for that one; I think you will be interested in it.
Switch diversity is our topic today. The technologies mentioned at the start of the article are intended to address this area of needed functionality.
The problem was that when we built an EtherChannel, we were required to terminate all physical links into a single Ethernet switch. If we wanted that EtherChannel terminated into another Ethernet switch, then we were required to build a second EtherChannel and establish some means of designating one channel active and the other passive.
NetApp’s answer to this was the single mode VIF. We allow you to take two EtherChannels and roll them into a second level virtual interface, which establishes one channel as active and one as passive. If all links in the active EtherChannel stop functioning, the single mode VIF will activate the standby EtherChannel without any service interruption. The fundamental problem with this is that those standby interfaces are passive and not in use. A popular show in the 90s was Home Improvement, and Tim “The Tool Man” Taylor always used to say we need “MORE POWER”. In many networking minds, it is a waste to have 1Gbps or 10Gbps interfaces just waiting to be used on the chance a failure occurs.
The reason we had to do this lies in the way EtherChannels work. The physical interfaces which are bonded to create the EtherChannel share a virtual MAC address. That virtual MAC address exists on both end points. If we are referring to a NetApp-to-switch connection, then there is a virtual MAC for the VIF interface on the NetApp and a virtual MAC address for the switch. LACP actually uses an additional identifier, the LACP system-id. You must be capable of sharing the virtual MAC address and/or LACP system-id between two independent chassis for a multiple switch EtherChannel to function.
To accommodate diverse chassis EtherChannels, manufacturers simply had an engineering problem to solve. They had to figure out how to get the system-id and the virtual MAC address over to the remote chassis and ensure that those IDs could be actively used by both sides. Solving that problem enabled support for the diverse chassis EtherChannel feature. In nearly all scenarios the answer is to cluster the physical switches into one logical switch, but recently a new concept has been introduced which we will cover later in the article.
Each of these technologies leverages a different engineering technique to get the virtual MAC address and LACP system-id to the other chassis.
These technologies from Cisco are:
Cross-Stack EtherChannels
Multi-Chassis EtherChannels (MEC)
Virtual Port Channels (vPC)
Now why all of the different names? The truth is in the statement above: they each use different engineering techniques that are specific to a particular product family. Each accomplishes the same functionality from the perspective of a NetApp controller; they are simply named differently because of the different architectures and solutions they represent.
So let's go through them.
Cross-Stack EtherChannels - This refers to the Catalyst 3750 line of switches. The Catalyst 3750 uses StackWise technology, which is enabled by connecting the multiple switches in “the stack” together through high bandwidth stacking ports and cables engineered specifically for this function.
This technology effectively clusters the switches in the stack together and supports a maximum of 9 units in a stack. The stacked switches establish a cluster by electing an administratively defined stack master, and if the stack master fails a new stack master is elected. Configurations of all switches can be modified and saved from the master switch. All configurations are synchronized between all switches in the stack based on the master's configuration.
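As a side note, you can influence which switch wins the stack master election by assigning each member a priority (1-15, highest wins). A minimal sketch, assuming we want switch 1 to be the master:

```
3750-stack(config)# switch 1 priority 15
```

The priority value is stored per member, so it survives reloads and keeps the election deterministic.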
The configuration in a stack of Catalyst 3750s uses the following logic. Let's assume we have four switches in our stack as seen in the picture here.
Each switch within the stack is assigned a chassis number. Because Catalyst 3750 Series switches are generally fixed configuration switches, each contains only one module, which is designated by the number zero. Each port is numbered starting from 1 through the total number of ports in that chassis.
If we were to modify the settings of Gigabit Ethernet port 24 on switch 3, we would enter the following command.
interface GigabitEthernet 3/0/24
The 3 designates the chassis number in the stack
The 0 designates the module, of which there is only one, so the number is zero
The 24 designates the port.
NOTE: The first module in most Cisco devices is designated by the number 0. The first port in a Cisco switch is designated by the number 1. The first port in a Cisco router is designated by a zero, unless it is a VLAN interface, which always starts with 1. Did you get all that? There will be a test.
The virtual MAC address and LACP system-id used for Cross-Stack EtherChannels are based on the MAC address of the stack master. If the stack master fails and a new stack master is elected, a new stack MAC address and LACP system-id will be used.
This new MAC address and LACP system-id will cause the EtherChannel(s) to flap. You can prevent this from occurring by entering the following command.
stack-mac persistent timer 0
This command forces the new stack master to maintain the old master's MAC address and LACP system-id.
Cross-Stack EtherChannels support LACP and Static EtherChannels; PAgP is not supported across stack members. It is always suggested that you use LACP when possible, thus the NetApp recommendation is to use LACP as the protocol of choice when creating your multimode VIFs.
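For illustration, a cross-stack LACP EtherChannel bundling port 24 on stack members 1 and 3 might look like the following sketch; the interface and channel-group numbers are hypothetical:

```
3750-stack(config)# interface range GigabitEthernet 1/0/24, GigabitEthernet 3/0/24
3750-stack(config-if-range)# channel-group 10 mode active
```

The `mode active` keyword selects LACP; because the member interfaces live in different stack chassis (1 and 3), the resulting port channel spans the stack.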
NOTE: PAgP is a Cisco proprietary protocol which is not supported by NetApp or anyone else except Cisco.
Multi-Chassis EtherChannel (MEC) - This technology is used in the Catalyst 6500 Series when two Catalyst 6500s are configured as a VSS 1440 (Virtual Switching System). Many people often wonder what the 1440 stands for: it is the system bandwidth scalability of 1.44 Tbps (terabits per second).
The VSS is a feature introduced with the Supervisor 720-10G and IOS release 12.2(33)SXH, made generally available in August of 2007. The VSS system is comprised of two Catalyst 6500 chassis, each with a Supervisor 720-10G and only WS-X6700 Series line cards.
The two chassis are interconnected by 10Gbps connections at the Supervisor Engine and optionally through the 8-port 10G line card (WS-X6708-10G-3C/3CXL). This interconnection is referred to as the Virtual Switch Link (VSL). The VSL enables the two chassis to be clustered together into a single logical node.
Once the system has been clustered, a supervisor in one chassis is elected as the active supervisor and the supervisor in the other chassis becomes the standby. If your Catalyst 6500s are 9-slot 6509 chassis, your configuration becomes an 18-slot 6500, as the system is configured as one logical system. Interfaces are addressed similarly to the Catalyst 3750 stack. The following interface command would navigate you to 10Gbps port 1 on module 5 in chassis 2.
interface TenGigabitEthernet 2/5/1
With the VSS we are typically dealing with very high speed 10Gbps interfaces. If you are using 10Gbps interfaces in an 802.3ad LACP or Static EtherChannel, there is a need to modify the load-balancing mechanism, which rebuilds the hash index when a physical port is added to or removed (fails) from a Multi-Chassis EtherChannel. This modification is required because of the large amount of traffic which can be transmitted on a 10Gbps interface in a brief period of time.
In the event of a port being added to or removed (failed) from a Multi-Chassis EtherChannel, the load value is reset on all ports. A new load value is distributed to all ports in the Multi-Chassis EtherChannel and reprogrammed into the port ASIC for each port. This brief ASIC update causes packet loss for 200-300 msec. That brief timeframe is acceptable for slower 1Gbps interfaces but sometimes problematic for high speed 10Gbps interfaces because of the large volume of data in transit during that period.
This scenario led Cisco to develop an enhanced load distribution mechanism such that when ports are added to or removed from a Multi-Chassis EtherChannel, the load result does not need to be reset on existing member ports.
To enable this new hash distribution algorithm on a VSS, use one of the following options.
Option 1: For all EtherChannels in the VSS
vss(config)# port-channel hash-distribution adaptive
Option 2: For an individual EtherChannel in the VSS
vss(config)# interface port-channel (number)
vss(config-if)# port-channel port hash-distribution adaptive
Multi-Chassis EtherChannels support LACP, Static EtherChannels and PAgP. It is always suggested that you use LACP when possible, thus the NetApp recommendation is to use LACP as the protocol of choice when creating your multimode VIFs.
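As a hedged sketch, a Multi-Chassis EtherChannel on a VSS is configured like any other port channel; the chassis diversity comes purely from which member interfaces you bundle. Here 10Gbps port 1 on module 5 of each chassis is bundled into an LACP channel (the interface and channel-group numbers are hypothetical):

```
vss(config)# interface range TenGigabitEthernet 1/5/1, TenGigabitEthernet 2/5/1
vss(config-if-range)# channel-group 20 mode active
```

Because the VSS presents both chassis as one logical switch, no special cross-chassis keywords are needed.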
NOTE: PAgP is a Cisco proprietary protocol which is not supported by NetApp or anyone else except Cisco.
Virtual Port Channels (vPC) - This technology is specific to Cisco’s Nexus Series of switches. The vPC feature was initially introduced on the Nexus 7000, and in August of this year Cisco made the feature available on the Nexus 5000 and Nexus 2000 Series switches. A virtual port channel (vPC) allows links that are physically connected to two different Cisco Nexus Series switches to appear as a single EtherChannel to a third device. That third device can be any switch, server or other network device which supports 802.3ad LACP or Static EtherChannels. NetApp storage controllers fall into this category and are fully supported with Nexus Virtual Port Channels (vPC).
Virtual Port Channels (vPC) are unique in the engineering technique used to enable the technology. The Virtual Port Channel (vPC) feature actually uses the concept of domains and peering links between devices instead of system clustering.
Once a domain and peer link are established, you configure port channels the same way you have always configured Cisco port channels, with one exception: a line is added under the virtual interface assigning the port channel a vPC number.
interface port-channel (number)
vpc (number)
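To put the pieces together, a minimal vPC setup on one Nexus switch might look roughly like the following sketch; the domain number, keepalive peer address and port-channel numbers are hypothetical, and a mirrored configuration is required on the peer switch:

```
nexus(config)# feature vpc
nexus(config)# vpc domain 10
nexus(config-vpc-domain)# peer-keepalive destination 10.0.0.2
nexus(config)# interface port-channel 1
nexus(config-if)# vpc peer-link
nexus(config)# interface port-channel 30
nexus(config-if)# vpc 30
```

Port-channel 1 carries the peering traffic between the two switches, while port-channel 30 is the downstream channel the NetApp controller (or any 802.3ad device) sees as a single EtherChannel.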
vPCs support LACP or Static EtherChannels; there is no support for PAgP. It is always suggested that you use LACP when possible, thus the NetApp recommendation is to use LACP as the protocol of choice when creating your multimode VIFs.
Summary - All of these technologies are features which the network manufacturers have designed to deliver 802.3ad LACP and Static EtherChannels with diverse physical switch termination. The engineering technique is unique to each platform, but the function delivered to a switch, server or appliance is the same as if the EtherChannel were terminated in a single switch.
NetApp multimode VIFs support either 802.3ad LACP or Static EtherChannels, and thus these technologies' compliance with those standards ensures their compatibility and support by NetApp.
An upcoming post will provide detailed Nexus configuration steps for enabling vPCs. However, it is important to close with the NetApp-specific configuration required for any of the above technologies.
Here is the only configuration required on a NetApp controller to support the above technologies.
If your choice of EtherChannel technologies above causes you to enable LACP, then your NetApp configuration should be as follows.
vif create lacp lacpvif -b ip e0a e0b e0c e0d
If your choice of EtherChannel technologies above causes you to enable Static EtherChannels, then your NetApp configuration will be as follows.
vif create multi multivif -b ip e0a e0b e0c e0d
There is nothing special required in the configuration of any NetApp device to support EtherChannels which span multiple physical chassis.
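As a complete (hypothetical) example, bringing up and verifying an LACP multimode VIF on a controller might look like the following; the IP address is an assumption, and the vif create line would also be added to /etc/rc to persist across reboots:

```
netapp> vif create lacp lacpvif -b ip e0a e0b e0c e0d
netapp> ifconfig lacpvif 192.168.1.50 netmask 255.255.255.0 up
netapp> vif status lacpvif
```

The vif status output shows each member link's state, which is a quick way to confirm the switch side negotiated LACP on all ports.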
I will close by saying that the vPC feature in the Nexus 7000 is a key network feature in NetApp's cloud-based multi-tenancy solution, and vPCs are presently in production with many NetApp customers and in NetApp's own internal network infrastructure. Multi-Chassis EtherChannels and Cross-Stack EtherChannels have been referenced in several NetApp technical reports and have been in production for several years with NetApp storage.
I hope this helps in your Ethernet Storage efforts,