post4picture1.png

 

If you have read any of the previous articles on this site, you know the intent is to dig through all of the marketecture and slideware (my terms) and tell you what things are, how they work, and how you might effectively use them.  The articles focus exclusively on Ethernet Storage topics.  My history in the industry has been primarily with Cisco, but I have worked with other networking products as well.  Most of the posts on this site will focus on configuration with Cisco solutions, but if a configuration example I provide is intended to be used with another product, I hope to give you enough information to understand how that might work.  If you still have questions, fire off an email to me and I will be happy to respond.

 

Back to our topic today.  There are several terms out there for building an EtherChannel which spans multiple switches.  New terms pop up, and immediately everyone wonders whether they are supported by NetApp.  The very short answer is that all EtherChannels which are compliant with 802.3ad LACP and/or Static EtherChannels are 100% supported by NetApp.  We write our technologies to work with the standard, not with a product packaged by any one company.  In all of today's examples we are going to focus on multiple-switch EtherChannel technologies from Cisco, but any EtherChannel technology from any company which conforms to the statement in bold is supported by NetApp for multimode VIF data-serving workloads.

 

NetApp only certifies Ethernet switches for its ONTAP GX and ONTAP 8 Cluster-Mode solutions.  If you happen to be running either of these ONTAP releases, then the statement of support is limited to the switching platforms which have been certified for those releases.

 

If you read The MultiMode VIF Survival Guide, you learned how EtherChannels work and how the technology started out as a switch-to-switch or router connectivity solution.

 

The technology evolved to support other devices like servers and appliances.  The beginning days of EtherChannel were great but still left us wanting more.   The primary areas of needed improvement were the following.

 

1.) Better load-balancing algorithms

2.) Switch Diversity or Switch Spanning Support for active load-balancing and failover purposes.

 

The load-balancing algorithms have evolved over the years to the point where, if you use EtherChannels, we can leverage many different features to exploit load-balancing functions and best meet your performance requirements.  In an upcoming post we will discuss a new load-balancing algorithm available in Data ONTAP 7.3.2.  Stay tuned for that one; I think you will be interested in it.

 

Switch diversity is our topic today.   The technologies mentioned at the start of the article are intended to address this area of needed functionality.

 

The problem was that when we built an EtherChannel we were required to terminate all physical links into a single ethernet switch.  If we wanted that EtherChannel to be terminated into another ethernet switch, then we were required to build a second EtherChannel and establish some means of active and passive function between these two channels.

 

NetApp’s answer to this was the single mode VIF.   We allow you to take two EtherChannels and roll them into a Second Level Virtual Interface, which establishes one channel as active and one as passive.  If all links in the active EtherChannel stop functioning, the single mode VIF will activate the standby EtherChannel without any service interruption.   The fundamental problem with this is that those standby interfaces are passive and not in use.  A popular show in the 90s was Home Improvement and Tim the Tool-man Taylor always used to say we need “MORE POWER”.   In many networking minds, it is a waste to have 1Gbps or 10Gbps interfaces just waiting to be used on the chance a failure occurs.  
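
To make that active/passive layout concrete, here is a minimal sketch of the classic configuration described above; the VIF and interface names are hypothetical.  Two EtherChannels, one to each switch, are rolled into a second level single mode VIF which favors the channel into the first switch.

vif create multi vif-sw1 -b ip e0a e0b
vif create multi vif-sw2 -b ip e0c e0d
vif create single topvif vif-sw1 vif-sw2
vif favor vif-sw1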

 

The reason we had to do this is the way EtherChannels work.  The physical interfaces which are bonded to create the EtherChannel share a virtual MAC address.  That virtual MAC address exists on both endpoints.  If we are referring to a NetApp-to-switch connection, then there is a virtual MAC for the VIF interface on the NetApp and a virtual MAC address for the switch.  LACP actually uses an additional identifier, the LACP system-id.  You must be capable of sharing the virtual MAC address and/or LACP system-id between two independent chassis for a multiple-switch EtherChannel to function.
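
If you are curious what system-id the switch side is advertising, most Cisco platforms will show it to you; this is just a quick sketch, and the output format varies by platform and release.

switch# show lacp sys-id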

 

To accommodate diverse-chassis EtherChannels, manufacturers simply had an engineering problem to solve: they had to figure out how to get the system-id and the virtual MAC address over to the remote chassis and ensure that those IDs can be actively used by both sides.  Solving that problem enabled the diverse-chassis EtherChannel feature.  In almost all scenarios the answer is to cluster the physical switches into one logical switch, but recently a new concept has been introduced which we will cover later in the article.

 

Each of these technologies leverages a different engineering technique to get the virtual MAC address and LACP system-id to the other chassis.

 

These technologies from Cisco are:

 

Cross-Stack EtherChannels

Multi-Chassis EtherChannels (MEC)

Virtual Port Channels (vPC)

 

Now why all of the different names?  The truth is in the first statement: they each use different engineering techniques that are specific to a particular product family.  Each accomplishes the same functionality from the perspective of a NetApp controller; they are simply named differently because of the different architectures and solutions they represent.

 

So let's go through them.

 

 

Cross-Stack EtherChannels   -  This refers to the Catalyst 3750 line of switches.  The Catalyst 3750 uses StackWise technology which is enabled by connecting multiple switches in “the stack” together through a high bandwidth stacking port and cable engineered specifically for this function.  

 

This technology effectively clusters the switches in the stack together and supports a maximum of 9 units in a stack.  The stacked switches establish a cluster by electing an administratively defined stack master, and if the stack master fails a new stack master is elected.  Configurations of all switches can be modified and saved from the master switch.  All configurations are synchronized between all switches in the stack based on the master's configuration.
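
As a quick sketch (the output columns vary by IOS release), you can see which member currently holds the master role from any switch in the stack:

switch# show switch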

 

The configuration in a stack of Catalyst 3750s uses the following logic.  Let's assume we have four switches in our stack, as seen in the picture here.

 

post4picture2.png

 

Each switch within the stack is assigned a chassis number.  Because Catalyst 3750 Series switches are fixed-configuration switches, they contain only one module, which is designated by the number zero.  Each port is numbered starting from 1 through the total number of ports in that chassis.

 

If you were to modify the settings of Gigabit Ethernet port 24 on switch 3, you would enter the following command.

 

interface GigabitEthernet 3/0/24

 

The 3 designates the chassis number in the stack.

The 0 designates the module, of which there is only one, so the number is zero.

The 24 designates the port.

 

NOTE:  The first module in most Cisco devices is designated by the number 0.  The first port in a Cisco switch is designated by the number 1.  The first port in a Cisco router is designated by a zero, unless it is a VLAN interface, which always starts with 1.  Did you get all that?  There will be a test.

 

The virtual MAC address and LACP system-id used for Cross-Stack EtherChannels are derived from the MAC address of the stack master.  If the stack master fails and a new stack master is elected, a new stack MAC address and LACP system-id will be used.

 

This new MAC address and LACP system-id will cause the EtherChannel(s) to flap.  You can prevent this from occurring by entering the following command.

 

stack-mac persistent timer 0

 

This command forces the new stack master to maintain the old master's MAC address and LACP system-id.

 

Cross-Stack EtherChannels support LACP and Static EtherChannels; PAgP is not supported across stack members.  It is always suggested that you use LACP when possible, thus the NetApp recommendation is to use LACP as the protocol of choice when creating your multimode VIFs.
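
Following that recommendation, a minimal cross-stack LACP sketch looks something like this; the port and channel-group numbers are hypothetical, and the key point is simply that the member ports live on different stack members.

switch(config)# interface range GigabitEthernet 1/0/24, GigabitEthernet 3/0/24
switch(config-if-range)# channel-protocol lacp
switch(config-if-range)# channel-group 10 mode active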

 

NOTE: PAgP is a Cisco proprietary protocol which is not supported by NetApp or anyone else except Cisco.

 

 

Multi-Chassis EtherChannel (MEC)   -  This technology is used in the Catalyst 6500 Series when two Catalyst 6500s are configured as a VSS 1440 (Virtual Switching System).  Many people wonder what the 1440 stands for: it is the system bandwidth scalability of 1.440 Tbps (terabits per second).

 

post4picture3.png

 

The VSS is a feature introduced with the Supervisor 720-10G and IOS release 12.2(33)SXH, made generally available in August of 2007.  The VSS system is composed of two Catalyst 6500 chassis, each with a Supervisor 720-10G and only WS-X6700 Series line-cards.

 

The two chassis are interconnected by 10Gbps connections at the Supervisor Engine and optionally through the 8 Port 10G Linecard (WS-X6708-10G3C/XL).  This interconnection is referred to as the Virtual Switch Link (VSL).  The VSL enables the two chassis to be clustered together into a single logical node.
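
Once the VSL is up, you can sanity-check the virtual switch state from either chassis; this is just a sketch, and the output varies by release.

vss# show switch virtual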

 

Once the system has been clustered, the supervisor in one chassis is elected as the active supervisor and the supervisor in the other chassis becomes the standby.  If your Catalyst 6500s are 9-slot 6509 chassis, your configuration becomes an 18-slot 6500, as the system is configured as one logical system.  Interface numbering includes the chassis, similar to how the Catalyst 3750 stack addresses each interface.  The following interface command would navigate you to 10Gbps port 1 on module 5 in chassis 2.

 

interface TenGigabitEthernet 2/5/1

 

With the VSS we are typically dealing with very high speed 10Gbps interfaces.  If you are using 10Gbps interfaces in an 802.3ad LACP or Static EtherChannel, there is a need to modify the load-balancing mechanism which rebuilds the hash index when a physical port is added to or removed (fails) from a Multi-Chassis EtherChannel.  This modification is required because of the large amount of traffic which can be transmitted on a 10Gbps interface in a brief period of time.

 

In the event of a port being added or removed (failed) from a Multi-Chassis EtherChannel the load value is reset on all ports.  A new load value is distributed to all ports in the Multi-Chassis EtherChannel and reprogrammed into the port ASIC for each port.  This brief ASIC update causes packet loss for 200-300 msec.  That brief timeframe is acceptable for slower speed 1Gbps interfaces but sometimes problematic for high speed 10Gbps interfaces because of the large volume of data in transit for that brief period of time. 

 

This scenario led to Cisco developing an enhanced load distribution mechanism such that when ports are added or removed to/from a Multi-Chassis EtherChannel, the load result does not need to be reset on existing member ports.

 

The method to enable this new hash distribution algorithm on a VSS is the following.

 

Option 1:  For all EtherChannels in the VSS

 

vss(config)# port-channel hash-distribution adaptive

 

Option 2: For an individual EtherChannel in the VSS

 

vss(config)# interface port-channel (number)

vss(config-if)# port-channel port hash-distribution adaptive

 

Multi-Chassis EtherChannels support LACP, Static EtherChannels and PAgP.  It is always suggested that you use LACP when possible, thus the NetApp recommendation is to use LACP as the protocol of choice when creating your multimode VIFs.

 

NOTE: PAgP is a Cisco proprietary protocol which is not supported by NetApp or anyone else except Cisco.

 

Virtual Port Channels (vPC)  -  This technology is specific to Cisco's Nexus series of switches.  The vPC feature was initially introduced on the Nexus 7000, and in August of this year Cisco made this feature available to the Nexus 5000 and Nexus 2000 series switches.  A virtual port channel (vPC) allows links that are physically connected to two different Cisco Nexus series switches to appear as a single EtherChannel to a third device.  That third device can be any switch, server, or other network device which supports 802.3ad LACP or Static EtherChannels.  NetApp storage controllers fall into this category and are fully supported with Nexus Virtual Port Channels (vPC).

 

 

post4picture4.png

Virtual Port Channels (vPC) are unique in the engineering technique used to enable the technology. The Virtual Port Channel (vPC) feature actually uses the concept of domains and peering links between devices instead of system clustering.  

 

Once a domain and peer link are established, you configure port channels the same way you have always configured Cisco port channels, with one exception: a line is added to the port channel interface stating that the port channel belongs to a vPC.

 

interface port-channel (number)

vpc (number)
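
For context, the surrounding pieces look roughly like this on each Nexus peer.  This is a minimal sketch with hypothetical domain, keepalive address, and port-channel numbers; the detailed walkthrough will come in the upcoming vPC post.

nexus(config)# feature vpc
nexus(config)# vpc domain 10
nexus(config-vpc-domain)# peer-keepalive destination 192.168.1.2
nexus(config-vpc-domain)# exit
nexus(config)# interface port-channel 100
nexus(config-if)# vpc peer-link
nexus(config-if)# exit
nexus(config)# interface port-channel 20
nexus(config-if)# vpc 20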

 

vPCs support LACP or Static EtherChannels; there is no support for PAgP.  It is always suggested that you use LACP when possible, thus the NetApp recommendation is to use LACP as the protocol of choice when creating your multimode VIFs.

 

Summary  -  All of these technologies are features which the network manufacturer has designed to deliver 802.3ad LACP and Static EtherChannels with diverse physical switch termination.  The engineering technique used is unique to each platform, but the function delivered to a switch, server, or appliance is the same as if the EtherChannel were terminated in a single switch.

 

NetApp multimode VIFs support either 802.3ad LACP or Static EtherChannels, and thus the various technologies' compliance with these standards ensures their compatibility and support by NetApp.

 

An upcoming post will provide detailed Nexus configuration steps for enabling vPCs.  However, it is important to close with the NetApp-specific configuration required for any of the above technologies.


The only configuration required on a NetApp controller to support any of the above technologies is the multimode VIF itself.

 

If your choice of EtherChannel technology above causes you to enable LACP, then your NetApp configuration should be as follows.

vif create lacp lacpvif -b ip e0a e0b e0c e0d

 

If your choice of EtherChannel technology above causes you to enable Static EtherChannels, then your NetApp configuration will be as follows.

vif create multi multivif -b ip e0a e0b e0c e0d

 

 

There is nothing special required in the configuration of any NetApp device to support EtherChannels which span multiple physical chassis.

 

I will close by saying that the vPC feature in the Nexus 7000 is a key network feature in NetApp's cloud-based multi-tenancy solution, and vPCs are presently in production with many NetApp customers and in NetApp's own internal network infrastructure.  Multi-Chassis EtherChannels and Cross-Stack EtherChannels have been referenced in several NetApp technical reports and have been in production for several years with NetApp storage.

 

 

I hope this helps in your Ethernet Storage efforts,

Trey

 

The Nexus 5000 has been shipping from Cisco and NetApp for some time.  NetApp sells the Nexus 5000 to customers who are deploying servers with unified I/O, supporting both Fibre Channel and traditional Ethernet workloads over one wire.  This one-wire concept is consistent with NetApp's heritage of Unified Storage.  More on this another time, but I encourage you to read the joint Cisco and NetApp white paper on The Simplified Data Center Architecture.

 

While NetApp is the first storage manufacturer in the industry to take advantage of native FCoE and all of the benefits which the Nexus 5000 architecture provides, we have many customers who are deploying the Nexus 5000 with traditional 10Gbps Ethernet workloads.  The Nexus 5000 runs NX-OS (the Nexus Operating System), and the new OS has some differences in its command set versus traditional IOS.

 

I get regular emails from customers asking how you enable a particular feature on the Nexus 5000.  One of our last posts was on the topic of Jumbo Frames and I provided some configuration guidance on how they are enabled.  I was reminded that many of today's new NetApp deployments are using Cisco Nexus and I should provide a quick update on how Jumbo Frames are enabled on this platform because it is a little different.

 

In the traditional IOS platforms you actually specify the MTU size on each port.  The Nexus is a little different in that you enable the policy at a global level. Once you have enabled this policy you are all set.

 

First, specify the "system jumbomtu" command.  The default is actually 9216, but if you have changed it for some reason and didn't realize it, I suggest resetting it just to make sure.

 

nexus(config)# system jumbomtu 9216

 

NOTE: The minimum is 2240, and many of you may wonder why.  I will provide some in-depth detail on this in an upcoming post, but 2240 bytes is large enough to carry a full-size Fibre Channel frame.  The minimum of 2240 is enforced so that any Fibre Channel expansion modules you put in the system actually work.

 

If you are running NX-OS 4.0 perform the following commands.

 

nexus(config)# policy-map jumbo

nexus(config-pmap)# class class-default

nexus(config-pmap-c)# mtu 9216

nexus(config-pmap-c)# exit

nexus(config-pmap)# exit

nexus(config)# system qos

nexus(config-system)# service-policy jumbo

 


If you are running NX-OS 4.1 perform the following commands.

 

nexus(config)# policy-map type network-qos jumbo

nexus(config-pmap-nq)# class type network-qos class-default

nexus(config-pmap-nq-c)# mtu 9216

nexus(config-pmap-nq-c)# exit

nexus(config-pmap-nq)# exit

nexus(config)# system qos

nexus(config-sys-qos)# service-policy type network-qos jumbo

 

 

That's it, you are done; jumbo frames are enabled.
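
If you want to double-check the result, the per-interface queuing output should report the 9216 MTU for the default class; this is just a sketch, the interface below is an arbitrary example, and the output format varies by NX-OS release.

nexus# show queuing interface ethernet 1/1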

 

I hope this helps,

Trey

 

Special Note:  This update was prompted by a Twitter post from @scott_lowe.  Scott was having problems after an upgrade and found that Cisco changed the configuration requirements for jumbo frames following the 4.1 release.  That prompted me to update this post, as I know it is a reference for many.  Scott also posts a blog at http://blog.scottlowe.org

In the most recent post we discussed ethernet frames in detail.  In that post I provided a diagram depicting a NetApp FAS controller configured with a VLAN trunk to an ethernet switch.  What was unique about this configuration was that the NetApp controller was supporting multiple VLANs, two of which had jumbo frames enabled and one using standard size frames.  An expanded version of that diagram is provided, and this post will discuss the configuration required on the NetApp controller and on a switch running Cisco IOS.

 

post3diagram1.jpg

 

If you read the last post you will see that we have filled out the diagram with some additional information and we have introduced a redundant connection for the NetApp controller.  It is important to note that this configuration is focusing on the configuration of a single NetApp controller and ethernet switch for simplicity purposes.  In your production infrastructure you would likely include a second storage controller and ethernet switch with redundant connections from each server.

 

The Server Configuration

Each server in this particular example is connected via 10Gbps Ethernet.  These 10Gbps Ethernet connections are exclusively IP, so it would be possible for these servers to communicate with our NetApp storage controller via iSCSI, CIFS, HTTP or NFS.  We will provide an FCoE example configuration in a future post, but it is important to note that FCoE does not run over IP; it runs over Ethernet.  More on that another time.

 

The NetApp Controller Configuration

This particular controller is one of our FAS3100 Series controllers.  These controllers include four PCIe expansion slots for I/O or intelligent cache (PAM) expansion.  We have chosen to use NetApp's X1008A-R6 dual port 10Gbps Ethernet module.  For the purposes of this example we will only be using one of the 10Gbps ports on the expansion card.  We will also be using both of the onboard 1Gbps Ethernet ports.  Our configuration will bond the two onboard 1Gbps ports into an LACP EtherChannel.  We will configure the system to favor the 10Gbps Ethernet link and only use the LACP EtherChannel in the event that the 10Gbps connection has a loss of link.  In NetApp documentation this is referred to as a Second Level VIF.  The first-level VIFs are the VIFs into which we layer the physical interfaces.  The second-level VIF is the virtual interface into which we layer those virtual interfaces.

 

 

post3diagram2.jpg

 

Controller RC File

# Manually Edited Controller RC File, by Trey Layton

hostname netapp01

 

vif create single ntapvif01 e1a

vif create lacp ntapvif02 -b ip e0a e0b

 

vif create single seclevelvif ntapvif01 ntapvif02

vif favor ntapvif01

 

vlan create seclevelvif 1 2 3

 

ifconfig seclevelvif-1 10.10.1.5 netmask 255.255.255.0 mtusize 9000

ifconfig seclevelvif-2 10.10.2.5 netmask 255.255.255.0 mtusize 9000

ifconfig seclevelvif-3 10.10.3.5 netmask 255.255.255.0 mtusize 1500

 

route add default 10.10.3.1 1

routed on

options dns.domainname demo.netapp.com

options dns.enable on

options nis.enable off

savecore

 

 

There are a few items to note about the configuration above. 

 

Second Level VIFs

If you research the second level VIF configurations in Data ONTAP documentation you will find the common use case for second level VIFs is when you have two multimode VIFs that you would like to use as active and passive between two physical switches.   While the documented example in ONTAP guides is a common use of second level VIFs I provided this configuration to demonstrate that you may also layer a single mode VIF into a second level VIF.  In this example we have actually layered one single mode vif and one dynamic multimode (LACP) VIF into the second level VIF. 

 

802.1q VLAN Trunking to NetApp

You will notice that nowhere in the configuration do we reference 802.1q; this is because NetApp only supports 802.1q VLAN trunking, thus any VLAN created is automatically going to use the 802.1q VLAN tagging feature.

 

The first step in enabling 802.1q is to create the VLANs that you wish to address on the controller.  The VLAN creation is local to the NetApp controller and is simply instructing the VIF to tag VLANs which are identified in the “vlan create command”.   You will notice that when we specify the VLAN create command we do so on the VIF we want these tagged frames to be transmitted out.  In our case it is our second level VIF as this is going to be our VLAN trunk.  The second level VIF feature will ensure that if the 10Gbps e1a interface goes down our traffic will float to the LACP interface and operations will continue.

 

The final piece for NetApp VLAN configuration is assigning the tag and IP address to the VIF.  You will notice that we have 3 ifconfig statements, each of which represent the configuration for each individual VLAN we are addressing on the controller.  The tag is created by immediately following the vif name with a hyphen and the VLAN number (ex: seclevelvif-10 (for VLAN 10)).

 

Default Route

A NetApp controller can only support one active default route.   This is actually common practice in systems as the default route is the last resort destination, if the address is not locally known.  When you attach a NetApp controller to a Network where the controller is addressed on multiple networks, you must consider how routing is working in the network.  In most cases what you will find is that you will have one network interface which you consider the publicly addressable interface for the controller.  This interface is usually the interface which is advertised in WINS or DNS as the target for any user to attach to the controller.  The other VLANs which are configured are typically purpose built VLANs for a dedicated NFS or iSCSI Network.   The intent for creating these isolated networks is to provide local direct access, in the data center for those protocols.   I mention this because when you specify your default route you want that default route to be on the network which you consider your publicly advertised interface.  If you require route definitions for the other VLANs addressed on the controller then you may configure static routes. Static routes will allow you to statically define a more direct path to the network destination.  Once you specify the static route that route will be used instead of the default route for that particular defined destination.  A future post will describe static routes in detail.
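
As a rough illustration (the destination subnet and gateway below are hypothetical, and the exact syntax can vary slightly between Data ONTAP releases), a static route entry added to the controller's /etc/rc would look much like the default route above:

route add 10.10.50.0/24 10.10.2.1 1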

 

 

The Cisco IOS Configuration

The configuration for the switch is actually quite similar to the configuration defined in the Multimode VIF Survival Guide.  The difference in this configuration will be the specification of a Jumbo Frame MTU and the conversion of the switch ports from a standard access port to a VLAN trunked interface when connecting to the NetApp controller.

 

interface TenGigabitEthernet 1/1

description VLAN 2 Server

switchport

switchport mode access

switchport access vlan 2

spanning-tree portfast

mtu 9216

 

interface TenGigabitEthernet 1/2

description VLAN 3 Server

switchport

switchport mode access

switchport access vlan 3   
spanning-tree portfast

 

interface TenGigabitEthernet 1/3

description VLAN 1 Server

switchport

switchport mode access

switchport access vlan 1

! NOTE: IOS defaults a port to VLAN 1 so this command is not necessary but placed here for simplicity.

spanning-tree portfast

mtu 9216

 

interface TenGigabitEthernet 1/4

description NetApp 10Gbps 802.1q Trunk

switchport

switchport trunk encapsulation dot1q

switchport trunk allowed vlan 1-3

switchport mode trunk

switchport nonegotiate

spanning-tree guard loop

spanning-tree portfast trunk

mtu 9216

 

interface Port-channel 1

description 2 Port LACP Channel to NetApp

switchport

switchport trunk encapsulation dot1q

switchport trunk allowed vlan 1-3

switchport mode trunk

switchport nonegotiate

spanning-tree guard loop

spanning-tree portfast trunk

mtu 9216

 

interface GigabitEthernet 2/1

description NetApp e0a 2x 1Gbps LACP 802.1q Trunk

switchport

switchport trunk encapsulation dot1q

switchport trunk allowed vlan 1-3

switchport mode trunk

switchport nonegotiate

spanning-tree guard loop

spanning-tree portfast trunk

channel-protocol lacp

channel-group 1 mode active

mtu 9216

 

interface GigabitEthernet 2/2

description NetApp e0b 2x 1Gbps LACP 802.1q Trunk

switchport

switchport trunk encapsulation dot1q

switchport trunk allowed vlan 1-3

switchport mode trunk

switchport nonegotiate

spanning-tree guard loop

spanning-tree portfast trunk

channel-protocol lacp

channel-group 1 mode active

mtu 9216

 

IOS Configuration Notes

The above configuration is very consistent with those that you find in the MultiMode VIF Survival Guide.  There are a few items that we want to point out specifically. 

 

switchport trunk allowed vlan 1-3 - This is an optional configuration parameter.  It is one that I always use: the 802.1q specification supports 4096 VLANs, and within your own network you likely have more VLANs in production than you are enabling on the NetApp controller.  This command instructs the Cisco switch to only allow traffic from VLANs 1-3 over this link.  This prevents broadcasts from the other VLANs in your network from consuming bandwidth on your connections to your storage array.  It should be noted that if you ever introduce a new VLAN to the NetApp, you must modify this parameter to ensure that VLAN is passed to the NetApp.

 

spanning-tree portfast trunk - Portfast is a feature used for non-infrastructure network devices to prevent the physical switch port from having to calculate spanning-tree when link is negotiated.  Spanning-tree calculations are done to prevent loops in the switching topology, and the process by which a port goes through spanning-tree will actually block traffic until the calculation is complete.  The time it takes to calculate spanning-tree ranges from a minimum of 30 seconds to as much as 2 minutes.  In a controller failover event you do not want spanning-tree to prevent access to the storage controller, so I advise you to always use this parameter when connecting devices that are not acting as switches or ethernet bridges.

 

In most cases you would simply type the command “spanning-tree portfast”.  However, the interfaces we have defined in this configuration are 802.1q VLAN trunks. In the past it was expected that a VLAN trunk was connected to another switch therefore there was not an ability to enable portfast for those connections.  Since it has become common practice for servers and other endpoints to use VLAN trunking to switches, Cisco introduced the portfast trunk command to accommodate fast activation of ports which were 802.1q trunks to non-switching platforms.

 

Summary

This completes an in-depth configuration example of using 802.1q VLAN trunking with mixed Jumbo and Standard Frames using second level VIFs.

 

I hope this helps in your deployment efforts,

Trey

Anatomy of an Ethernet Frame

Posted by treyl Sep 12, 2009

It has been a while since I have posted an update to the blog, and for that I apologize.  NetApp has been a very busy place for the last few months.  I want to thank the many readers of the “MultiMode VIF Survival Guide” because you made it the most visited page in the entire community for a period of time.  I have received many emails requesting different topics to cover in upcoming posts and will finally begin to get those posts published.  Please keep the comments and ideas coming, as I will work to cover the topics that are most important to you.

 

The topic for this post is pure, basic Ethernet with a sprinkle of Jumbo Frames.  The post is important because as our Ethernet Storage environments expand, certain technologies become critical for performance and functionality.   To enable some of these features there will be best practices to adhere to and without a basic understanding of Ethernet, some of those best practices won’t be completely appreciated.

 

The first area to cover is simply proper terminologies.  It is important to understand the difference between a Frame and a Packet. 

 

An Ethernet frame is a physical-layer communications transmission, composed of 6 fields which are assembled to transmit any higher layer protocol over an ethernet fabric.

 

An IP packet is a formatted unit of data which can be transmitted across numerous physical topologies including Ethernet, Serial, SONET and ATM.

 

The important thing to understand is that a frame always refers to the physical medium.  Larger-than-standard ethernet frames are therefore called “Jumbo Frames”, not “Jumbo Packets”.

 

A packet refers to a message transmission which is compiled in software and delivered to a physical medium so that it can be transported.  An example of this is the means by which you are viewing this post.  You may be reading this from your laptop at home, which is connected to your home network via wireless ethernet.  You then may have a DSL or cable modem which communicates via Ethernet on your home network and by cable or DSL through the interface that exits your home premises.  That connection is typically collapsed into some type of optical switch in your Internet carrier's network infrastructure, later transmitted through your carrier's network core, likely via high speed routers connected at OC-192, which is 10Gbps SONET.  Ultimately you arrive at a NetApp data center in either RTP or Sunnyvale, where your request traverses Ethernet to one of NetApp's community web servers.  The entire path of communications transmitted packets that were ultimately several HTTP GET requests to view the text you are now reading.

 

You may be reading this and saying, why be so specific?  The reason is simply communication. If a storage expert is to effectively collaborate with a network expert then the proper terminologies need to be used so the collaboration is effective. 

 

To discard this would be like a network engineer coming to you, the storage engineer, and stating that they need their network management application installed on a share which was Fibre Channel mounted to a RAID set of 15K RPM SATA disks.

 

If you received such a request as a storage engineer you would look at the person asking and quickly assess that the person asking for this had no clue about what they were asking for.

 

So, the lesson here is don’t go talk to a network guy about needing a network with Jumbo Packets and routed IP frames.  -- Those things don’t exist...

 

In the spirit of complete communication, there are a few different types of Ethernet frames; these different frame types were used when multi-protocol networks were common in the 80s and 90s.  Many of you remember the days of Novell networks and the IPX protocol, or DEC systems using DECnet.  The official IEEE 802.3 Ethernet standard frame is depicted below, and it is the frame type that our Ethernet Storage networks will use.

 

diagram1.jpg

 

An IEEE 802.3 Ethernet frame is composed of 6 fields, which are described in detail below.

 

Preamble

The 802.3 specification divides the preamble into two sections: a 56 bit (7 byte) field plus a 1 byte field called the start of frame delimiter (SFD).  The preamble is not typically used in modern ethernet networks, as its function is to provide signal startup time for 10Mbps ethernet signals.  Modern 100Mbps, 1000Mbps or 10Gbps ethernet uses constant signaling, which avoids the need for the preamble.

 

The preamble is preserved at today's Ethernet transmission speeds to avoid making any changes to the Ethernet frame format.  The preamble, while listed as a part of the actual ethernet frame, is technically not part of the frame, as it is added to the front of the frame by the NIC just before the frame is put on the wire.  The start of frame delimiter is a 1 byte field that serves as a signal to the NIC that the data immediately following the SFD is the beginning of the actual frame.

 

Destination and Source Address

These two sections of the frame are likely the most commonly understood in that they contain the MAC address for the source “transmitting system” and the destination “target system”.  In the Multimode VIF survival guide we discuss how these addresses are modified as a frame traverses a network.

 

Type / Length

The type / length field is used to identify what higher-level network protocol is being carried in the frame (example: TCP/IP)

 

Data / Payload

The data / payload field is what we typically consider most important, as it is the data we are transmitting.  The diagram specifies a range between a minimum of 46 bytes and a maximum of 1500 bytes.  A standard ethernet frame has a maximum payload of 1500 bytes; frames carrying a payload larger than 1500 bytes are considered Jumbo Frames.

 

Frame Check Sequence (CRC)

The end of the frame contains a 32 bit field which is a cyclic redundancy checksum (CRC).  This is a mechanism to check the integrity of a frame upon arrival at its destination.  The CRC is generated by applying a polynomial to the bits which make up the frame at transmission.  This same polynomial is used at the receiving station to verify that the contents of the frame have not changed in transmission.

 

Standard Frame Summary

As you can see from the diagram of a standard frame, the maximum size is 1518 bytes, as this number includes all fields in the frame.  Another important note is that we have also specified a minimum payload size of 46 bytes.  The minimum 46 bytes plus 18 bytes of signaling equals a total 64 byte frame.  A 64 byte frame is what most network equipment manufacturers will use as a benchmarking frame.  In many specifications you will see that a router or switch can forward 10 million frames per second; this is based on the 64 byte frame size.  This size is used because the smaller the frame size, the less latency required to forward the frame.  A very important item to note is that as frame sizes increase, so does latency in most switching platforms.  One exception to this is the newest ethernet hardware introduced primarily for the data center networking marketplace.  These data center class switches, like the Cisco Nexus 5000 series, can actually forward any frame size with no increase in latency.

 

Introduction of the 802.1q VLAN tag

VLAN tagging has traditionally been used to interconnect infrastructure routers and switches.  Through these VLAN tagged connections multiple VLANs can be transported across a single logical or physical link.  In recent years it has become very popular to connect end-points using VLAN trunking and have an endpoint exist in multiple VLANs.  An example of this includes a NetApp FAS controller directly connected to an ethernet switch, using VLAN tagging to isolate different protocols or users to specific VLANs.  When VLAN tagging is introduced, the size of the Ethernet frame expands to accommodate the VLAN tag.  The following diagram depicts the ethernet frame used in the previous example, except that it now depicts the presence of a VLAN tag.

 

diagram2.jpg

 

The 802.1q VLAN tag introduces a 4 byte tag which follows the source address field in the frame.  This tag is separated into two 2 byte segments.  The first 2 byte segment is the Tag Protocol Identifier (TPID).  The value in this 2 byte field is always 0x8100, which simply identifies the frame as an IEEE 802.1q tagged frame.  The second 2 byte segment is the Tag Control Information (TCI).  The tag control information field is further segmented: the first 3 bits are used to carry priority information based on the values defined in the 802.1p standard.  This field ensures that 802.1p priorities can be extended to VLANs by providing space within the tag to indicate traffic priorities.  The remaining bits in the TCI contain the VLAN identifier (VID), which uniquely identifies the VLAN of which the frame is a member.

 

Introducing the VLAN tag expands the maximum standard frame size to 1522 bytes.

 

Bringing It Together

I provided this specific detail around the standard frame size so that I may begin to explain the problems with casual or loose definitions of frame sizes that you find in documentation from many different sources.  Many documents and some engineers will refer to the maximum standard frame size as being 1500 bytes.  What is missing from this description is that the document or the person is actually describing the maximum standard frame payload, not the full frame.

 

Another item which fosters confusion is the operating systems which we use to operate our laptops, desktops, servers, virtualization platforms and, yes, storage arrays.  When you are asked to specify an MTU, or Maximum Transmission Unit size, the operating system is always asking you to specify the payload.  This is not stated explicitly, but if you perform a packet capture you will find that operating systems simply append the signaling information to the payload you specify.

 

I have provided the following example by doing a screen capture of the ethernet configuration on my MacBook Pro.

 

diagram3.jpg

 

As you can see the MTU is specified as 1500 bytes.  The Snow Leopard operating system will append the additional 18 bytes of signaling information for each frame which is transmitted from the system.

 

Let's take a look at a configuration of a NetApp FAS controller.  As you can see from the picture below, we have a multimode vif named netapp01-mm that is configured with a standard MTU of 1500.  Data ONTAP appends the 18 bytes of signaling information for each frame transmitted from the Storage Controller.  If we were to enable VLAN trunking on this multimode interface, then Data ONTAP would append 18 bytes plus 4 bytes for the VLAN tag for each transmission from the 802.1q VLAN tagged interface.

 

diagram4.jpg

 

Why it Matters

In short if you are using a standard frame network all of the previous information is just a little exercise for some brain cells. If you are using Jumbo Frames it is essential that you understand this so that you can construct a network to transport Jumbo Frames optimally.

 

So, let's start with a new diagram depicting our Jumbo Frame, without VLAN tagging.

 

jumbo.png

 

 

 

Now let's show a diagram of a Jumbo Frame with VLAN tagging.

 

jumbotag.png

 

 

As you can see our frame size for a Jumbo Frame is 9018 bytes and when we introduce a VLAN tag it is 9022 bytes.  Let me remind you of a common instruction given when you are constructing a Jumbo Frame enabled network. 

 

     “With Jumbo Frames you must standardize everything to the same frame size.” 

 

This is inaccurate advice, but that is initially hard to understand, so let's attempt to follow the advice and find the flaw.

 

To begin constructing this Jumbo Frame enabled infrastructure we would likely do the following.

 

1.) Configure Storage Array with a 9000 byte MTU

2.) Configure Server with a 9000 byte MTU

3.) Configure Ethernet Switch with a 9000 byte MTU

4.) Configure Router Interface (if required) with a 9000 byte MTU

 

In the examples we provided regarding operating systems, we mentioned that your only configuration option was to configure the payload.  We didn't discuss network equipment yet, because network equipment requires you to specify the actual maximum transmission unit size.  That maximum for network equipment is inclusive of all signaling and payload.

 

Let me be extra clear that I am primarily referencing data center class equipment from manufacturers like Cisco Systems.  There are equipment manufacturers which do operate similarly to the operating system examples provided, but this type of equipment is generally not data center class networking gear.

 

If you do not specify the proper maximum MTU setting to accommodate the payload and signaling on your network infrastructure you will produce performance problems.  Those problems present themselves in the following scenario.   A server will transmit a 9000 byte payload frame, upon transmission that frame will have the 18 bytes of signaling information appended to it.   The total frame size transmitted to your network gear in a non VLAN trunked transmission is 9018 bytes.  If your network gear is not configured to accept a 9018 byte frame it will discard the transmission and if MTU discovery is enabled the systems will negotiate the payload size of each future frame transmission to a smaller value.  If MTU discovery is not possible then the transmission will simply fail.  The process of negotiating a smaller payload to accommodate signaling increases latency and system CPU utilization as the system is forced to continually negotiate smaller payload sizes.   
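
A quick way to confirm that a path really carries the full frame end to end is to send a maximum-size, non-fragmentable ping from a host.  The sketch below assumes a Linux host and a hypothetical target address; 8972 bytes of ICMP payload plus 28 bytes of ICMP and IP headers equals the 9000 byte payload of our jumbo frame.

ping -M do -s 8972 10.10.1.5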

 

Understanding this problem we now can establish some configuration specific guidelines to help you avoid these problems and build a functional and efficient Jumbo Frame enabled infrastructure.

 

The following diagram illustrates the configuration requirements for a network infrastructure which has enabled Jumbo Frames for two server VLANs, while still supporting standard frames for a legacy server VLAN and connecting a NetApp FAS controller via a VLAN Trunk.

 

 

diagram7.jpg

 

 

You will note that I have drawn arrows to switch ports and the MTU sizes that should be set on those ports.  The standard MTU interface 1/2 can be left at defaults, as no change to the MTU is required.  However, we are showing VLAN 1 and VLAN 2 as being Jumbo Frame enabled VLANs.  The switch ports which connect to these servers must be set to a minimum of 9018 bytes to accommodate the Jumbo Frame payload plus signaling.  Also note the interface which is associated with our NetApp FAS controller.  The NetApp FAS controller is connected via an 802.1q VLAN trunk, and that trunk is transporting both Jumbo Frames for VLANs 1-2 and Standard Frames for VLAN 3.  We have set the interface to support an MTU of 9022, which will accommodate the VLAN tagged Jumbo Frames being transmitted from the storage controller.  What is important to understand is that the setting on a switch for MTU, or Maximum Transmission Unit size, is literally what the name describes.  It specifies the "maximum"; therefore, by specifying the 9022 MTU, that setting will also accommodate the required 1522 byte VLAN tagged standard frame size for VLAN 3.

 

The previous example is meant to provide an illustration but the best practice is to set the infrastructure switches to the largest frame size the equipment will support.  If you research different Cisco switching models and different line-cards for chassis based ethernet switches you will find varying maximum sizes.  The data center class equipment which you should be using to connect Ethernet Storage infrastructures will generally accommodate a maximum size of 9216 bytes. 

 

With that you can use the following rule to live by when constructing Jumbo Frame enabled networks. 

 

a.) Standardize the payload MTU of all devices which are transmitting Jumbo Frames

b.) Use the maximum MTU setting on devices which transport Jumbo Frames

c.) Ensure that the maximum MTU setting on your devices which transport Jumbo Frames will accommodate your standardized payload size

 

This very simple guidance will ensure that you have optimized your network to support Jumbo Frames. 

 

Hope this gives you some basics on ethernet that helps you in the deployment of your next high performance Ethernet Storage infrastructure.

 

 

Thanks, Trey

Multimode VIF Survival Guide

Posted by treyl Apr 4, 2009

This is my first post to this NetApp community blog, and as this month marks my 1 year anniversary at NetApp I felt compelled to post something I feel will be valuable to many, as it is the most frequent topic of conversation with clients and colleagues.  I have spent nearly all of my technology career as the network guy.  A few years ago I ventured out on a path to be a virtualization and storage guy.  What has been amazing is that the deeper I get into the latter (virtualization and storage), the more I call upon the skills of the former (networking).  My networking experience has afforded me a clear understanding of the correct ways to construct a high performance Ethernet fabric to support Ethernet Storage.  A technology that nearly everyone has deployed to increase performance on the Ethernet fabric is MultiMode VIFs.

 

A MultiMode VIF is NetApp-ese for EtherChannel or port channeling.  Quite simply, it is the bonding of physical links into a virtual link, where the virtual link utilizes an algorithm to distribute or load-balance traffic across the physical links.

 

The first subject to tackle is NetApp terminology versus networking industry terminology.  At NetApp we tend to generalize MultiMode VIF into an all-encompassing description of channeling.  Some of us refer to MultiMode VIFs as "trunked ports".  This is an inaccurate description, yet I understand why the term is used.  When referring to a "trunked interface", the networking industry thinks of that as a VLAN trunked interface, utilizing a technology like 802.1q.  Therefore, when I refer to the technology enabled by MultiMode VIFs I will always call the physical links EtherChanneled or channeled interfaces.

 

The next thing we tend to do is never reference the other type of MultiMode VIF, that is, the "Dynamic MultiMode VIF".  Many of you reading that term for the first time are going to ask what I am talking about.  Take a look at our Data ONTAP Network Management Guide for virtually any release and browse the section on MultiMode VIFs.  You will see two distinct types of MultiMode VIFs: the Static MultiMode VIF and the Dynamic MultiMode VIF.  These two types are key differentiators, and knowing what each is will help you in those conversations with the networking team in your organization.

 

Static MultiMode VIF -  A static MultiMode VIF is a static EtherChannel.  A static channel is quite simply the static definition of physical links into the channel.  There is no negotiation or auto-detection of the physical port's status or ability to be channeled.  The interfaces are simply forced to channel.

 

The Cisco command to enable a static etherchannel is channel-group (#) mode on

The NetApp command to enable a static MultiMode VIF is vif create multi

 

** Covered in detail in the templates below

 

Dynamic MultiMode VIF - A Dynamic MultiMode VIF is an LACP EtherChannel.  LACP is short for Link Aggregation Control Protocol and is the IEEE 802.3ad standard for port channeling.  LACP provides a means to exchange PDUs between devices which are channeled together; in the case of the present topic, that is a NetApp controller and a Cisco switch.  The only difference between the two types is the use of PDUs to alert the remote partner of interface status.  This is used when one partner (let's say a switch) decides it is going to remove one physical interface from the channel for reasons other than the link being physically down.  If the switch removes a physical interface from the channel, with LACP a transmission is sent from the switch to the partner (NetApp controller), providing notification that the link was removed.  This allows the controller to respond and also remove that link from the channel, thus not creating a situation where the controller attempts to continue to use that link, causing certain transmissions to be lost.  Static EtherChannels do not have this ability, and if a situation like this occurs, the only means to remove the link from the channel is via configuration change, cable removal or administrative shutdown of the port.

 


I provide the above distinctions because I find that many often interchange the terms Static MultiMode and LACP.  This can produce problems in the configuration of the network to support the controllers, so try to stick with the terminology above.

 

The next thing I often see is a conversation around which technology, LACP VIFs or Static VIFs, provides better load-balancing.  The truth is they are simply the same; there is no performance benefit provided by one over the other.  This generally leads to the topic of load-balancing, as we often find that not everyone understands the mechanism provided by the current load-balancing algorithms for LACP and Static VIFs.  There are limitations to the technology, and understanding those limitations is key to getting the most out of the deployment when utilizing them.

 

The Cisco commands to enable an LACP etherchannel are channel-protocol lacp and channel-group (#) mode active

The NetApp command to enable a dynamic MultiMode VIF is vif create lacp

 

** Covered in detail in the templates below

 

 

Load-balancing in VIFs can utilize one of three algorithms: IP, MAC or round robin.

 

 

 

Round Robin

 

I am personally not a fan of round robin load-balancing, as I used this algorithm in the early 90s when a majority of networking manufacturers were first introducing EtherChannel based features.  This technology runs the risk of packets arriving out of order and has nearly been eliminated from most network manufacturers' equipment for that reason.  However, there are still deployments in production which utilize this feature, and they work without issue.  Round robin essentially alternates ethernet frames over the active links in the channel.  This provides the most even distribution of transmissions, but it can produce a situation where frame 1 is transmitted on link 1 and frame 2 is transmitted on link 2, and frame 2 arrives at the destination prior to frame 1 because of congestion experienced by frame 1 while in transit.  This would produce a condition where errors occur and the protocol and application would need to facilitate a recovery, which typically results in the frames being retransmitted.

 


Source and Destination Pairs

 

The load-balancing algorithms used in most NetApp MultiMode VIF deployments are detailed in the sections that follow, but one thing they have in common is that they calculate the interface to be used by executing an XOR algorithm on source and destination pairs.  The result of that calculation is then reduced modulo the number of physical links in the VIF, and the remainder is matched to one of the physical interfaces.  It is important to understand this, as many people assume that bonding 4 physical links together enables a speed equal to the sum of the links.  This is not true; the maximum speed that can be reached on an EtherChannel link is equal to the speed of one physical link in the channel, not the sum.  Take the example of a connection which contains 4 1Gbps physical links bonded into a MultiMode VIF.  It is often assumed that this would equal 4Gbps of bandwidth to the controller.  It actually equals 4 x 1Gbps links to the controller.  A single transmission (source and destination pair) can burst up to the speed of one of the physical links (1Gbps).  No single communication can exceed the 1Gbps speed.
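
If you are curious which physical link a given pair will hash to on the Cisco side, many Catalyst platforms provide a test command.  This is just a sketch; the port-channel number and addresses are hypothetical, and availability and syntax vary by platform and release.

switch# test etherchannel load-balance interface port-channel 1 ip 10.10.1.10 10.10.3.100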

 

The following sections will describe how the algorithms work.

 

NOTE: The algorithms defined herein are industry limitations and are the same no matter who the manufacturer is.  Cisco has implemented a few additional algorithms, but none get over the core limitation of not being able to exceed the speed of a given physical link in the channel.

 

MAC Load-Balancing

 

This is the least common algorithm utilized, because of conditions which make it likely that traffic distribution will be weighted heavily to a single link.  The MAC based algorithm makes an XOR calculation on the source and destination pair of MAC addresses in a communication.  The source would be the MAC address of the NIC on the host connecting to the NetApp controller.  The destination MAC address would be the MAC address of the VIF interface on the NetApp controller.  This algorithm works well if the hosts and NetApp controller reside on the same subnet or VLAN.  When hosts reside on a different subnet than the NetApp controller, we begin to expose the weakness in this algorithm.  To understand the weakness you must understand what happens to an Ethernet frame as it is routed through a network.

 

Let's say we want Host1 to connect to Controller1.

 

Host1's IP address is 10.10.1.10/24 (Host1's default router is 10.10.1.1)

Controller1's IP address is 10.10.3.100/24 (Controller1's default router is 10.10.3.1)

 

Above we have defined the host and controller on two separate subnets.  The only way they can communicate with each other is by going through a router.  In the case of the example above, default routers 10.10.1.1 and 10.10.3.1 are actually the same physical router; those addresses are simply two physical interfaces on the router.  The router's purpose is to connect networks and allow communication between subnets.

 

As Host1 transmits a frame destined for Controller1, it compiles a frame to its default router because it recognizes that 10.10.3.100 is an IP address not on its local network; therefore it forwards the frame to its default router so that it can be forwarded to that destination.

 

Host1 to Host1Router

 

-IP Source: Host1 (10.10.1.10)

-MAC Source: Host1

-IP Destination: VIFController1 (10.10.3.100)

-MAC Destination: Host1DefaultRouter

 

Host1Router Routing Packet to Controller1

 

-IP Source: Host1 (10.10.1.10)

-MAC Source: Controller1DefaultRouter

-IP Destination: VIFController1 (10.10.3.100)

-MAC Destination: VIFController1

 

NOTE:  The source and destination MAC addresses changed as the frame was forwarded through the network.  This is how routing works; as routers exist between source and destination, the MAC address can change multiple times.  How many times is not the concern; what matters is when the frame is forwarded onto the local segment of the controller.  The source MAC will always be the router and the destination MAC will always be the controller VIF.  If the source and destination pair is always the same, then you will always be load-balanced to one link.  To fully understand how this creates a problem, let's say that we have a 4 x 1Gbps EtherChannel on Controller1.  Let's also say that we have 50 other hosts on the same subnet as Host1.  The source and destination pair for Host1 to Controller1 is exactly the same as for every other host on Host1's network, as the source and destination MAC addresses will always be Controller1DefaultRouter and VIFController1.

 

 

IP Load-Balancing

 

IP Load-Balancing is the default for all NetApp MultiMode VIFs and is the most common type of MultiMode VIF in production today.  The algorithm is no different from the MAC algorithm defined above; the difference is that it uses the source and destination IP addresses.  If you go back through the example above, you will note that the source and destination IP addresses never change as the frame is routed, unlike the MAC addresses.  Because the IP addresses never change, you are far more likely to have many unique pairs, which results in a more even distribution of traffic across the physical links.

 

It is important to understand one final thing about source and destination IP pairs: only the last octet of each IP address is used in calculating the hash.  In our example, that means source 10.10.1.10 contributes only the 10 (its last octet) and destination 10.10.3.100 contributes only the 100.  Be aware of this when you deploy hosts on multiple subnets: hosts whose addresses share the same last octet will be transmitted on the same physical link.
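
Applying the same sketch to IP load-balancing, using only the last octet of each address as described above (the addresses are the ones from our example; the hash internals are simplified for illustration):

# Sketch of IP-based link selection using only the last octet of each
# address, as described above.  Addresses are the example addresses.

NUM_LINKS = 4

def last_octet(ip: str) -> int:
    return int(ip.split(".")[-1])

def select_link(src_ip: str, dst_ip: str) -> int:
    return (last_octet(src_ip) ^ last_octet(dst_ip)) % NUM_LINKS

print(select_link("10.10.1.10", "10.10.3.100"))   # Host1 -> Controller1
# Hosts on different subnets that share the same last octet hash to the
# same physical link when talking to the same controller address:
print(select_link("10.10.1.50", "10.10.3.100"))
print(select_link("10.10.2.50", "10.10.3.100"))   # same result as the line above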

 

 

IP Aliasing

 

Understanding load-balancing algorithms allows you, as an administrator, to exploit them to your benefit.  All NetApp VIFs and physical interfaces can have an alias placed on the interface, which is simply an additional IP address on the VIF itself.  I always advise customers to place a number of addresses (the VIF address plus aliases) equal to the number of physical links in the EtherChannel between the controller and the switch to which the controller is attached.  Therefore, if you have a 4 - 1Gbps MultiMode VIF between a controller and switch, place one address on the VIF and three aliases on that same VIF.

 

Simply placing the additional addresses will not, by itself, deliver the benefit.  You must also ensure that the hosts which mount data from the NetApp controllers actually use all of the addresses.  This can be achieved in a few different ways depending on the protocol being used for storage access; below are a few NFS examples.

 

Oracle NFS -  Oracle hosts should distribute their NFS mounts evenly across the available controller IP addresses.  If there are four NFS mounts, mount them via the four different IP addresses on the controller.  Each mount will then have a different source and destination pair, and the links between the host and controller will be used efficiently.

 

VMware NFS - ESX hosts should mount each NFS datastore via a different IP address on the NetApp controller.  It is perfectly fine to use a single VMkernel interface (the source address) as long as each datastore is mounted against a different controller IP address.  If you have more datastores than addresses, simply distribute the datastore mounts evenly across the controller's addresses, as sketched below.
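
A simple way to reason about either of the two examples above is round-robin assignment.  The sketch below uses hypothetical datastore names and the controller addresses from the templates later in this post; the same approach applies to Oracle NFS mounts.

# Round-robin assignment of NFS datastores (or Oracle NFS mounts) across the
# controller's VIF address and its aliases.  Datastore names are hypothetical;
# the addresses match the templates later in this post.

from itertools import cycle

controller_addresses = ["10.10.3.100", "10.10.3.101", "10.10.3.102", "10.10.3.103"]
datastores = [f"datastore{i}" for i in range(1, 9)]   # 8 hypothetical datastores

# Pair each datastore with the next controller address, wrapping around,
# so source/destination pairs are spread across all physical links.
assignment = dict(zip(datastores, cycle(controller_addresses)))
for ds, addr in assignment.items():
    print(f"mount {ds} via {addr}")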

 

Final note about aliases:  When administrators configure physical interfaces or VIFs on NetApp controllers, they typically partner those interfaces with the other controller's interfaces.  This ensures that when a controller fails, its interfaces move to the surviving controller.  Any time you place an alias on an interface whose underlying interface has been partnered, the aliases WILL travel to the partner controller on failover.  You do not partner the aliases themselves if the underlying interface has already been partnered.

 

 

Finally, the templates:

 

 

LACP - Dynamic MultiMode VIF

____________________________________

Filer RC File

 


#Manually Edited Filer RC file  3 March, 2009,  by Trey Layton

 

hostname filera

 

vif create lacp template-vif1 -b ip e0a e0b e0c e0d

 

ifconfig template-vif1 10.10.3.100 netmask 255.255.255.0 mtusize 1500 partner (partner-vif-name)
ifconfig template-vif1 alias 10.10.3.101 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.102 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.103 netmask 255.255.255.0

 

route add default 10.10.3.1 1
routed on
options dns.domainname template.netapp.com
options dns.enable on
options nis.enable off
savecore

 

_____________________________________

Cisco Configuration

 

 

!!!!!!     The following interface is the virtual interface for the EtherChannel.  It must be referenced
!!!!!!     by each physical interface in order to create the channel.

 


interface Port-channel 1
  description Virtual Interface for Etherchannel to filer
  switchport
  switchport mode access
  switchport nonegotiate
  spanning-tree guard loop
  spanning-tree portfast
!

 


!!!!!  The following are the physical interfaces in the channel.  The above is the virtual interface for the channel.
!!!!!  Each physical interface will reference the virtual interface.

 


interface GigabitEthernet 2/12
  description filer interface e0a
  switchport
  switchport mode access
  switchport nonegotiate
  flowcontrol receive on
  no cdp enable
  spanning-tree guard loop
  spanning-tree portfast
  channel-protocol lacp
  channel-group 1 mode active

 

!!!!!!
!!!!!!  The above channel-group command bonds the physical interface to the virtual interface
!!!!!!  created previously.  The keyword following the channel number is the mode; active is the mode used for LACP.
!!!!!!
!
interface GigabitEthernet 2/13
  description filer interface e0b
  switchport
  switchport mode access
  switchport nonegotiate
  flowcontrol receive on
  no cdp enable
  spanning-tree guard loop
  spanning-tree portfast
  channel-protocol lacp
  channel-group 1 mode active


!
interface GigabitEthernet 2/14
  description filer interface e0c
  switchport
  switchport mode access
  switchport nonegotiate
  flowcontrol receive on
  no cdp enable
  spanning-tree guard loop
  spanning-tree portfast
  channel-protocol lacp
  channel-group 1 mode active


!
interface GigabitEthernet 2/15
  description filer interface e0d
  switchport
  switchport mode access
  switchport nonegotiate
  flowcontrol receive on
  no cdp enable
  spanning-tree guard loop
  spanning-tree portfast
  channel-protocol lacp
  channel-group 1 mode active

 

 

 

 

Static EtherChannel - Static MultiMode VIF

____________________________________

Filer RC File

 

#Manually Edited Filer RC file  3 March, 2009,  by Trey Layton

 

hostname filera

 

vif create multi template-vif1 -b ip e0a e0b e0c e0d

 

ifconfig template-vif1 10.10.3.100 netmask 255.255.255.0 mtusize 1500 partner (partner-vif-name)
ifconfig template-vif1 alias 10.10.3.101 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.102 netmask 255.255.255.0
ifconfig template-vif1 alias 10.10.3.103 netmask 255.255.255.0

 

route add default 10.10.3.1 1
routed on
options dns.domainname template.netapp.com
options dns.enable on
options nis.enable off
savecore

_____________________________________

Cisco Configuration

 

!!!!!!     The following interface is the virtual interface for the EtherChannel.  It must be referenced
!!!!!!     by each physical interface in order to create the channel.

 


interface Port-channel 1
  description Virtual Interface for Etherchannel to filer
  switchport
  switchport mode access
  switchport nonegotiate
  spanning-tree guard loop
  spanning-tree portfast
!


interface GigabitEthernet 2/12
  description filer interface e0a
  switchport
  switchport mode access
  switchport nonegotiate
  flowcontrol receive on
  no cdp enable
  spanning-tree guard loop
  spanning-tree portfast
  channel-group 1 mode on

 

!!!!!!
!!!!!!  The above channel-group command bonds the physical interface to the virtual interface
!!!!!!  created previously.  The keyword following the channel number is the mode; on is the mode used for a static EtherChannel (no LACP negotiation).
!!!!!!
!
interface GigabitEthernet 2/13
  description filer interface e0b
  switchport
  switchport mode access
  switchport nonegotiate
  flowcontrol receive on
  no cdp enable
  spanning-tree guard loop
  spanning-tree portfast
  channel-group 1 mode on


!
interface GigabitEthernet 2/14
  description filer interface e0c
  switchport
  switchport mode access
  switchport nonegotiate
  flowcontrol receive on
  no cdp enable
  spanning-tree guard loop
  spanning-tree portfast
  channel-group 1 mode on


!
interface GigabitEthernet 2/15
  description filer interface e0d
  switchport
  switchport mode access
  switchport nonegotiate
  flowcontrol receive on
  no cdp enable
  spanning-tree guard loop
  spanning-tree portfast
  channel-group 1 mode on