I was asked not so long ago what the difference is between vPC and VSS. For the love of God, I could not even recall, off the top of my head, what VSS even is! Brain freeze, and a classic example of what happens in our constantly changing tech world when we do not work with specific technologies on a regular basis. We may know what a technology is, we may have worked with it and could work with it again, but we simply cannot get everything right off the top of our heads if we have not touched it in, say, four-odd months. Hence, since then I have been thinking of compiling the basics of vPC, VSS, vFC, etc. as they relate to data center design.
Next-Generation Data Centers
The next-generation data center, as professed by Cisco, provides the ability to use all links in the LAN topology by taking advantage of technologies such as virtual PortChannels (vPCs). vPCs enable full, cross-sectional bandwidth utilization among LAN switches, as well as between servers and LAN switches.
Cisco Unified Fabric:
What IS it? Unified Fabric is a data center network that supports both traditional LAN traffic and all types of storage traffic, including traditional non-IP-based protocols such as Fibre Channel and IBM Fibre Connection (FICON), tying everything together with a single OS (Cisco NX-OS Software), a single management GUI, and full interoperability between the Ethernet and non-Ethernet portions of the network.
Unified Fabric creates high-performance, low-latency, and highly available networks. These networks serve diverse data center needs, including the lossless requirements for block-level storage traffic. A Unified Fabric network carries multiprotocol traffic to connect storage (Fibre Channel, FCoE, Small Computer System Interface over IP [iSCSI], and network-attached storage [NAS]) as well as general data traffic. Fibre Channel traffic can be on its own fabric or part of a converged fabric with FCoE. Offering the best of both LAN and SAN environments, Unified Fabric enables storage network users to take advantage of the economy of scale, robust vendor community, and aggressive performance roadmap of Ethernet while providing the high-performance, lossless characteristics of a Fibre Channel network.
Fibre Channel over Ethernet (FCoE) is one of the enabling technologies of the Unified Fabric design.
Fibre Channel over Ethernet
Fibre Channel over Ethernet (FCoE) represents the latest in standards-based I/O consolidation technologies. FCoE was approved within the FC-BB-5 working group of the INCITS (formerly ANSI) T11 committee. The beauty of FCoE is in its simplicity. As the name implies, FCoE is a mechanism that takes Fibre Channel (FC) frames and encapsulates them into Ethernet. This simplicity allows existing skill sets and tools to be leveraged while reaping the benefits of unified I/O for LAN and SAN traffic.
FCoE provides two protocols to achieve Unified I/O:
· FCoE: The data plane protocol that encapsulates FC frames inside Ethernet frames.
· FCoE Initialization Protocol (FIP): A control plane protocol that manages the login/logout process to the FC fabric.
The FCoE standard also defines several new port types:
· Virtual N_Port (VN_Port): An N_Port that operates over an Ethernet link. N_Ports, also referred to as Node Ports, are the ports on hosts or storage arrays used to connect to the FC fabric.
· Virtual F_Port (VF_Port): An F_port that operates over an Ethernet link. F_Ports are switch or director ports that connect to a node.
· Virtual E_Port (VE_Port): An E_Port that operates over an Ethernet link. E_Ports, or Expansion Ports, are used to connect Fibre Channel switches together; when two E_Ports are connected, the link is called an Inter-Switch Link (ISL).
vFC (Virtual Fibre Channel)
Cisco Nexus devices support Fibre Channel over Ethernet (FCoE), which allows Fibre Channel and Ethernet traffic to be carried on the same physical Ethernet connection between the switch and the servers.
The Fibre Channel portion of FCoE is configured as a virtual Fibre Channel interface. Logical Fibre Channel features (such as interface mode) can be configured on virtual Fibre Channel interfaces.
A virtual Fibre Channel interface must be bound to an interface before it can be used. The binding is to a physical Ethernet interface (when the converged network adapter (CNA) is directly connected to the Cisco Nexus device), a MAC address (when the CNA is remotely connected over a Layer 2 bridge), or an EtherChannel when the CNA connects to the Fibre Channel Forwarder (FCF) over a virtual port channel (vPC).
Cisco guidelines for configuring FCoE VLANs and virtual Fibre Channel (vFC) interfaces:
Each vFC interface must be bound to an FCoE-enabled Ethernet or EtherChannel interface or to the MAC address of a remotely connected adapter.
FCoE is supported on 10-Gigabit Ethernet interfaces. The Ethernet or EtherChannel interface that binds to the vFC interface must be configured as follows:
· The Ethernet or EtherChannel interface must be a trunk port (use the switchport mode trunk command).
· The FCoE VLAN that corresponds to a vFC’s VSAN must be in the allowed VLAN list.
· You must not configure an FCoE VLAN as the native VLAN of the trunk port.
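The guidelines above can be sketched as a minimal NX-OS configuration for a directly attached CNA. This is an illustrative sketch only: the VLAN (100), VSAN (10), and interface numbers are assumptions, not values from the source.

```
! Minimal vFC sketch on a Cisco Nexus 5000 Series switch.
! VLAN 100, VSAN 10, and interface numbers are illustrative assumptions.
feature fcoe
vlan 100
  fcoe vsan 10                        ! map the FCoE VLAN to its VSAN
interface vfc1
  bind interface ethernet 1/1         ! CNA directly connected to this port
  no shutdown
vsan database
  vsan 10 interface vfc1              ! place the vFC in its VSAN
interface ethernet 1/1
  switchport mode trunk               ! bound interface must be a trunk
  switchport trunk allowed vlan 1,100 ! FCoE VLAN allowed, and not native
  no shutdown
```

Note how the sketch follows each guideline in turn: the vFC is bound to an Ethernet interface, that interface is a trunk, the FCoE VLAN is in the allowed list, and the native VLAN remains a different VLAN.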
Fabric extender technology simplifies the management of the many LAN switches in the data center by aggregating them in groups of 10 to 12 under the same management entity. In its current implementation, Cisco Nexus 2000 Series Fabric Extenders can be used to provide connectivity across 10 to 12 racks that are all managed from a single switching configuration point, thus bringing together the benefits of top-of-the-rack and end-of-the-row topologies.
Using Cisco Nexus Switches and vPC to Design Data Centers
In a vPC topology, all links between the aggregation and the access layer would be forwarding and part of a virtual PortChannel.
Gigabit Ethernet connectivity would make use of the fabric extenders concept. Spanning Tree Protocol does not run between the Cisco Nexus 5000 Series Switches and the Cisco Nexus 2000 Series Fabric Extenders. Instead, proprietary technology keeps the topology between the Cisco Nexus 5000 Series and the fabric extenders free of loops.
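Attaching a fabric extender to its parent switch is a matter of associating an uplink with a FEX identifier; the host ports then show up on the parent as regular interfaces. A hedged sketch, where the FEX ID (100) and interface numbers are assumptions:

```
! Illustrative FEX association on a Cisco Nexus 5000 Series parent switch.
feature fex
interface ethernet 1/20
  switchport mode fex-fabric          ! uplink toward the fabric extender
  fex associate 100                   ! assign FEX ID 100 to this uplink
! Host-facing ports on the FEX appear on the parent as ethernet100/1/x
interface ethernet 100/1/1
  switchport access vlan 10           ! configured like any local port
```

Because the FEX behaves as a remote line card of the parent switch, no Spanning Tree Protocol runs on this link.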
Connectivity from blade servers or servers connected through 10 Gigabit Ethernet can take advantage of the Cisco Nexus 5000 Series line-rate 10 Gigabit Ethernet forwarding capabilities and copper twinax support. Blade server connectivity to the Cisco Nexus 5000 Series can vary based on the type of switching device placed inside the blade server, but this architecture is flexible enough to support pass-through or other multiplexing technologies as needed.
Virtual PortChannel (vPC) allows links that are physically connected to two different Cisco switches to appear to a third, downstream device as coming from a single device and as part of a single port channel. The third device can be a switch, a server, or any other networking device that supports IEEE 802.3ad port channels.
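A basic vPC setup involves a peer-keepalive path, a peer link, and a vPC-numbered member port channel toward the downstream device. A minimal sketch, in which the domain ID, IP addresses, and interface numbers are illustrative assumptions; the peer switch carries a mirror-image configuration with the keepalive addresses swapped:

```
! Illustrative vPC sketch on one switch of a Cisco Nexus pair.
feature vpc
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1  ! out-of-band heartbeat
interface port-channel 1
  switchport mode trunk
  vpc peer-link                       ! inter-switch peer link
interface port-channel 20
  switchport mode trunk
  vpc 20                              ! member link toward the downstream device
```

The downstream switch or server sees port-channel 20 as an ordinary IEEE 802.3ad port channel to a single device.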
The Cisco NX-OS Software Virtual PortChannel and Cisco Catalyst® 6500 Virtual Switching System (VSS) 1440 are similar technologies. With regard to Cisco EtherChannel® technology, the term “multichassis EtherChannel (MCEC)” refers to either technology interchangeably.
vPC allows the creation of Layer 2 port channels that span two switches. vPC is implemented on the Cisco Nexus 7000 and 5000 Series platforms (with or without the Nexus 2000 Series Fabric Extender). Data centers based on Cisco Nexus technology can use vPC technology to build redundant loop-free topologies. Data centers that use Cisco Catalyst technology can use VSS technology for the same purpose.
The main benefit of running vPC in the data center is that traffic between clients and servers, or from server to server, can use all the available links, as illustrated in Figure 1:
Figure 1: vPC Client to Server
Conversely, as Figure 2 shows, server-to-client traffic takes advantage of both Cisco Nexus 7000 Series switches, regardless of which one is the HSRP active or HSRP standby device. HSRP is configured identically to non-vPC topologies, but its forwarding behavior has been modified so that both the active and standby devices forward traffic.
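Since HSRP is configured exactly as in a non-vPC design, the configuration on one vPC peer looks like the usual sketch below; the VLAN, group number, and addresses are illustrative assumptions:

```
! Illustrative HSRP configuration on one Cisco Nexus 7000 vPC peer.
! With vPC, both the active and the standby peer forward traffic.
feature hsrp
feature interface-vlan
interface vlan 10
  ip address 10.1.10.2/24             ! the peer uses its own address, e.g. .3
  hsrp 10
    priority 110                      ! peer is configured with a lower priority
    ip 10.1.10.1                      ! virtual gateway address used by servers
```

The vPC modification is entirely in the data plane: the standby peer forwards frames destined to the HSRP virtual MAC instead of punting them across the peer link.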
Figure 2: vPC Server to Client
Adding vPC to the Cisco Nexus 5000 Series in the access layer allows further load distribution from the server to the fabric extenders to the Cisco Nexus 5000 Series.
With vPC, the proper tuning delivers the following traffic distribution:
· Client-to-server with fabric extenders in straight-through mode, and server running port channeling on two ports, resulting in four paths (Figure 3)
· Client-to-server with fabric extenders in active/active mode, resulting in four paths (Figure 4)
· Server-to-client with fabric extenders in straight-through mode and server port channeling, producing four different paths (Figure 5)
· Server-to-client with fabric extenders in active/active mode, producing four different paths (Figure 6)
· Server-to-server with fabric extenders in active/active mode, resulting in two paths per direction
Even though a server connected at Gigabit Ethernet speeds on two ports cannot generate more than 2 Gbps of traffic, load distribution across a greater number of paths means a more even spread of data center traffic over all the available links. The result is a data center that performs more efficiently.
Figure 3: Client-to-Server Traffic Fex Straight-Through
Figure 4: Client-to-Server Traffic Flows with Fex Active/Active
Figure 5: Server-to-Client Traffic Flows with Fex Straight Through
Figure 6: Server-to-Client Traffic Flows with Fex Active/Active
vPC and VSS
Mixed data centers consisting of both Cisco Catalyst and Cisco Nexus products can interoperate by using both vPC (virtual Port Channels) and VSS (Virtual Switching System) technologies.
Both vPC and VSS allow the creation of PortChannels that span two switches. Currently, vPC is implemented on the Cisco Nexus 7000 and 5000 Series platforms. VSS is implemented on the Cisco Catalyst 6500 Virtual Switching System 1440 platform.
The VSS 1440 is a feature on the Cisco Catalyst 6500 Series Switches that effectively allows the clustering of two physical chassis into a single logically managed entity. The Virtual Switching System acts as a single entity from the network control plane and management perspective. As such, VSS appears as a single logical switch or router to the neighboring devices.
From a data plane and traffic forwarding perspective, both switches in the Virtual Switching System actively forward traffic. The VSS delivers more than 800 Mpps of IPv4 unicast lookup performance. The switch fabrics of both switches are also active, allowing the VSS to achieve an aggregate switch fabric capacity of 1.44 Tbps. The Virtual Switch Link (VSL) that connects the physical switches is a standard 10 Gigabit Ethernet EtherChannel® link and carries control traffic between the switches (less than 5 percent of a 10 Gigabit Ethernet link) in addition to user traffic. A VSL can be configured with up to eight 10 Gigabit Ethernet connections, but typically two links are sufficient for redundancy. Each switch actively forwards traffic and does not depend on the bandwidth of the VSL to achieve the full 1.44-Tbps throughput. In addition, an administrator can provision additional links between the two switches if required.
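Building the VSL on a Catalyst 6500 pair can be sketched as follows. This is a hedged outline only: the virtual domain number, switch number, and port range are assumptions, and the second chassis carries the equivalent configuration with `switch 2`.

```
! Illustrative VSL configuration on one Catalyst 6500 VSS member (Cisco IOS).
switch virtual domain 100
 switch 1                             ! this chassis's identity in the domain
interface port-channel 1
 switch virtual link 1                ! designate this EtherChannel as the VSL
 no shutdown
interface range tengigabitethernet 1/1 - 2
 channel-group 1 mode on              ! two 10GE links for VSL redundancy
! Finally, convert the chassis to virtual switch mode (triggers a reload):
! switch convert mode virtual
```

After both chassis are converted, they boot as a single logical switch with one configuration and one management address.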
VSS has a single gateway IP address and offers full first-hop redundancy. With VSS, configuration is not only greatly simplified; VSS also eliminates the need for first-hop gateway redundancy protocols such as HSRP and VRRP, along with their associated overhead.
VSS 1440 (Figure 7) reduces network complexity and management overhead by 50 percent:
Figure 7. VSS 1440 Loop-Free Physical Topology Compared to Traditional Network
VSS provides deterministic sub-200-ms stateful convergence:
1. In a VSS, a virtual switch member failure results in an interchassis stateful failover with no disruption to applications that rely on network state information. VSS eliminates Layer 2/Layer 3 protocol reconvergence if a virtual switch member fails, resulting in deterministic, sub-200-ms stateful virtual switch recovery. Unlike VSS, the traditional network design does not offer deterministic convergence times as the convergence depends on the following parameters:
· Gateway protocol convergence (HSRP/VRRP state changes)
· Routing protocol reconvergence (Open Shortest Path First/Enhanced Interior Gateway Routing Protocol [OSPF/EIGRP] routing process)
· Spanning Tree Protocol topology convergence (root changes to the standby switch)
· The number of VLANs or subnets, because convergence of multiple protocols is unpredictable and in the range of a few seconds
2. VSS utilizes EtherChannel (802.3ad, PAgP, or manual ON mode) for deterministic, sub-second Layer 2 link recovery, unlike convergence based on Spanning Tree Protocol in a traditional network design. Spanning Tree Protocol requires a blocking port to transition to forwarding if the active link fails, and depending on the number of VLANs, the time for the blocked link to start forwarding can vary. With VSS, all links forward at all times, and the loss of one uplink simply represents the loss of one link in the EtherChannel. Traffic on the still-active links continues to be forwarded without disruption, while traffic that was sent on the failed link is redistributed over the remaining active link(s). (Cisco allows up to eight links in an EtherChannel bundle.)
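From the downstream switch's perspective, the multichassis EtherChannel toward the VSS pair is configured like any other port channel; the channel-group mode selects the protocol mentioned above. Interface and group numbers in this sketch are illustrative assumptions:

```
! Illustrative multichassis EtherChannel on a downstream Cisco IOS switch,
! with one member link to each VSS chassis.
interface range gigabitethernet 1/0/1 - 2
 channel-group 5 mode active          ! LACP (802.3ad); 'desirable' selects
                                      ! PAgP, 'on' selects manual mode
interface port-channel 5
 switchport mode trunk
```

Because the VSS presents itself as a single logical switch, the downstream device bundles both links without any awareness that they terminate on two physical chassis.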
3. VSS maximizes the available bandwidth in an already installed network infrastructure:
· VSS activates all available Layer 2 bandwidth across redundant Cisco Catalyst 6500 Series Switches. VSS also maximizes the link utilization on these connections with even and granular load balancing based on Cisco EtherChannel or standards-based 802.3ad protocol. In traditional networks, Spanning Tree Protocol blocks ports to prevent loops. The blocked ports are not utilized. An advanced design with Spanning Tree Protocol involves VLAN-based load balancing, which still does not evenly load balance the links in a typical campus network.
· VSS allows standards-based link aggregation (802.3ad) for server network interface card (NIC) teaming across redundant data center switches, maximizing server bandwidth throughput.
The main difference between VSS and vPC is that vPC doesn’t unify the control plane, while VSS does: