Thursday, 16 October 2014

Data Center Design: Example Overview Part-II:

Virtual Device Context (VDC) Design

Continuing from Part-I, in the main DC, the Core, DC Aggregation, and DCI modules would be deployed on the same physical devices – the Nexus 7010 Core switches. The separation between these modules would be achieved using the Virtual Device Context (VDC) feature of the Nexus 7000 platform. The main DC core switches were defined to have the following VDCs:

·         Core VDC

·         DC-AGG VDC

·         OTV VDC

In the main office building, the Core, DC Aggregation, and DCI modules would be deployed on the same physical devices – the Nexus 7009 Core switches. The main office building core switches would therefore have the following VDCs:

·         Core VDC

·         DC-AGG VDC

·         OTV VDC

 

VDC Overview

Starting with the Supervisor 2 module and Cisco NX-OS 6.1, the VDC feature allows up to four (4) separate virtual switches, plus one (1) Admin VDC, to be configured within a single Nexus 7000. The Admin VDC is optional and creating it is not required for device operations.

Architecturally, the VDCs run on top of a single NX-OS kernel and OS infrastructure. Each VDC represents a separate instance of the control plane protocols, as illustrated below:

 

VDC Independence

A VDC runs as a separate logical entity within the physical device, maintains its own unique set of running software processes, has its own configuration, and can be managed by a separate administrator.

 

VDCs virtualize the control plane as well, which includes all those software functions that are processed by the CPU on the active supervisor module. The control plane supports the software processes for the services on the physical device, such as the routing information base (RIB) and the routing protocols. When a VDC is created, the Cisco NX-OS software takes several of the control plane processes and replicates them for the VDC. This replication of processes allows VDC administrators to use virtual routing and forwarding instance (VRF) names and VLAN IDs independent of those used in other VDCs. Each VDC administrator essentially interacts with a separate set of processes, VRFs, and VLANs.

 

All the Layer 2 and Layer 3 protocol services run within a VDC. Each protocol service started within a VDC runs independently of the protocol services in other VDCs. The infrastructure layer protects the protocol services within each VDC so that a fault or other problem in a service in one VDC does not impact other VDCs. The Cisco NX-OS software creates these virtualized services only when a VDC is created.

 

Each VDC has its own instance of each service. These virtualized services are unaware of other VDCs and only work on resources assigned to that VDC. Only a user with the network-admin role can control the resources available to these virtualized services.

 

Although CPU resources (on the Supervisor module) are not truly independent between the VDCs, the pre-emptive multi-tasking nature of the OS does ensure that no single process can hog the CPU, including processes in other VDCs. Even if CPU usage is driven up, all processes still get fair access to CPU clock cycles.

 

Memory is not controlled on a per-VDC basis; instead, it is controlled at a per-process level. The services that run on the platform have limits on their maximum accessible memory, enforced by the kernel. Any single process can therefore only access the defined amount of memory for an instance of that process. This prevents an errant process or a memory leak in a process from consuming a significant amount of overall system memory.

 

There are additional resources that are applied system wide (allocated to all VDCs as a whole), offering a lesser degree of operational independence between the VDCs. These global resources are discussed in the following section.

 

Default VDC

The physical device always has one VDC, the default VDC (VDC 1). When a user first logs into a new Cisco NX-OS device, the session lands in the default VDC. A user must be in the default VDC to create, change attributes for, or delete a non-default VDC. As mentioned earlier, the Cisco NX-OS software can support up to four VDCs (plus one Admin VDC), including the default VDC, which means a user can create up to three additional VDCs. A user with the network-admin role privileges can manage the physical device and all VDCs from the default VDC.
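As an illustration, the configured VDCs can be listed from the default VDC with the show vdc command. The output below is a representative sketch for the design in this document, not captured from the actual devices:

!
Nexus01DC# show vdc

vdc_id  vdc_name        state    mac
------  --------        -----    -----------
1       Nexus01DC       active   <chassis mac>
2       DC-AGG          active   <chassis mac>
3       OTV             active   <chassis mac>
!

Run from a non-default VDC, the same command only shows that VDC's own entry.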

VDC login Process

 

VDC Resources

With respect to allocation to VDCs, Nexus switch resources are divided into three main categories: Global, Dedicated, and Shared. VDC 1 is the default VDC, and controls the creation, deletion, and resource allocation of all other VDCs.

 

Global Resources are assigned to and controlled by the default VDC:

• Boot image configuration

• Software feature licensing

• Ethanalyzer session

• Control Plane Policing (CoPP) configuration

• Quality of Service (QoS) queuing configuration

• Allocation of resources to other VDCs

• Console port

• Connectivity Management Processor (CMP)

• Network Time Protocol (NTP) Server configuration

• Port channel hashing algorithm configuration

 

Every VDC has its own dedicated resources, which are assigned solely to that VDC:

• Physical interfaces

• Layer 3 and layer 2 protocol stacks

• Per-VDC management configuration

 

Shared resources are available to all VDCs:

• Out-of-band management interface (interface mgmt0). Each VDC has its own IP address on this interface.

 

NX-OS allows allocation of the following resources to be controlled by the configuration of minimum and maximum resource guarantees:

• VLAN

• SPAN sessions

• VRFs

• Port channels

• Route memory
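If these limits need to be adjusted for a single VDC without defining a full resource template, they can also be set directly under the VDC configuration from the default VDC. The values below are purely illustrative, not the limits used in this design:

!
configure terminal
vdc DC-AGG
  limit-resource vlan minimum 16 maximum 1024
  limit-resource vrf minimum 16 maximum 64
  limit-resource port-channel minimum 16 maximum 128
!

The minimum value is guaranteed to the VDC; the maximum caps what it may consume from the shared pool.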

 

 

Route Memory

While there is no exact mapping between the number of routes that can be stored and each megabyte of memory allocated, 16 megabytes of route memory is enough to store approximately 11,000 routes with 16 next hops each. The default memory allocation for IPv4 routes is 58 megabytes.

 

The following command is helpful to understand the memory consumption for unicast routing tables. Using the show routing memory estimate command on the Nexus 7000, it can be seen that a VDC using the default resource template will support approximately 12,000 routes (assuming 8 next-hops for each route):

 

!

Switch# sh routing memory estimate route 12000 next-hops 8

 

Shared memory estimates:

Current max 8 MB; 6526 routes with 16 nhs

in-use 1 MB; 97 routes with 1 nhs (average)

Configured max 8 MB; 6526 routes with 16 nhs

Estimate 8 MB; 12000 routes

!

It should also be noted that the route memory resource allocation does not permit different values for the maximum and minimum memory limits.
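Consistent with that restriction, any change to the IPv4 route memory limit must specify identical minimum and maximum values, and takes effect only after the next VDC reset. A sketch (the 32 MB value is illustrative only):

!
configure terminal
vdc DC-AGG
  limit-resource u4route-mem minimum 32 maximum 32
!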

To allow a VDC to communicate with other devices in the network, the administrator must explicitly allocate interfaces to a VDC, except for the default VDC that will have control over all interfaces that are not otherwise allocated.

 

At the time of authoring this, for the relevant NX-OS software release, the F2e modules needed to be in their own dedicated VDC with no other module types being part of the same VDC. This restriction was removed in the next NX-OS release (6.2).

 

Allocation of interfaces should be done based on the type of hardware in use. For example, an 'M1-Series' 32-port 10G line card does not allow ports belonging to the same port group to be placed in different VDCs. A dedicated line-card-based approach for VDCs offers multiple benefits, with the only trade-off being the consumption of extra line cards:

• Efficient usage of line card hardware resources such as the MAC table and FIB TCAM. The overall chassis-level MAC table/FIB TCAM limit can be scaled to three times the 128K limit.

• Brings the design closer to a secure architecture by limiting dependencies between the VDC environments.

Note: There is no port group restriction on M2 Series modules. Any port in M2 Series modules can be placed in any VDC.

 

Communication Between VDCs

The Cisco NX-OS software does not support direct communication between VDCs on a single physical device. There must be a physical connection from a port allocated to one VDC to a port allocated to the other VDC to allow the VDCs to communicate. Each VDC has its own VRFs for communicating with other VDCs.

 

Customer Data Center building Network VDC Design Summary

The following points summarize the VDC design for the main Data Center building Network:

·         Core Nexus 7010 switches will be configured with three VDCs:

o (Default) VDC for Core module (Core VDC)

o VDC for DC Aggregation module (DC-AGG VDC)

o VDC for DCI module (OTV VDC)

·         Each VDC will be allocated dedicated physical interfaces:

o The M1L linecards will be allocated to the Core VDC

o The M2 linecard will have most of its ports in the Core VDC

o The F2e line cards (2 on each switch) will be allocated to the DC-AGG VDC

o Three ports from the M2 line cards will be allocated to the OTV VDC

·         Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3 links, one to the local DC-AGG VDC and the second to the DC-AGG VDC in the second Core switch.

·         Core VDC in each Nexus switch will have one point-to-point Tengigabit Ethernet Layer-3 link to the local OTV VDC.

·         Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3 links, one to each Distribution switch. As the Distribution switches will be deployed using VSS, the Core VDC will see the same OSPF neighbor across both links.

·         The links between the Core VDC and the WAN Edge routers will be configured as point-to-point Layer-3 links.

·         The inter DC links will be terminated on the Core VDC, one link on each switch. These will be configured as point-to-point Layer-3 links with a /29 subnet.

·         The DC-AGG VDC in each Core switch will be dual homed to the two Core VDCs via point-to-point Layer-3 links.

·         The DC-AGG VDC in each Core switch will have separate vPC interconnects with both the local OTV VDC and the OTV VDC on the second switch. This will be an 802.1q trunk carrying VLANs requiring Layer-2 extension.

·         Core firewalls will also be connected to the DC-AGG VDCs via vPC. This will be an 802.1q trunk carrying all DC VLANs.

·         F5 load balancers will be connected to the DC-AGG VDCs via vPC. This port-channel will be made part of the respective VLAN as an access switchport.

·         Two ports will be interconnecting the OTV VDC with the Aggregation layer of the Data Center as follows:

o One optical Tengigabit Ethernet link will be connected to the local DC-AGG VDC

o A second optical Tengigabit Ethernet link will be connected to the DC-AGG VDC in the other Core switch

·         The VDCs will use the default VDC resource allocation template.

 

In order to provide an acceptable level of resiliency in the DC-AGG VDC it was decided that one F2e module will be added to each Core switch. This means a total of two F2e modules will be deployed and these will be obtained by removing one F2e module from each of the Aggregation switches in Row 7 of the Data Center.

 

Customer main building Network VDC Design Summary:

The following points summarize the proposed VDC design for main office building Network:

·         Core Nexus 7009 switches will be configured with three VDCs:

o (Default) VDC for Core module (Core VDC)

o VDC for DC Aggregation module (DC-AGG VDC)

o VDC for DCI module (OTV VDC)

·         Each VDC will be allocated dedicated physical interfaces:

o The M1L linecards will be allocated to the Core VDC

o The M2 linecard will have most of its ports in the Core VDC

o The F2e line cards (1 on each switch) will be allocated to the DC-AGG VDC

o Three ports from the M2 line cards will be allocated to the OTV VDC

·         Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3  links, one to the local DC-AGG VDC and the second to the DC-AGG VDC in the second Core switch.

·         Core VDC in each Nexus switch will have one point-to-point Tengigabit Ethernet Layer-3 link to the local OTV VDC.

·          Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3 links, one to each Distribution switch.

·         The inter DC links will be terminated on the Core VDC, one link on each switch. These will be configured as point-to-point Layer-3 links with a /29 subnet.

·         The DC-AGG VDC in each Core switch will be dual homed to the two Core VDCs via point-to-point Layer-3 links.

·         The DC-AGG VDC in each Core switch will have separate vPC interconnects with both the local OTV VDC and the OTV VDC on the second switch. This will be an 802.1q trunk carrying VLANs requiring Layer-2 extension.

·         Two ports will be interconnecting the OTV VDC with the Aggregation layer of the Data Center as follows:

o One optical Tengigabit Ethernet link will be connected to the local DC-AGG VDC

o A second optical Tengigabit Ethernet link will be connected to the DC-AGG VDC in the other Core switch

·         The VDCs will use the default VDC resource allocation template.

 

Using The Default VDC

Generally, it is recommended to dedicate the default VDC as an 'Admin' VDC and not run any data-path traffic through it. This is mainly because this VDC is used to create new VDCs and allocate resources to them, as well as to manage the configuration of those resources that can only be managed from the default VDC (e.g. CoPP configuration). Access to the default VDC means the ability to create or modify existing VDCs.

 

If an environment uses the default VDC for data traffic, management should ensure that operational staff accessing the Core VDC for normal operations are allocated the 'vdc-admin' role and not the 'network-admin' role.
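For reference, on Supervisor 2 with NX-OS 6.1 an Admin VDC can be enabled from the default VDC along the lines shown below. This is a sketch only; the migration converts the default VDC and is disruptive, so it should be verified against the release notes for the deployed NX-OS version before use:

!
configure terminal
system admin-vdc migrate <New-VDC-Name>
!

On a freshly booted system, the non-migrate form (system admin-vdc) can instead turn the default VDC into the Admin VDC before any other configuration is applied.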

 

VDC Naming Convention:

• (Default) VDC for Core network (Core VDC)

• VDC for DC Aggregation network (DC-AGG VDC)

• VDC for OTV (OTV VDC)

 

The name of the Core VDC is the hostname of the switch; consequently, there is no explicit configuration required for naming the Core VDC.

 

The DC Aggregation VDC and the OTV VDC would be created using the following names: DC-AGG and OTV respectively.

 

By default, the hostname of a non-default VDC is the hostname of the switch with the VDC name tagged onto the end. This means that, by default, the hostname displayed for the OTV VDC will be a combination of the Core VDC hostname and 'OTV' tagged onto the end. This behavior can be modified to have only "DC-AGG" or "OTV" displayed when logging into the VDC, using the following configuration template:

 

!

configure terminal

no vdc combined-hostname

!

 

VDC Interface Allocation

By default, all interfaces on a Nexus 7000 are part of the default VDC. For this specific project, the default VDC was planned to be used for the Core module, so there was no requirement to explicitly allocate interfaces to the Core module VDC.

 

The M1L and M2 line cards would have most of their ports allocated to the Core VDC. Three ports from the M2 line cards would be allocated to the OTV VDC. The F2e line cards (2 on each Core switch in main DC, 1 on each Core switch in main office building) would be allocated to the DC-AGG VDC.

 

For the DC-AGG and OTV VDCs, since they will be new VDCs (VDC 2 and 3 respectively), interfaces would need to be allocated explicitly. The Core Nexus 7010/7009 would be installed with M1, M2, and F2 linecards. The F2 linecards need to be installed in a VDC of their own. Therefore, if all linecards are installed and the device is booted for the first time, the N7K boots, by default, with the M linecards in the default VDC and the F2 linecards disabled.
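Once interfaces have been allocated, the assignment can be verified from the default VDC. An illustrative verification step:

!
Nexus01DC# show vdc membership
! Lists each VDC along with the interfaces currently allocated to it,
! including any interfaces still unallocated
!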

 

Shared Management Interface

The Nexus 7000 is equipped with a dedicated management interface port on each Supervisor engine; only the port on the active Supervisor is available. By default, this management interface resides within a special 'management' VRF, which is completely separate from the default VRF and any other VRFs that may be created. It is not possible to move the management Ethernet port to any other VRF, nor to assign other system ports to the management VRF. Because of the dedicated management VRF, the management Ethernet port cannot be used for data traffic, nor can it be trunked.

One feature of the Supervisor management interface is that it exists within all VDCs (rather than a single VDC only). The management interface also carries a unique IP address in every VDC. In this way, a distinct management IP address can be provided for administration of a single VDC.

 

In this case, the management interface on the Nexus 7010/7009 Core switches was to be used for their management. Each VDC would have its own IP Address for the management interface.
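As a sketch of the per-VDC management addressing (the IP addresses and gateway below are hypothetical placeholders, not the customer's actual addressing):

!
Nexus01DC# switchto vdc DC-AGG
Nexus01DC-DC-AGG# configure terminal
Nexus01DC-DC-AGG(config)# interface mgmt 0
Nexus01DC-DC-AGG(config-if)# ip address 192.0.2.12/24
Nexus01DC-DC-AGG(config-if)# exit
Nexus01DC-DC-AGG(config)# vrf context management
Nexus01DC-DC-AGG(config-vrf)# ip route 0.0.0.0/0 192.0.2.1
!

The same mgmt0 port carries a distinct IP address in each VDC, so this is repeated per VDC.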

 

VDC License Requirements: Creating non-default VDCs requires the Advanced Services Package license.
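Before creating VDCs, the license state can be checked and, if needed, the license installed from the default VDC. The license file name below is a placeholder:

!
Nexus01DC# show license usage
Nexus01DC# install license bootflash:<license-file>.lic
!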

 

VDC Configuration

VDC creation has the following prerequisites:

·         Log in to the default VDC with a username that has the network-admin user role.

·         Make sure the appropriate license is installed.

·         Assign a name for the VDC.

·         Allocate resources available on the physical device to the VDCs.

·         Configure an IPv4 or IPv6 address to use for connectivity to the VDC.

 

Creating a VDC Resource Template

This step was not required for the VDC configuration in the customer's networks as the default resource template would be used. However, if required in the future, the following steps were recommended to be used to define a new resource template:

 

!

vdc resource template <Template-Name>

limit-resource vlan minimum 16 maximum 4094

limit-resource monitor-session minimum 0 maximum 2

limit-resource vrf minimum 16 maximum 500

limit-resource port-channel minimum 16 maximum 256

limit-resource u4route-mem minimum 16 maximum 128

limit-resource u6route-mem minimum 4 maximum 4

limit-resource m4route-mem minimum 8 maximum 8

limit-resource m6route-mem minimum 2 maximum 2

!

Creating a VDC

The example below shows how to create the DC-AGG and OTV VDCs on the Nexus 7010 switches. The same template can be used to create VDCs on the 7009 switches:

!

Nexus01DC# configure terminal

Nexus01DC(config)# vdc DC-AGG

Nexus01DC(config-vdc)# exit

Nexus01DC(config)# vdc OTV

!

 

Allocating Interfaces to a VDC

The following example shows how to allocate resources to a VDC:

 

!

Nexus01DC# configure terminal

Nexus01DC(config)# vdc <VDC-Name>

Nexus01DC(config-vdc)# allocate interface ethernet <slot/port>

! Allocate interfaces as needed

!

 

Applying Resource Template to a VDC

The following shows how to apply a resource template to a VDC:

!

Nexus01DC(config)# vdc <VDC-Name>

Nexus01DC(config-vdc)# template <Template-Name>

!

Initializing a New VDC

A newly created VDC is much like a new physical device. To access a VDC, a user must first initialize it. The initialization process includes setting the VDC admin user account password and optionally running the setup script. The setup script helps perform basic configuration tasks such as creating more user accounts and configuring the management interface.

 

The VDC admin user account in the non-default VDC is separate from the network admin user account in the default VDC. The VDC admin user account has its own password and user role.

!

Nexus01DC# switchto vdc DC-AGG

!

 

VDC resource limits can be changed by applying a new VDC resource template anytime after the VDC creation and initialization. Changes to the limits take effect immediately except for the IPv4 and IPv6 route memory limits, which take effect after the next VDC reset, physical device reload, or physical device stateful switchover.

 

VDC configuration should be done prior to applying any other configuration to the Nexus 7000 switches.

 

Saving VDC Configuration

From the VDC, a user with the vdc-admin or network-admin role can save the VDC configuration to the startup configuration. A user can also save the configuration of all VDCs to their startup configurations from the default VDC:

!

switchto vdc OTV

copy running-config startup-config

switchback

copy running-config startup-config vdc-all

!

 

To be continued…Next: Fabric Path in the DC and Cloud Infrastructure considerations

 

Wednesday, 8 October 2014

Data Center Design for Cloud: Example Overview Part-I

More often than not, I am asked for an "example" of a Data Center design that I have worked on. This write-up covers a low-level design of the infrastructure components for a fictional but typical customer, and is based on a finalized, typical Basis of Design for the model data center.

 

As part of the Data Center Network infrastructure, the following devices will be covered:

·         Two (2) Nexus 7010 switches as Core switches

·         Fourteen (14) Nexus 7010 as Aggregation switches (see note below)

·         Two (2) Nexus 5596 switches as Aggregation switches (see note below)

·         Two (2) Catalyst 6509-E switches as Distribution switches

·         Ten (10) Catalyst 3750X switches as Access switches

·         Two (2) Cisco 3945E routers as Perimeter routers

·         Two (2) Cisco 3945E routers as WAN routers

·         Sixteen (16) Catalyst 3750X switches as Server management switches

·         Two (2) Catalyst 3750X switches as WAN edge switches

·         Two (2) Catalyst 3560X switches as Perimeter switches

·         Two (2) Catalyst 3750X switches as DMZ switches

Assumptions and Caveats

·         OTV adds 42 bytes of encapsulation overhead to each frame. This required the links between the two redundant data centers (DC1 and DC2) to be configured for an MTU of at least 1560 bytes. It was assumed that an MTU of 1600 bytes would be available on the links between these two sites.

·         It was assumed that for each optical interconnect, the distance will be well within the maximum distance specified for the optical transceiver ordered to provision the said optical link.

·         It is assumed that all electrical (UTP) interconnects would be deployed within the standard 1000BaseT Ethernet distance specifications.

·         Cabling density was adequate to support all required optical and electrical interconnects
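The OTV MTU assumption above could be realized with an interface configuration along the following lines. This is a sketch; the interface identifier is a placeholder:

!
configure terminal
interface ethernet <slot/port>
  mtu 1600
!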

 

Network Background

The customer currently has one Data Center in their main building. Additionally, they have two other branches located in two branch offices. As part of this project, the customer was building a new Data Center at the DC1 site. This Data Center would be connected with the existing main building as well as with the two standalone remote branches.

 

The following figure shows the high-level design that was agreed during the pre-sales phase of the project.

 

The main building network consists of multiple network modules, each with its dedicated function. The following points summarize the modules at a high level:

·         Campus Module – This part of the network provides the infrastructure for main building end user connectivity.

·         Core Module – The Core module interconnects the various main building Data Center modules together for seamless data flow.

·         DC Aggregation Module – This module provides the infrastructure for server connectivity.

·         Extranet Module – External partners and other B2B services are securely terminated in this module.

·         WAN Edge Module – The WAN Edge module extends the reachability of the main building to remote branches.

·         Perimeter Module – This module extends Internet service to main building and also allows the customer to host Internet facing services from this Data Center.

Network Objectives

Based on the understanding of the requirements, the design should ensure best practices are followed, where applicable, in order to achieve the following:

·         High Availability

·         Scalability

·         Flexibility of extending Layer-2 domains between DC1 and DC2

·         Infrastructure security

Main building Layout

The main building was a new four-storey building that would house both a new Data Center and internal users. The ground floor and the first three floors would house users, while the fourth floor would be dedicated to the Data Center facility.

 

Each floor will have one room where Access switches will be installed. In addition to the user floors, there will be two Customer Service Areas that are separate from the main building but in close physical proximity.

Data Center Floor

The fourth floor of the main building would be dedicated for use as a Data Center facility. This floor would have the following rooms:

·         Two Main Distribution Areas/Rooms (MDAs). Each room will have eight (8) racks installed.

·         Two ISP Rooms with each room equipped with one (1) rack.

·         Data Center Hall with a total of eight (8) rows of racks with each row deployed with fourteen (14) racks.

·         Staging room outside the main Data Center Hall. The Staging room will have a total of six (6) racks.

·         A DC Control Room with one (1) rack.

 

Hardware Distribution in Customer Service Areas

 

Each Customer Service Area will have one Cisco Catalyst 3750X switch (WS-C3750X-48PF-S) installed in it. Each switch will be equipped with two 1100-watt power supplies and one dual-port Tengigabit Ethernet service module.

 

  Ground Floor

There will be two (2) Cisco Catalyst 3750X switches (WS-C3750X-48PF-S) installed in the IDF in Ground Floor. Each switch will be equipped with two 1100-watt power supplies and one dual-port Tengigabit Ethernet service module.

 

  First Floor

There will be two (2) Cisco Catalyst 3750X switches (WS-C3750X-48PF-S) installed in the IDF in First Floor. Each switch will be equipped with two 1100-watt power supplies and one dual-port Tengigabit Ethernet service module.

 

  Second Floor

There will be two (2) Cisco Catalyst 3750X switches (WS-C3750X-48PF-S) installed in the IDF in Second Floor. Each switch will be equipped with two 1100-watt power supplies and one dual-port Tengigabit Ethernet service module.

 

  Third Floor

There will be two (2) Cisco Catalyst 3750X switches (WS-C3750X-48PF-S) installed in the IDF in Third Floor. Each switch will be equipped with two 1100-watt power supplies and one dual-port Tengigabit Ethernet service module.

 

  Fourth Floor

 

       Main Distribution Area 1 (MDA-1)

The following hardware will be installed in MDA-1:

• One Nexus 7000 Core switch – (N7K-C7010)

• One Catalyst 3750X DMZ switch - (WS-C3750X-48T-S)

• One Catalyst 6500 Distribution switch – (WS-C6509-E)

• One Cisco 3945E WAN edge router – (CISCO3945E-SEC/K9)

• One Catalyst 3750X WAN edge switch – (WS-C3750X-48T-E)

• One Cisco ASA 5585 Perimeter/DMZ firewall – (ASA5585-S10P10XK9)

• One Cisco ASA 5585 WAN Edge firewall – (ASA5585-S20X-K9)

• One Cisco ASA 5585 Core firewall – (ASA5585-S60P60-K9)

• One Cisco ASA 5585 Distribution firewall – (ASA5585-S20X-K9)

 

       Main Distribution Area 2 (MDA-2)

The following hardware will be installed in MDA-2:

• One Nexus 7000 Core switch – (N7K-C7010)

• One Catalyst 3750X DMZ switch - (WS-C3750X-48T-S)

• One Catalyst 6500 Distribution switch – (WS-C6509-E)

• One Cisco 3945E WAN edge router – (CISCO3945E-SEC/K9)

• One Catalyst 3750X WAN edge switch – (WS-C3750X-48T-E)

• One Cisco ASA 5585 Perimeter/DMZ firewall – (ASA5585-S10P10XK9)

• One Cisco ASA 5585 WAN Edge firewall – (ASA5585-S20X-K9)

• One Cisco ASA 5585 Core firewall – (ASA5585-S60P60-K9)

 

ISP Room 1

Following equipment will be installed in ISP Room 1:

• One Cisco 3945E Perimeter router – (CISCO3945E-SEC/K9)

• One Cisco 3560X Perimeter switch – (WS-C3560X-24T-S)

 

ISP Room 2

Following equipment will be installed in ISP Room 2:

• One Cisco 3945E Perimeter router – (CISCO3945E-SEC/K9)

• One Cisco 3560X Perimeter switch – (WS-C3560X-24T-S)

 

Data Center Halls

Data Center Halls will have the Nexus 7010 and Nexus 5596 access switches installed as explained below:

·         The first seven rows will each have two Nexus 7010 (N7K-C7010) switches installed. There will be one switch installed in Rack 1 and another in Rack 9 of each row.

·         The eighth row will have two Nexus 5596 (N5K-C5596UP-FA) switches. One switch will be installed in Rack 1 while the other will be installed in Rack 9.

·         Each row will have two Catalyst 3750X (WS-C3750X-48T-S) server management switches installed, one in Rack 1 and the other one in Rack 9.

Network Elements

A general hardware description of the equipment procured by CBO is provided below. For convenience, URLs pointing to the respective product data sheets have been provided at the end of each section.

 

Cisco Nexus 7000 9-Slot Chassis Switches

Two (2) Cisco Nexus 7000 9-Slot Chassis Switches will be deployed in the existing main building. Each chassis will be populated with the following modules:

·         2 x N7K-AC-6.0KW

·         2 x N7K-SUP2 (slots 1-2)

·         1 x N7K-M224XP-23L (slot 3)

·         5 x N7K-7009-FAB-2

·         1 x N7K-M148GT-11L (slot 4)

·         1 x N7K-M148GS-11L (slot 5)

·         1 x N7K-F248XP-25E (slot 9)

 

Figure: Nexus 7009 Core Switch – Proposed Slot Allocation

 

Table 1 Nexus 7009 Core Switch – Interface Numbering

 

Nexus 7000 9-Slot Chassis

The following figure provides front and rear views of the Cisco Nexus 7000 9-slot chassis; its main features are listed below:

 

Figure:  Nexus 7000 9-Slot Chassis – Front View

 

·         Side-to-side airflow increases the system density in a 14RU footprint, optimizing the use of rack space. The optimized density provides the capability to stack up to three 9-slot chassis in a 42RU rack.

·         Independent variable-speed system and fabric fans provide efficient cooling capacity to the entire system. Fan-tray redundancy features help ensure reliability of the system and support for hot swapping of fan trays.

·         I/O modules, supervisor modules, and fabric modules are accessible from the front. Power supplies and fan trays are accessible from the back.

·         Integrated cable management system designed to support the cabling requirements of a fully configured system at either or both sides of the switch, allowing outstanding flexibility. All system components can easily be removed with the cabling in place, providing ease of maintenance tasks with little disruption.

·         A series of LEDs at the top of the chassis provide a clear summary of the status of the major system components, alerting operators to the need to conduct further investigation. These LEDs report the power supply, fan, fabric, supervisor, and I/O module status.

·         A purpose-built optional front-module door provides protection from accidental interference with both the cabling and modules installed in the system. The transparent front door allows easy observation of cabling and module indicators and status lights without any need to open the doors, reducing the likelihood of faults caused by human interference. The door supports a dual-opening capability for flexible operation and cable installation while fitted. The door can easily be completely removed for both initial cabling and day-to-day management of the system.

 

Nexus 7000 6.0-kW AC Power Supply Module – N7K-AC-6.0KW

The 6.0-kW AC power supply module for the Cisco Nexus 7000 Series is a dual 20A AC input power supply. When both inputs are at high line nominal voltage (220 VAC), the power output is 6000W. Connecting to low line nominal voltage (110 VAC) or using just one input will produce lower output power levels. Table 2 shows the available power output for the input options.

Figure Cisco Nexus 7000 6.0-kW AC Power Supply Module

 

The Cisco Nexus 7000 Series AC power supply module delivers fault tolerance, high efficiency, load sharing, and hot-swappable features. Each Cisco Nexus 7000 Series chassis can accommodate multiple power supplies, providing both chassis-level and facility-level power fault tolerance. Designed to address high-availability requirements, the power supplies incorporate internal component-level monitoring, temperature sensors, and intelligent remote-management capabilities.

The power supply modules are fully hot swappable, helping ensure no system interruption during installation or upgrades, and they are fitted at the back of the Cisco Nexus 7000 Series Switch chassis, allowing installation and removal without disturbing the network cabling on the front. Cisco Nexus 7000 Series systems can operate in four user-configurable power-redundancy modes, summarized below:

 

Power Redundancy Modes

·         Combined (no redundancy) – the total available power is the sum of all installed supplies

·         Power-supply redundancy (ps-redundant) – capacity is reserved so the system survives the loss of one power supply (N+1)

·         Input-source redundancy (insrc-redundant) – the system survives the loss of one input grid

·         Full redundancy (redundant) – the system survives either a power-supply or an input-source failure

It was recommended that the customer configure the power supply redundancy mode as “redundant”. This could be achieved using the following command: power redundancy-mode redundant
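As a sketch (hostnames and prompts are illustrative), the recommended mode could be applied and then verified from the NX-OS CLI:

```
N7K-Core# configure terminal
N7K-Core(config)# power redundancy-mode redundant
N7K-Core(config)# end
N7K-Core# show environment power
```

The show environment power output should report the configured redundancy mode along with per-supply capacity and usage, confirming the change took effect.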

Power Calculation of the Nexus 7000 9-Slot Chassis

The power calculator on the CCO can be used to determine how much power a system configuration requires and the available redundancy modes: http://tools.cisco.com/cpc/.
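To make the mode semantics concrete, here is a minimal sketch (not the Cisco power calculator) that estimates usable power under each redundancy mode. The function name is hypothetical, and the per-mode arithmetic is an assumption based on the usual N+1 / grid / full-redundancy interpretations; use the CCO power calculator above for authoritative figures.

```python
# Hypothetical helper: estimate usable power per redundancy mode.
# Assumes identical supplies split evenly across two input sources.

def available_power(num_supplies: int, capacity_w: int, mode: str) -> int:
    total = num_supplies * capacity_w
    if mode == "combined":         # no redundancy: all capacity usable
        return total
    if mode == "ps-redundant":     # N+1: tolerate loss of one supply
        return total - capacity_w
    if mode == "insrc-redundant":  # grid: tolerate loss of one input source
        return total // 2
    if mode == "redundant":        # full: tolerate either failure
        return min(total - capacity_w, total // 2)
    raise ValueError(f"unknown mode: {mode}")

# Example: two 6.0-kW supplies at high line (220 VAC, both inputs)
print(available_power(2, 6000, "redundant"))  # 6000 W usable
```

With two supplies, full redundancy leaves the capacity of a single supply usable, which is why the 9-slot chassis was sized with that budget in mind.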

 

The datasheet for the N7K-AC-6.0KW can be found at the following URL:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/ps9512/Data_Sheet_C78-437761_ps9402_Products_Data_Sheet.html

 

Second-Generation Nexus 7000 Supervisor Modules – N7K-SUP2

The Second-Generation Nexus 7000 Supervisor Modules (Figure 6) scale the control-plane and data-plane services for the Cisco Nexus 7000 Series Switches in scalable data center networks. They deliver the control-plane and management functions for the Cisco Nexus 7000 Series chassis.

 

The supervisor controls the Layer 2 and 3 services, redundancy capabilities, configuration management, status monitoring, power and environmental management, and more. It provides centralized arbitration to the system fabric for all line cards. The fully distributed forwarding architecture allows the supervisor to support transparent upgrades to I/O and fabric modules with greater forwarding capacity. Two supervisors are required for a fully redundant system, with one supervisor module running as the active device and the other in hot-standby mode, providing exceptional high-availability features in data center-class products.

 

Cisco Nexus 7000 Series Supervisor 2 Module

The module is based on a quad-core Intel Xeon processor with 12 GB of memory and scales the control plane by harnessing the flexibility and power of the quad cores.

 

Cisco Nexus 7000 Series Supervisor 2 Module Connectivity and Indicators

 

The Nexus 7000 9-slot chassis would be deployed with redundant supervisor modules, as required for high availability: one supervisor module is operationally active while the other serves as a hot standby.

 

The datasheet for the N7K-SUP2 can be found at the following URL:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-710881.html

 

Cisco Nexus 7009 Series Fabric-2 Modules – N7K-7009-FAB2

The Cisco Nexus 7000 Series Fabric-2 Modules for the Cisco Nexus 7000 9-Slot Series chassis are separate fabric modules that provide parallel fabric channels to each I/O and supervisor module slot. Up to five simultaneously active fabric modules work together delivering up to 550 Gbps per slot. The fabric module provides the central switching element for fully distributed forwarding on the I/O modules.

 

Switch fabric scalability is made possible through support for one to five concurrently active fabric modules, allowing performance to increase as needs grow. All fabric modules are connected to all module slots, and each additional fabric module increases the bandwidth available to every module slot, up to the system limit of five modules. The architecture supports lossless fabric failover, with the remaining fabric modules load-balancing traffic across all I/O module slots.
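The scaling described above is linear: 550 Gbps per slot across five modules implies roughly 110 Gbps per fabric module per slot. The short sketch below illustrates that arithmetic; the function and constant names are illustrative, not part of any Cisco tool.

```python
# Illustrative per-slot fabric bandwidth on a Nexus 7009 with Fabric-2 modules.
MAX_FABRIC_MODULES = 5
PER_MODULE_GBPS = 110  # 550 Gbps per slot / 5 modules

def per_slot_bandwidth_gbps(active_modules: int) -> int:
    if not 1 <= active_modules <= MAX_FABRIC_MODULES:
        raise ValueError("1 to 5 fabric modules supported")
    return active_modules * PER_MODULE_GBPS

print(per_slot_bandwidth_gbps(5))  # 550
```

This also shows why a fabric-module failure is graceful: dropping from five to four active modules reduces per-slot bandwidth to 440 Gbps rather than causing an outage.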

 

Cisco Nexus 7009 Fabric-2 Module

The datasheet for the N7K-7009-FAB2 can be found at the following URL:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/ps9512/data_sheet_c78-684211_ps9402_Products_Data_Sheet.html

 

A High-Level View of a Cloud-Supporting DC

Following is a high-level logical topology based on Cisco’s Virtualized Multi-tenant Data Center (VMDC) 2.x architecture solution set.

 

To be continued: Part-III will cover Unified Computing System integration, topology, and options for DCI (to OTV or not to OTV).

 

 
