Wednesday, 29 October 2014

Snort on Ubuntu 14.04 from Source Code with Barnyard2, Snort Report and ACID

Snort is an open source network intrusion detection system, capable of performing real-time traffic analysis and packet logging on IP networks. It can perform protocol analysis and content searching/matching, and can be used to detect a variety of attacks and probes, such as buffer overflows, stealth port scans, CGI attacks, SMB probes, OS fingerprinting attempts, and much more. Snort has three primary uses: it can be used as a straight packet sniffer like tcpdump, as a packet logger, or as a full-blown network intrusion detection/prevention system.
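To make that concrete, here is a rough sketch of what each mode looks like on the command line; the log directory, configuration path, and interface are placeholders, so adjust them to your own setup:

$ snort -v                                            # sniffer mode: print packet headers to the console

$ snort -dev -l /var/log/snort                        # packet logger mode: log decoded packets to a directory

$ snort -c /usr/local/snort/etc/snort.conf -i eth0    # NIDS mode: inspect traffic against the configured ruleset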

Main features introduced in 2.9.6-2.9.7:

·         Added additional support for Heartbleed detection within the SSL preprocessor to improve performance.

·         A new protected_content rule option that is used to match against content that is hashed. It can be used to obscure the full content of the rule from the administrator.

·         Protocol Aware Flushing (PAF) improvements for SMTP, POP, and IMAP to more accurately process different portions of email messages and file attachments.

·         Added ability to test normalization behavior without modifying network traffic.  When configured using na_policy_mode:inline-test, statistics will be gathered on packet normalizations that would have occurred, allowing less disruptive testing of inline deployments.

·         Added improved XFF support to HttpInspect. It is now possible to specify custom HTTP headers to use in place of 'X-Forwarded-For'. In situations where traffic may contain multiple XFF-like headers, it is possible to specify which headers hold precedence.

·         The HTTP Inspection preprocessor now has the ability to decompress DEFLATE and LZMA compressed flash content and DEFLATE compressed PDF content from http responses when configured with the new decompress_swf and decompress_pdf options. This enhancement can be used with existing rule options that already match against decompressed equivalents. Feature rich IPS mode including improvements to Stream for inline deployments. Additionally a common active response API is used for all packet responses, including those from Stream, Respond, or React. A new response module, respond3, supports the syntax of both resp & resp2, including strafing for passive deployments. When Snort is deployed inline, a new preprocessor has been added to handle packet normalization to allow Snort to interpret a packet the same way as the receiving host.

·         Use of a Data Acquisition API (DAQ) that supports many different packet access methods including libpcap, netfilterq, IPFW, and afpacket. For libpcap, version 1.0 or higher is now required. The DAQ library can be updated independently from Snort and is a separate module that Snort links. See README.daq for details on using Snort and the new DAQ.

·         Updates to HTTP Inspect to extract and log IP addresses from X-Forwarded-For and True-Client-IP header fields when Snort generates events on HTTP traffic.

·         A new rule option 'byte_extract' that allows extracted values to be used in subsequent rule options for isdataat, byte_test, byte_jump, and content distance/within/depth/offset (see the illustrative rule after this list).

·         Updates to SMTP preprocessor to support MIME attachment decoding across multiple packets.

·          Ability to "test" drop rules using Inline Test Mode. Snort will indicate in the unified2 or console event log that a packet would have been dropped if the policy mode had been set to inline.

·          Two new rule options to support base64 decoding of certain pieces of data and inspection of the base64 data via subsequent rule options.

·          Updates to the Snort packet decoders for IPv6 for improvements to anomaly detection.
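As an aside, the byte_extract option mentioned above can be illustrated with a sketch of a rule loosely based on the example in the Snort manual; the content string and SID here are made up purely for illustration:

alert tcp any any -> any any (msg:"byte_extract illustration"; \
    byte_extract:1,0,str_offset; byte_extract:1,1,str_depth; \
    content:"bad stuff"; offset:str_offset; depth:str_depth; \
    sid:1000001; rev:1;)

The first byte of the payload is read into str_offset and the second into str_depth, and those extracted values then constrain where the content match is allowed to occur.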

In the latest versions of Snort a few things about compiling have changed slightly: libdnet and the Data AcQuisition library (DAQ) need to be compiled separately. This post relates only to the compilation and installation of Snort 2.9.x.x from source code.

LAMP (Linux, Apache, MySQL and PHP) environment:

Installing Apache, PHP and MySQL: 

 

$sudo apt-get -y install apache2 libapache2-mod-php5 mysql-server mysql-common mysql-client php5-mysql libmysqlclient-dev php5-gd php-pear libphp-adodb php5-cli 
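Optionally, verify that the main LAMP components installed correctly before moving on:

$ apache2 -v

$ mysql --version

$ php -v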

 

Get required packages:

 

$ sudo apt-get -y install libwww-perl libnet1 libnet1-dev libpcre3 libpcre3-dev autoconf libcrypt-ssleay-perl libtool libssl-dev build-essential automake gcc make flex bison git

 

 

 

 Download and Install libdnet:

There are Ubuntu packages for libdnet, but I find this an easier way of installing it. Download the following file and install it with these commands from your download directory:

$ sudo mkdir /usr/local/snort

$cd /usr/local/snort

$ sudo wget http://libdnet.googlecode.com/files/libdnet-1.12.tgz

$ sudo tar xzvf libdnet-1.12.tgz

$ cd libdnet-1.12/

$sudo ./configure

$sudo  make

$sudo make install

$sudo ln -s /usr/local/lib/libdnet.1.0.1 /usr/lib/libdnet.1

 

 

 Installing and Downloading Data Acquisition API (DAQ): 

Snort 2.9.0 introduces the new Data Acquisition API. We’ll need to download and install it before we set up the core Snort package. Download that package to your Snort machine:

If you need to access the /usr/local/snort via the GUI for copying of files, then go to root and type in:

 

$ gksudo nautilus

This should give a view of the root folders.  Install the package using the following commands: 

$cd /usr/local/snort

$ sudo tar zxvf daq-2.0.4.tar.gz

$cd daq-2.0.4

$ sudo ./configure

$ sudo make

$sudo make install

 Download and Install libpcap:

 $cd /usr/local/snort

 $ sudo wget http://www.tcpdump.org/release/libpcap-1.3.0.tar.gz

$ sudo tar zxvf libpcap-1.3.0.tar.gz

$cd libpcap-1.3.0

$ sudo ./configure

$sudo make

$sudo make install

$ sudo sh -c 'echo "/usr/local/lib" >> /etc/ld.so.conf'

$ sudo ldconfig -v
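If the libraries were installed and the linker cache updated correctly, they should now show up with a quick (optional) check:

$ ldconfig -p | grep -E 'libdnet|libpcap'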

Download and Install Snort: 

While we could install the Snort packages from the Ubuntu 14.04 repositories, that doesn't guarantee the latest and greatest version of Snort, so we compile and install from source code. Go to http://www.snort.org/snort-downloads and download the newest stable version.

The following steps will install Snort into /usr/local/snort but you can change this to a directory of your liking by modifying the paths below. 

Open a command prompt and issue the following commands from the directory where you downloaded the Snort tarball:

$ sudo tar zxf snort-2.9.6.1.tar.gz

$cd snort-2.9.6.1

$ sudo ./configure --prefix=/usr/local/snort --enable-sourcefire

$ sudo make

$ sudo make install

$ sudo mkdir /var/log/snort

$ sudo mkdir /var/snort

$ sudo groupadd snort

$ sudo useradd -g snort snort

$ sudo chown snort:snort /var/log/snort
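As a quick optional sanity check, confirm that the new binary runs and that it can see the DAQ modules installed earlier:

$ /usr/local/snort/bin/snort -V

$ /usr/local/snort/bin/snort --daq-list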

Download the Latest Snort Rules: 

The next step is to download the latest Snort ruleset. You’ll need to log into the Sourcefire site in a browser in order to get the file. The latest rules are located here: https://www.snort.org/snort-rules

There are two sections on this page – one for VRT subscribers and one for registered users. The only difference is that the registered user rule files are 30 days older than those for subscribers.

Download this file to your IDS machine: snortrules-snapshot-2960.tar.gz. 

Open a command prompt in the directory where you downloaded the Snort ruleset file and issue the following commands: 

$ sudo tar zxf snortrules-snapshot-2960.tar.gz -C /usr/local/snort

$ sudo mkdir /usr/local/snort/lib/snort_dynamicrules

$ sudo cp /usr/local/snort/so_rules/precompiled/Ubuntu-12-4/x86-64/2.9.5.3/* /usr/local/snort/lib/snort_dynamicrules

$ sudo touch /usr/local/snort/rules/white_list.rules

$ sudo touch /usr/local/snort/rules/black_list.rules

$ sudo ldconfig -v

Now we need to edit the snort.conf configuration file:

 $ sudo vi /usr/local/snort/etc/snort.conf

 var WHITE_LIST_PATH /usr/local/snort/rules

var BLACK_LIST_PATH /usr/local/snort/rules 

dynamicpreprocessor directory /usr/local/snort/lib/snort_dynamicpreprocessor/

dynamicengine /usr/local/snort/lib/snort_dynamicengine/libsf_engine.so

dynamicdetection directory /usr/local/snort/lib/snort_dynamicrules 

Replace the default unified2 example line (output unified2: filename merged.log, limit 128, nostamp, mpls_event_types, vlan_event_types) with:

output unified2: filename snort.u2, limit 128

:wq!
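Before wiring everything together, the edited configuration can be validated in Snort's self-test mode, using the same paths, user, and interface as above; Snort parses snort.conf, loads the rules, and exits:

$ sudo /usr/local/snort/bin/snort -T -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth0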

 Download and Install Barnyard2: 

Barnyard2 improves the efficiency of Snort by reducing the load on the main detection engine. It reads Snort's unified2 logging output files and enters them into a database. If the database is unavailable, Barnyard2 will insert the buffered data when the database comes back online, so no alerts are lost.

$ sudo git clone http://github.com/firnsy/barnyard2.git barnyard2

$cd barnyard2

$ sudo autoreconf -fvi -I ./m4

$ sudo ./configure --with-mysql --with-mysql-libraries=/usr/lib/x86_64-linux-gnu

$ sudo make

$ sudo make install

$ sudo cp etc/barnyard2.conf /usr/local/snort/etc

$ sudo mkdir /var/log/barnyard2

$ sudo chmod 666 /var/log/barnyard2

$ sudo touch /var/log/snort/barnyard2.waldo

$ sudo chown snort:snort /var/log/snort/barnyard2.waldo

We need to create the MySQL database and the database schema. This will need the MySQL root password that was set earlier:

$ echo "create database snort;" | mysql -u root -p

$ sudo mysql -u root -p -D snort < ./schemas/create_mysql 

Next create an additional MySQL user for Snort to use as it’s not a good idea to run the daemon as root. Remember the password that you enter below. Also please note the single quotes around the password in addition to the double quotes around the entire echo statement: 

$ echo "grant create, insert, select, delete, update on snort.* to snort@localhost identified by 'bhuvi';" | mysql -u root -p
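Optionally, confirm that the new account can reach the schema, using the password chosen above:

$ mysql -u snort -p -D snort -e "show tables;"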

Modify the Barnyard2 configuration file with the following command: 

$ sudo vi /usr/local/snort/etc/barnyard2.conf

config  reference_file: /usr/local/snort/etc/reference.config

config  classification_file: /usr/local/snort/etc/classification.config

config  gen_file: /usr/local/snort/etc/gen-msg.map

config  sid_file: /usr/local/snort/etc/sid-msg.map

config hostname: localhost

config interface: eth0

output database: log, mysql, user=snort password=bhuvi dbname=snort host=localhost

 Testing Snort:

 You can test to see if Snort will run by using this command: 

$ sudo /usr/local/snort/bin/snort -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth0

A message saying “Commencing packet processing.” should be visible. You can cancel it by hitting Control-C. If it fails to initialize please see the forums at snort.org to determine the problem. It will usually be something in the configuration file. 
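Barnyard2 can be given the same kind of smoke test before it is wired into rc.local, using the same flags that appear in the startup line below (stop it with Control-C):

$ sudo /usr/local/bin/barnyard2 -c /usr/local/snort/etc/barnyard2.conf -d /var/log/snort -f snort.u2 -w /var/log/snort/barnyard2.waldo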

To set Snort to start automatically on your machine edit the rc.local file with the following command:

 $sudo vi /etc/rc.local

ifconfig eth0 up

/usr/local/snort/bin/snort -D -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth0

/usr/local/bin/barnyard2 -c /usr/local/snort/etc/barnyard2.conf -d /var/log/snort -f snort.u2 -w /var/log/snort/barnyard2.waldo -D

:wq!

 Save the file and exit. Then either reboot or use the following command to start Snort:

 $sudo /etc/init.d/rc.local start

 Monitoring

Download and Set up Snort Report: The next step is to download and configure Snort Report. It’s available at http://www.symmetrixtech.com under the downloads section.

 

At the time of authoring this, the current version was 1.3.4. Download snortreport-1.3.4.tar.gz to a directory on your IDS machine. Open a command prompt in the directory to which you downloaded Snort Report and issue the following command:

$sudo tar zxvf snortreport-1.3.4.tar.gz -C /var/www/html 

Now we need to modify the Snort Report configuration file to reflect your MySQL login info and location of the jpgraph libraries. Change the file by editing srconf.php with this command: 

$ sudo vi /var/www/html/snortreport-1.3.4/srconf.php

$pass = "bhuvi";
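For reference, the database settings in srconf.php end up looking roughly like the sketch below; variable names can differ slightly between Snort Report releases, so keep the file's existing structure and just fill in the values:

$server = "localhost";   // MySQL server that Barnyard2 writes to
$user   = "snort";       // database user created with the GRANT statement earlier
$pass   = "bhuvi";       // the password chosen for that user
$dbname = "snort";       // database holding the Snort alerts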

 

Install JPGraph:

 

 

$cd /var/www/html

$ sudo wget -O jpgraph-3.5.0b1.tar.gz "http://jpgraph.net/download/download.php?p=5"

$sudo tar xvzf jpgraph-3.5.0b1.tar.gz

$sudo rm -rf jpgraph-3.5.0b1.tar.gz

 Installing ADODB: 

$cd /var/www/html

$sudo wget  http://kaz.dl.sourceforge.net/project/adodb/adodb-php5-only/adodb-518-for-php5/adodb518a.tgz

$sudo tar xzf adodb518a.tgz

$sudo rm adodb518a.tgz

 Installing and configuring Acid: 

$cd /var/www/html

$sudo wget  http://acidlab.sourceforge.net/acid-0.9.6b23.tar.gz

$sudo tar xzf acid-0.9.6b23.tar.gz

$sudo rm acid-0.9.6b23.tar.gz

 

$sudo vi /var/www/html/acid/acid_conf.php 

$DBlib_path = "/var/www/html/adodb518a";

$alert_dbname = "snort";

$alert_host = "localhost";

$alert_port = "";

$alert_user = "snort";

$alert_password = "bhuvi";

$archive_dbname = "snort";

$archive_host = "localhost";

$archive_port = "";

$archive_user = "snort";

$archive_password = "bhuvi";

$ChartLib_path = "/var/www/html/jpgraph-3.5.0b1/src";

 :wq!
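With the ACID configuration saved, restart Apache so it picks up the newly added files (standard Ubuntu 14.04 service command):

$ sudo service apache2 restart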

Now go to http://yourhost/acid/acid_main.php. You will get a message that looks like this in your browser:

 

Please click the button that says "Create Acid AG".

Then return to the ACID main page; it will show the alert details recorded by Snort. We are done!

 

 

Thursday, 16 October 2014

Data Center Design: Example Overview Part-II:

Virtual Device Context (VDC) Design

Continuing from Part-I, in the main DC, the Core, DC Aggregation, and DCI modules would be deployed on the same physical devices – the Nexus 7010 Core Switches. The separation between these modules would be done using the Virtual Device Context (VDC) feature of the Nexus 7000 platform. The main DC Core switches were therefore defined to have the following VDCs:

·         Core VDC

·         DC-AGG VDC

·         OTV VDC

In  the main office building, the Core, DC Aggregation, and DCI modules would be deployed on the same physical devices – the Nexus 7009 Core Switches. The main office building core switches will therefore have the following VDCs:

·         Core VDC

·         DC-AGG VDC

·         OTV VDC

 

VDC Overview

Starting with Supervisor 2/Cisco NX-OS 6.1, the VDC feature allows up to four (4) separate virtual switches, and one (1) Admin VDC to be configured within a single Nexus 7000. The Admin VDC is optional and the creation of the admin VDC is not a required task for device operations.

Architecturally, the VDCs run on top of a single NX-OS kernel and OS infrastructure. Each VDC represents a separate instance of the control plane protocols, as illustrated below:

 

VDC Independence

A VDC runs as a separate logical entity within the physical device, maintains its own unique set of running software processes, has its own configuration, and can be managed by a separate administrator.

 

VDCs virtualize the control plane as well, which includes all those software functions that are processed by the CPU on the active supervisor module. The control plane supports the software processes for the services on the physical device, such as the routing information base (RIB) and the routing protocols. When a VDC is created, the Cisco NX-OS software takes several of the control plane processes and replicates them for the VDC. This replication of processes allows VDC administrators to use virtual routing and forwarding instance (VRF) names and VLAN IDs independent of those used in other VDCs. Each VDC administrator essentially interacts with a separate set of processes, VRFs, and VLANs.

 

All the Layer 2 and Layer 3 protocol services run within a VDC. Each protocol service started within a VDC runs independently of the protocol services in other VDCs. The infrastructure layer protects the protocol services within a VDC so that a fault or other problem in a service in one VDC does not impact other VDCs. The Cisco NX-OS software creates these virtualized services only when a VDC is created.

 

Each VDC has its own instance of each service. These virtualized services are unaware of other VDCs and only work on resources assigned to that VDC. Only a user with the network-admin role can control the resources available to these virtualized services.

 

Although CPU resources (on the Supervisor module) are not truly independent between the VDCs, the pre-emptive multi-tasking nature of the OS does ensure that no single process can hog the CPU, including processes across VDCs. Even if CPU usage is driven up, all processes will have fair access to CPU clock cycles in order to function.

 

Memory is not controlled on a per-VDC basis; instead it is controlled at a per-process level. The services that run on the platform have limits on their maximum accessible memory that are enforced by the kernel. Therefore, any single instance of a process can only access the amount of memory defined for it. This prevents an errant process or a memory leak in a process from consuming a significant amount of overall system memory.

 

There are additional resources that are applied system-wide (allocated to all VDCs as a whole), thereby offering a lesser degree of operational independence between the VDCs. These global resources are discussed in the following section.

 

Default VDC

The physical device always has one VDC, the default VDC (VDC 1). When a user first logs into a new Cisco NX-OS device, they are placed in the default VDC. A user must be in the default VDC to create, change attributes of, or delete a non-default VDC. As mentioned earlier, the Cisco NX-OS software can support up to four VDCs (plus one Admin VDC), including the default VDC. This means that a user can create up to three additional VDCs. A user with network-admin role privileges can manage the physical device and all VDCs from the default VDC.
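For reference, the current VDCs and their resource usage can be listed from the default VDC with the following commands (output omitted; the Nexus01DC hostname matches the configuration examples later in this post):

!

Nexus01DC# show vdc

Nexus01DC# show vdc resource

!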

VDC login Process

 

VDC Resources

In respect of allocation to VDCs, Nexus switch resources are divided into three main categories: Global, Dedicated, and Shared. VDC 1 is the default VDC and controls the creation, deletion, and resource allocation of all other VDCs.

 

Global Resources are assigned to and controlled by the default VDC:

• Boot image configuration

• Software feature licensing

• Ethanalyzer session

• Control Plane Policing (CoPP) configuration

• Quality of Service (QoS) queuing configuration

• Allocation of resources to other VDCs

• Console port

• Connectivity Management Processor (CMP)

• Network Time Protocol (NTP) Server configuration

• Port channel hashing algorithm configuration

 

Every VDC has its own dedicated resources, which are assigned solely to that VDC:

• Physical interfaces

• Layer 3 and layer 2 protocol stacks

• Per-VDC management configuration

 

Shared resources are available to all VDCs:

• Out of band management interface (interface mgmt0). Each VDC has its own IP address on this interface.

 

NX-OS allows allocation of the following resources to be controlled by the configuration of minimum and maximum resource guarantees:

• VLAN

• SPAN sessions

• VRFs

• Port channels

• Route memory

 

 

Route Memory

While there is no exact mapping between the number of routes that can be stored per megabyte of memory allocated, 16 megabytes of route memory is enough to store approximately 11,000 routes with 16 next hops each. The default memory allocation to IPv4 routes is 58 megabytes.

 

The following command is helpful to understand the memory consumption for unicast routing tables. Using the show routing memory estimate command on the Nexus 7000, it can be seen that a VDC using the default resource template will support approximately 12,000 routes (assuming 8 next-hops for each route):

 

!

Switch# sh routing memory estimate route 12000 next-hops 8

 

Shared memory estimates:

Current max 8 MB; 6526 routes with 16 nhs

in-use 1 MB; 97 routes with 1 nhs (average)

Configured max 8 MB; 6526 routes with 16 nhs

Estimate 8 MB; 12000 routes

!

It should also be noted that the route memory resource allocation does not permit the use of different values for the maximum and minimum memory limits.

To allow a VDC to communicate with other devices in the network, the administrator must explicitly allocate interfaces to a VDC, except for the default VDC that will have control over all interfaces that are not otherwise allocated.

 

At the time of authoring this, for the relevant NX-OS software release, the F2e modules needed to be in their own dedicated VDC with no other module types being part of the same VDC. This restriction was removed in the next NX-OS release (6.2).

 

Allocation of interfaces should be done based on the type of hardware in use. For example, an 'M1-Series' 32-port 10G line card doesn't allow ports belonging to the same port group to be part of different VDCs. A dedicated line-card-based approach for VDCs offers multiple benefits, with the only trade-off being the consumption of extra line cards:

• Efficient usage of line card hardware resources such as the MAC table and FIB TCAM. The overall chassis-level MAC table/FIB TCAM capacity can be scaled to three times the 128K limit.

• Brings the design much closer to a secure architecture by limiting dependencies between the VDC environments.

Note: There is no port group restriction on M2 Series modules. Any port in M2 Series modules can be placed in any VDC.

 

Communication Between VDCs

The Cisco NX-OS software does not support direct communication between VDCs on a single physical device. There must be a physical connection from a port allocated to one VDC to a port allocated to the other VDC to allow the VDCs to communicate. Each VDC has its own VRFs for communicating with other VDCs.

 

Customer Data Center building Network VDC Design Summary

The following points summarize the VDC design for the main Data Center building Network:

·         Core Nexus 7010 switches will be configured with three VDCs:

o (Default) VDC for Core module (Core VDC)

o VDC for DC Aggregation module (DC-AGG VDC)

o VDC for DCI module (OTV VDC)

·         Each VDC will be allocated dedicated physical interfaces:

o The M1L linecards will be allocated to the Core VDC

o The M2 linecard will have most of its ports in the Core VDC

o The F2e line cards (2 on each switch) will be allocated to the DC-AGG VDC

o Three ports from the M2 line cards will be allocated to the OTV VDC

·         Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3 links, one to the local DC-AGG VDC and the second to the DC-AGG VDC in the second Core switch.

·         Core VDC in each Nexus switch will have one point-to-point Tengigabit Ethernet Layer-3 link to the local OTV VDC.

·         Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3 links, one to each Distribution switch. As the Distribution switches will be deployed using VSS the Core VDC will only see the same OSPF neighbor across both links.

·         The links between the Core VDC and the WAN Edge routers will be configured as point-to-point Layer-3 links.

·         The inter DC links will be terminated on the Core VDC, one link on each switch. These will be configured as point-to-point Layer-3 links with a /29 subnet.

·         The DC-AGG VDC in each Core switch will be dual homed to the two Core VDCs via point-to-point Layer-3 links.

·         The DC-AGG VDC in each Core switch will have separate vPC interconnects with both the local OTV VDC and the OTV VDC on the second switch. This will be an 802.1q trunk carrying VLANs requiring Layer-2 extension.

·         Core firewalls will also be connected to the DC-AGG VDCs via vPC. This will be an 802.1q trunk carrying all DC VLANs.

·         F5 load balancers will be connected to the DC-AGG VDCs via vPC. This port-channel will be made part of the respective VLAN as an access switchport.

·         Two ports will be interconnecting the OTV VDC with the Aggregation layer of the Data Center as follows:

o One optical Tengigabit Ethernet link will be connected to the local DC-AGG VDC

o A second optical Tengigabit Ethernet link will be connected to the DC-AGG VDC in the other Core switch

·         The VDCs will use the default VDC resource allocation template.

 

In order to provide an acceptable level of resiliency in the DC-AGG VDC it was decided that one F2e module will be added to each Core switch. This means a total of two F2e modules will be deployed and these will be obtained by removing one F2e module from each of the Aggregation switches in Row 7 of the Data Center.

 

Customer main office building Network VDC Design Summary:

The following points summarize the proposed VDC design for the main office building Network:

·         Core Nexus 7009 switches will be configured with three VDCs:

o (Default) VDC for Core module (Core VDC)

o VDC for DC Aggregation module (DC-AGG VDC)

o VDC for DCI module (OTV VDC)

·         Each VDC will be allocated dedicated physical interfaces:

o The M1L linecards will be allocated to the Core VDC

o The M2 linecard will have most of its ports in the Core VDC

o The F2e line cards (1 on each switch) will be allocated to the DC-AGG VDC

o Three ports from the M2 line cards will be allocated to the OTV VDC

·         Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3  links, one to the local DC-AGG VDC and the second to the DC-AGG VDC in the second Core switch.

·         Core VDC in each Nexus switch will have one point-to-point Tengigabit Ethernet Layer-3 link to the local OTV VDC.

·          Core VDC in each Nexus switch will have two point-to-point Tengigabit Ethernet Layer-3 links, one to each Distribution switch.

·         The inter DC links will be terminated on the Core VDC, one link on each switch. These will be configured as point-to-point Layer-3 links with a /29 subnet.

·         The DC-AGG VDC in each Core switch will be dual homed to the two Core VDCs via point-to-point Layer-3 links.

·         The DC-AGG VDC in each Core switch will have separate vPC interconnects with both the local OTV VDC and the OTV VDC on the second switch. This will be an 802.1q trunk carrying VLANs requiring Layer-2 extension.

·         Two ports will be interconnecting the OTV VDC with the Aggregation layer of the Data Center as follows:

o One optical Tengigabit Ethernet link will be connected to the local DC-AGG VDC

o A second optical Tengigabit Ethernet link will be connected to the DC-AGG VDC in the other Core switch

·         The VDCs will use the default VDC resource allocation template.

 

Using The Default VDC

Generally it is recommended to dedicate the default VDC as an 'Admin' VDC and not run any data-path traffic through it. This is mainly because this VDC is used to create new VDCs and allocate resources to them, as well as to manage the configuration of those resources that can only be managed from the default VDC (e.g. CoPP configuration). Access to the default VDC means the ability to create or modify existing VDCs.

 

If an environment does use the default VDC for data traffic, then management should ensure that operational staff accessing the Core VDC for normal operations are allocated the 'vdc-admin' role and not the 'network-admin' role.

 

VDC Naming Convention:

• (Default) VDC for Core network (Core VDC)

• VDC for DC Aggregation network (DC-AGG VDC)

• VDC for OTV (OTV VDC)

 

The name of Core VDC is the hostname of the switch and consequently there is no explicit configuration required for naming the Core VDC.

 

The DC Aggregation VDC and the OTV VDC would be created using the following names: DC-AGG and OTV respectively.

 

By default the hostname of a non-default VDC is the hostname of the switch with the VDC name tagged onto the end. This means that, by default, the hostname displayed for the OTV VDC will be a combination of the Core VDC hostname with 'OTV' tagged onto the end. This behavior can be modified so that only "DC-AGG" or "OTV" is displayed when logging into the VDC, using the following configuration template:

 

!

configure terminal

no vdc combined-hostname

!

 

VDC Interface Allocation

By default all interfaces on a Nexus 7000 are part of the Default VDC. For this specific project, the Default VDC was planned to be used for the Core module so there was no requirement to explicitly allocate interfaces to the Core module VDC as it would be the Default VDC.

 

The M1L and M2 line cards would have most of their ports allocated to the Core VDC. Three ports from the M2 line cards would be allocated to the OTV VDC. The F2e line cards (2 on each Core switch in main DC, 1 on each Core switch in main office building) would be allocated to the DC-AGG VDC.

 

For the DC-AGG and OTV VDCs, since they will be new VDCs (VDC 2 and 3 respectively), interfaces would need to be allocated explicitly. The Core Nexus 7010/7009 would be installed with M1, M2, and F2 linecards. The F2 linecards need to be installed in a VDC of their own. Therefore, if all linecards are installed and the device is booted for the first time, by default the N7K boots with the M linecards in the default VDC and the F2 linecards disabled.

 

Shared Management Interface

The Nexus 7000 is equipped with a dedicated management interface port on each Supervisor engine. Only the port on the active Supervisor is available. By default, this management interface resides within a special 'management' VRF, which is completely separate from the default VRF and any other VRFs that may be created. It is not possible to move the management Ethernet port to any other VRF, nor to assign other system ports to the management VRF. Because of this dedicated management VRF, the management Ethernet port cannot be used for data traffic, and it cannot be trunked either.

One feature of the Supervisor management interface is that it exists within all VDCs (rather than a single VDC only). The management interface also carries a unique IP address in every VDC. In this way, a distinct management IP address can be provided for administration of a single VDC.

 

In this case, the management interface on the Nexus 7010/7009 Core switches was to be used for their management. Each VDC would have its own IP Address for the management interface.
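As a sketch of what that per-VDC management addressing would look like, the example below assigns an address to mgmt0 inside the DC-AGG VDC and a default route in the management VRF; the 192.0.2.x addresses are placeholders rather than the project's actual addressing, and the prompts assume the default combined hostname:

!

Nexus01DC# switchto vdc DC-AGG

Nexus01DC-DC-AGG# configure terminal

Nexus01DC-DC-AGG(config)# interface mgmt 0

Nexus01DC-DC-AGG(config-if)# ip address 192.0.2.11/24

Nexus01DC-DC-AGG(config-if)# exit

Nexus01DC-DC-AGG(config)# vrf context management

Nexus01DC-DC-AGG(config-vrf)# ip route 0.0.0.0/0 192.0.2.1

!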

 

VDC License Requirements: Creating non-default VDCs requires a special license.

 

VDC Configuration

VDC creation has the following prerequisites:

• Log in to the default VDC with a username that has the network-admin user role.

• Make sure the appropriate license is installed.

• Assign a name for the VDC.

• Allocate resources available on the physical device to the VDCs.

• Configure an IPv4 or IPv6 address to use for connectivity to the VDC.

 

Creating a VDC Resource Template

This step was not required for the VDC configuration in the customer's networks as the default resource template would be used. However, if required in the future, the following steps were recommended to be used to define a new resource template:

 

!

vdc resource template <Template-Name>

limit-resource vlan minimum 16 maximum 4094

limit-resource monitor-session minimum 0 maximum 2

limit-resource vrf minimum 16 maximum 500

limit-resource port-channel minimum 16 maximum 256

limit-resource u4route-mem minimum 16 maximum 128

limit-resource u6route-mem minimum 4 maximum 4

limit-resource m4route-mem minimum 8 maximum 8

limit-resource m6route-mem minimum 2 maximum 2

!

Creating a VDC

The example below shows how to create the DC-AGG and OTV VDCs on the Nexus 7010 switches. The same template can be used to create the VDCs on the 7009 switches:

!

Nexus01DC# configure terminal

Nexus01DC(config)# vdc DC-AGG

Nexus01DC(config)# vdc OTV

!

 

Allocating Interfaces to a VDC

The following example shows how to allocate resources to a VDC:

 

!

Nexus01DC# configure terminal

Nexus01DC(config)# vdc <VDC-Name>

Nexus01DC(config-vdc)# allocate interface ethernet <slot/port>

! Allocate interfaces as needed

!

 

Applying Resource Template to a VDC

Below shows how to apply a resource template to a VDC:

!

Nexus01DC(config)# vdc <VDC-Name>

Nexus01DC(config-vdc)# template <Template-Name>

!

Initializing a New VDC

A newly created VDC is much like a new physical device. To access a VDC, a user must first initialize it. The initialization process includes setting the VDC admin user account password and optionally running the setup script. The setup script helps perform basic configuration tasks such as creating more user accounts and configuring the management interface.

 

The VDC admin user account in the non-default VDC is separate from the network admin user account in the default VDC. The VDC admin user account has its own password and user role.

!

Nexus01DC# switchto vdc DC-AGG

!

 

VDC resource limits can be changed by applying a new VDC resource template anytime after the VDC creation and initialization. Changes to the limits take effect immediately except for the IPv4 and IPv6 route memory limits, which take effect after the next VDC reset, physical device reload, or physical device stateful switchover.

 

VDC configuration should be done prior to applying any other configuration to the Nexus 7000 switches.

 

Saving VDC Configuration

From within a VDC, a user with the vdc-admin or network-admin role can save the VDC configuration to the startup configuration. However, a user can also save the configuration of all VDCs to their startup configurations from the default VDC:

!

switchto vdc OTV

copy running-config startup-config

switchback

copy running-config startup-config vdc-all

!

 

To be continued…Next: Fabric Path in the DC and Cloud Infrastructure considerations

 
