Monday, 10 November 2014

Cisco ACI Fabric & Data Center Design Basics-Part I

What is the starting point for designing a Data Center (DC)?


The main question: is there going to be virtualization or not? In a modern-day DC, virtualization is not only a must but enabled almost by default. If the DC is going to be virtualized, is the requirement a Layer 2 fabric with an end-to-end protocol (such as FabricPath or VXLAN), or is it more for a Layer 3 fabric running plain IP?


Applications?


·         Fabric based: a virtualized DC, for example, with FabricPath or VXLAN

·         IP Based:

§  Big Data

§  Financial

§  Massive Scale

·         Non-IP based: High Performance Computing (HPC), e.g., raw Ethernet, RDMA over Ethernet, etc.


Hence, the main thing to address is the applications. What applications are going to run in the DC? In a virtualized DC, the first and foremost consideration is the applications running on virtual machines. From a high-level point of view, there are two main groups of applications: IP based and non-IP based.


Examples of IP Based DCs are as follows:


·         Big Data: Big Data has a distributed workload with a huge amount of east-west traffic to process all the analytics that the Big Data process requires.


·         Financial or Ultra-Low-Latency (ULL) applications: Typically, in a financial environment, latency between two points is critical, and minimal switch latency is required. It is necessary to reduce the number of devices in the network because each device introduces a little bit of latency. Based on the ULL requirements, this can be a 2-tier, 3-tier or even a collapsed single-tier network environment with two ToR switches, one of them for redundancy.


·         Massive Scale Applications: Typically found in very large DCs with numerous different clusters.


Non-IP based applications: High Performance Computing (HPC) over raw Ethernet or RDMA over Ethernet, i.e., a Layer 2 fabric.


Storage?: The next question, after applications, is what kind of storage is being used. Will it be Fibre Channel (FC), FCoE, or IP? Will the storage be centralized or distributed? The devices and the design of the DC will typically depend on the answers to the storage and SAN-type questions.


Physical Constraints: Power, cooling and space. Will there be large modular chassis or smaller devices? Will there be PODs, such as FlexPods, which are typically one logical unit running all the different applications that are critical for an elevated user experience? In a POD environment, the portal that the user connects to connects, in turn, to a DC. One POD is a number of servers, storage devices and switches. A typical best practice is to start small with a POD.


Application Centric Infrastructure (ACI) Fabric Overview


Cisco Application Centric Infrastructure (ACI) fabrics use the Cisco Nexus 9000 Series Switches as the core of the transport system. The Cisco Nexus 9000 Series was designed from the foundation to meet the rapidly changing requirements of data center networks, while enabling the advanced capabilities of Cisco ACI. At the physical layer, the Cisco ACI fabric consists of a leaf-and-spine design, or Clos network. This design is well suited to the east-west traffic patterns of modern data centers, moving traffic between application tiers or components. Figure 1 shows a typical spine-and-leaf design.



As shown in Figure 1, with this design each leaf connects to each spine, and no connections are created between pairs of leafs or pairs of spines. Leaf switches are used for all connectivity outside the fabric, including servers, service devices, and other networks such as intranets and the Internet.


With this architecture a spine switch provides cross-sectional bandwidth between leaf switches, plus additional redundancy. Bandwidth is determined by the number of spines and number of links to each spine. Redundancy is dictated by the amount of bandwidth lost in the event of a spine failure. In the topology in the figure, a single spine failure would reduce overall bandwidth and paths by 25 percent because of the use of four spines.
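The arithmetic above can be sketched quickly. The numbers below (four spines, one 40-Gbps link from a leaf to each spine) are illustrative assumptions, not figures from a specific design:

```shell
# Illustrative only: per-leaf uplink bandwidth and the share lost when
# one spine fails, assuming 4 spines and one 40-Gbps link to each.
spines=4
links_per_spine=1
gbps=40
total=$((spines * links_per_spine * gbps))   # 160 Gbps of uplink per leaf
loss_pct=$((100 / spines))                   # one spine down = 25% loss
echo "Per-leaf uplink: ${total} Gbps; one spine failure loses ${loss_pct}%"
```

With four spines, losing one spine removes a quarter of the paths and bandwidth, matching the 25 percent figure above.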


Above this physical layer Cisco ACI uses a controller, the Cisco Application Policy Infrastructure Controller (APIC), to manage the data center network and its policy centrally. Cisco APIC not only provides central management and automation, but also a policy model that maps application requirements directly onto the network as a cohesive system for application delivery. For more information, see the Cisco APIC documentation.


This model provides full automation for the deployment and management of applications end to end, including Layer 4 through 7 policy, providing a single policy design, deployment, and monitoring point for applications. For more information, see the Cisco documentation on ACI and Layer 4 through 7 services.


The Cisco Application Policy Infrastructure Controller (APIC) API enables applications to directly connect with a secure, shared, high-performance resource pool that includes network, compute, and storage capabilities. The following figure provides an overview of the APIC.

The Cisco Application Centric Infrastructure (ACI) fabric includes Cisco Nexus 9000 Series switches with the APIC to run in the leaf/spine ACI fabric mode. These switches form a “fat-tree” network by connecting each leaf node to each spine node; all other devices connect to the leaf nodes. The APIC manages the ACI fabric. The recommended minimum configuration for the APIC is a cluster of three replicated hosts.

The APIC manages the scalable ACI multitenant fabric. The APIC provides a unified point of automation

and management, policy programming, application deployment, and health monitoring for the fabric. The APIC, which is implemented as a replicated synchronized clustered controller, optimizes performance, supports any application anywhere, and provides unified operation of the physical and virtual infrastructure. The APIC enables network administrators to easily define the optimal network for applications. Data center operators can clearly see how applications consume network resources, easily isolate and troubleshoot application and infrastructure problems, and monitor and profile resource usage patterns.


APIC fabric management functions do not operate in the data path of the fabric. The following figure shows an overview of the leaf/spine ACI fabric.


The ACI fabric provides consistent low-latency forwarding across high-bandwidth links (40 Gbps, with a 100-Gbps future capability). Traffic with the source and destination on the same leaf switch is handled locally, and all other traffic travels from the ingress leaf to the egress leaf through a spine switch. Although this architecture appears as two hops from a physical perspective, it is actually a single Layer 3 hop because the fabric operates as a single Layer 3 switch.
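As a purely conceptual sketch (not an actual fabric interface), the forwarding decision reduces to a same-leaf check; the leaf names are hypothetical:

```shell
# Conceptual sketch of the ACI forwarding decision; leaf names are
# made-up placeholders. Same-leaf traffic stays local, all other
# traffic crosses a spine, yet counts as a single Layer 3 hop.
src_leaf="leaf1"
dst_leaf="leaf2"
if [ "$src_leaf" = "$dst_leaf" ]; then
  echo "forward locally on ${src_leaf}"
else
  echo "ingress ${src_leaf} -> spine -> egress ${dst_leaf}"
fi
```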


The ACI fabric object-oriented operating system (OS) runs on each Cisco Nexus 9000 Series node. It enables programming of objects for each configurable element of the system.


The ACI fabric OS renders policies from the APIC into a concrete model that runs in the physical infrastructure. The concrete model is analogous to compiled software; it is the form of the model that the switch operating system can execute. The figure below shows the relationship of the logical model to the concrete model and the switch OS.


All the switch nodes contain a complete copy of the concrete model. When an administrator creates a policy in the APIC that represents a configuration, the APIC updates the logical model. The APIC then performs the intermediate step of creating a fully elaborated policy that it pushes into all the switch nodes where the concrete model is updated.

Note: The Cisco Nexus 9000 Series switches can only execute the concrete model. Each switch has a copy of the concrete model. If the APIC goes offline, the fabric keeps functioning, but modifications to the fabric policies are not possible.

The APIC is responsible for fabric activation, switch firmware management, network policy configuration, and instantiation. While the APIC acts as the centralized policy and network management engine for the fabric, it is completely removed from the data path, including the forwarding topology. Therefore, the fabric can still forward traffic even when communication with the APIC is lost.


The Cisco Nexus 9000 Series switches offer modular and fixed 1-, 10-, and 40-Gigabit Ethernet switch configurations that operate in either Cisco NX-OS stand-alone mode for compatibility and consistency with the current Cisco Nexus switches or in ACI mode to take full advantage of the APIC's application policy-driven services and infrastructure automation features.


ACI Fabric Behaviour

The ACI fabric allows customers to automate and orchestrate scalable, high performance network, compute and storage resources for cloud deployments. Key players who define how the ACI fabric behaves include the following:

·         IT planners, network engineers, and security engineers

·         Developers who access the system via the APIC APIs

·         Application and network administrators

The Representational State Transfer (REST) architecture is a key development method that supports cloud computing. The ACI API is REST-based. The World Wide Web represents the largest implementation of a system that conforms to the REST architectural style.


Cloud computing differs from conventional computing in scale and approach. Conventional environments include software and maintenance requirements with their associated skill sets that consume substantial operating expenses. Cloud applications use system designs that are supported by a very large scale infrastructure that is deployed along a rapidly declining cost curve. In this infrastructure type, the system administrator, development teams, and network professionals collaborate to provide a much higher valued contribution. In conventional settings, network access for compute resources and endpoints is managed through virtual LANs (VLANs) or rigid overlays, such as Multiprotocol Label Switching (MPLS), that force traffic through rigidly defined network services such as load balancers and firewalls.


The APIC is designed for programmability and centralized management. By abstracting the network, the ACI fabric enables operators to dynamically provision resources in the network instead of in a static fashion. The result is that the time to deployment (time to market) can be reduced from months or weeks to minutes. Changes to the configuration of virtual or physical switches, adapters, policies, and other hardware and software components can be made in minutes with API calls.


The transformation from conventional practices to cloud computing methods increases the demand for flexible and scalable services from data centers. These changes call for a large pool of highly skilled personnel to enable this transformation. A key feature of the APIC is its REST-based web API. The APIC REST API accepts and returns HTTP or HTTPS messages that contain JavaScript Object Notation (JSON) or Extensible Markup Language (XML) documents. Today, many web developers use RESTful methods. Adopting web APIs across the network enables enterprises to easily open up and combine services with other internal or external providers. This process transforms the network from a complex mixture of static resources to a dynamic exchange of services on offer.
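As a hedged sketch of what such a JSON request body looks like, the following builds a login payload for the APIC REST API; the hostname apic.example.com and the admin credentials are placeholders, not values from this document:

```shell
# Hypothetical example of an APIC REST login payload; host and
# credentials are placeholders you would replace with real values.
payload='{"aaaUser":{"attributes":{"name":"admin","pwd":"password"}}}'
echo "$payload"
# To actually send it to an APIC (placeholder host, untested here):
# curl -sk -X POST "https://apic.example.com/api/aaaLogin.json" -d "$payload"
```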


ACI Fabric

The ACI fabric supports more than 64,000 dedicated tenant networks. A single fabric can support more than one million IPv4/IPv6 endpoints, more than 64,000 tenants, and more than 200,000 10G ports. The ACI fabric enables any service (physical or virtual) anywhere with no need for additional software or hardware gateways to connect between the physical and virtual services and normalizes encapsulations for Virtual Extensible Local Area Network (VXLAN) / VLAN / Network Virtualization using Generic Routing Encapsulation (NVGRE).


The ACI fabric decouples the endpoint identity and associated policy from the underlying forwarding graph. It provides a distributed Layer 3 gateway that ensures optimal Layer 3 and Layer 2 forwarding. The fabric supports standard bridging and routing semantics without standard location constraints (any IP address anywhere), and removes flooding requirements for the IP control plane: Address Resolution Protocol (ARP) / Gratuitous ARP (GARP). All traffic within the fabric is encapsulated within VXLAN.


Decoupled Identity and Location

The ACI fabric decouples the tenant endpoint address, its identifier, from the location of the endpoint that is defined by its locator or VXLAN tunnel endpoint (VTEP) address. The following figure shows decoupled identity and location.


Forwarding within the fabric is between VTEPs. The mapping of the internal tenant MAC or IP address to a location is performed by the VTEP using a distributed mapping database.
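Conceptually, the mapping database behaves like a lookup table from endpoint address to VTEP. The sketch below is purely illustrative (the addresses and leaf names are made up) and is not how the fabric is actually programmed:

```shell
#!/bin/bash
# Purely illustrative lookup: tenant endpoint IP -> VTEP (leaf) location.
# Addresses and VTEP names are hypothetical.
declare -A mapping_db=(
  ["10.0.1.10"]="vtep-leaf1"
  ["10.0.1.20"]="vtep-leaf3"
)
dst="10.0.1.20"
echo "Encapsulate toward ${mapping_db[$dst]}"
```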


Policy Identification and Enforcement

An application policy is decoupled from forwarding by using a distinct tagging attribute that is also carried in the VXLAN packet. Policy identification is carried in every packet in the ACI fabric, which enables consistent enforcement of the policy in a fully distributed manner. The following figure shows policy identification.


Fabric and access policies govern the operation of internal fabric and external access interfaces. The system automatically creates default fabric and access policies. Fabric administrators (who have access rights to the entire fabric) can modify the default policies or create new policies according to their requirements. Fabric and access policies can enable various functions or protocols. Selectors in the APIC enable fabric administrators to choose the nodes and interfaces to which they will apply policies.


Encapsulation Normalization

Traffic within the fabric is encapsulated as VXLAN. External VLAN/VXLAN/NVGRE tags are mapped at ingress to an internal VXLAN tag. The following figure shows encapsulation normalization.

Forwarding is not limited to or constrained by the encapsulation type or encapsulation overlay network. External identifiers are localized to the leaf or leaf port, which allows reuse or translation if required. A bridge domain forwarding policy can be defined to provide standard VLAN behavior where required.




Migration from Existing Infrastructure

Most existing networks are not built with a spine-and-leaf design, but may consist of various disparate devices typically configured in a three-tier architecture using a core-layer, aggregation-layer, and access-layer topology as shown in the following figure.


In this design, a single pair of switches is used at the aggregation layer and at the core layer, to provide redundancy for failure events. No more than two switches or routers are used at these tiers because of traditional Spanning Tree protocol constraints, which cause redundant links to be blocked, therefore negating the benefits of adding more devices.


In this model, the access switches are responsible for server connectivity and are then redundantly connected upstream to the aggregation layer. The aggregation layer provides connectivity between access switches and is typically also the point at which Layer 4 through 7 services are inserted. These services can consist of firewalls, load balancers, etc. Additionally, the aggregation layer is often the Layer 3 or routed boundary, or in some cases the core may provide this boundary.


This Layer 3 boundary design again must accommodate traditional Spanning Tree Protocol constraints and the need for Layer 2 adjacency for some server workloads. In addition, in this design the aggregation tier is the policy boundary for data center traffic. VLANs are typically created with one Layer 3 subnet within them. Broadcast traffic is allowed freely between devices within that subnet or VLAN. Policy (security, quality of service [QoS], services, etc.) is then applied only when traffic is sent to the default gateway to be forwarded between VLANs.


Customer Investment Protection

These topologies have been fairly standard for years. Therefore, customers have a large investment in the networking equipment that is in place. Other than in a completely new, greenfield environment, implementation of an all-new leaf-and-spine design will not be an option. Additionally, major changes to the existing physical or logical topology are typically not welcome because they can induce risk.


Because of these requirements, the design of Cisco ACI fabrics must include both compatibility with existing data center networks and the capability to easily integrate with those networks. The Cisco ACI fabric must be able to be inserted transparently into existing infrastructure while providing the same advantages of policy automation, linear scalability, and application mobility and visibility.


Cisco ACI is designed to provide integration with any existing network in any topology. Layers 2 and 3 can be extended into Cisco ACI, as well as Layer 3 data center overlay technologies such as Virtual Extensible LAN (VXLAN). Beyond these, the topology design and integration points must be carefully considered.


Because Cisco ACI focuses primarily on the design, automation, and enforcement of policy, the aggregation tier is the most logical insertion point for the Cisco ACI fabric. As stated earlier, the aggregation tier is already responsible for policy enforcement and typically acts as the Layer 3 boundary. Therefore, traffic is already being moved to that tier for that purpose. The following figure provides a detailed view of the existing traffic pattern that will need to be integrated.

Cisco ACI provides three methods of integrating with existing network infrastructure:

1. Cisco ACI Fabric as an Additional Data Center Pod

2. Cisco ACI Fabric as a Data Center Policy Engine

3. Cisco ACI Fabric Extended to Non-Directly Attached Virtual and Physical Leaf Switches


Method-1: Cisco ACI Fabric as an Additional Data Center Pod

This method utilizes a new pod build-out to insert the ACI fabric. Existing servers and services will not be modified or changed; ACI will be inserted as the aggregation tier for a new pod. This acts the same as attaching a new aggregation tier to an existing core for the purpose of a new pod. The following figure shows the traditional insertion of a new pod.

Traditional Method of Adding a New Data Center Pod

The traditional method to add a pod is to attach a pair of new Aggregation switches to the existing core. New access switches are then connected to this aggregation tier to support new server racks. This method allows for the addition of new servers with the additional stability of separating out the aggregation layer services between pods.


Using this same methodology, a new pod can be added to the existing network using an ACI fabric. Rather than attaching two aggregation switches and several access switches, a small ACI spine/leaf fabric can be added as shown in the following figure.

Adding a New Data Center Pod Using Cisco ACI

This methodology works in a very similar fashion to the traditional pod addition. The key difference is that with a spine/leaf topology everything connects to the leaf switches; thus the existing core is shown connected to the ACI leaf switches, and not to the spine.


With the physical topology in place, the logical topology will need to be built. Cisco ACI is designed to integrate seamlessly with existing network infrastructure using standard protocols. Connections to outside networks are supported using OSPF, BGP, VXLAN and VLANs. The connection from the ACI leaf switches to the core switches can be made using any of these, as shown in the following figure:

Using Cisco ACI as a new data center pod provides investment protection for existing infrastructure while allowing growth into the benefits of the ACI Fabric. This methodology requires no topology, connectivity, or policy changes for existing workloads while providing a platform for ACI for new applications and services.

Next Up: Understanding the Switch Fabric Architecture….

Courtesy of Cisco Live 2014


…and ACI and OpenStack.




…and a design example with Ubuntu KVM, ACI and….





Wednesday, 29 October 2014

Snort on Ubuntu 14.04 from Sourcecode with Barnyard, SnortReport, Acid

Snort is an open source network intrusion detection system, capable of performing real-time traffic analysis and packet logging on IP networks. It can perform protocol analysis and content searching/matching, and can be used to detect a variety of attacks and probes, such as buffer overflows, stealth port scans, CGI attacks, SMB probes, OS fingerprinting attempts, and much more. Snort has three primary uses: it can be used as a straight packet sniffer like tcpdump, as a packet logger, or as a full-blown network intrusion detection/prevention system.

Main features introduced in 2.9.6-2.9.7:

·         Additional support for Heartbleed detection within the SSL preprocessor, to improve performance.

·         A new protected_content rule option that is used to match against content that is hashed. It can be used to obscure the full context of the rule from the administrator.

·         Protocol Aware Flushing (PAF) improvements for SMTP, POP, and IMAP to more accurately process different portions of email messages and file attachments.

·         Added ability to test normalization behavior without modifying network traffic.  When configured using na_policy_mode:inline-test, statistics will be gathered on packet normalizations that would have occurred, allowing less disruptive testing of inline deployments.

·         Added improved XFF support to HttpInspect. It is now possible to specify custom HTTP headers to use in place of 'X-Forwarded-For'. In situations where traffic may contain multiple XFF-like headers, it is possible to specify which headers hold precedence.

·         The HTTP Inspection preprocessor now has the ability to decompress DEFLATE and LZMA compressed flash content and DEFLATE compressed PDF content from HTTP responses when configured with the new decompress_swf and decompress_pdf options. This enhancement can be used with existing rule options that already match against decompressed equivalents.

·         Feature-rich IPS mode, including improvements to Stream for inline deployments. Additionally, a common active response API is used for all packet responses, including those from Stream, Respond, or React. A new response module, respond3, supports the syntax of both resp & resp2, including strafing for passive deployments.

·         When Snort is deployed inline, a new preprocessor has been added to handle packet normalization, allowing Snort to interpret a packet the same way as the receiving host.

·         Use of a Data Acquisition API (DAQ) that supports many different packet access methods including libpcap, netfilterq, IPFW, and afpacket. For libpcap, version 1.0 or higher is now required. The DAQ library can be updated independently from Snort and is a separate module that Snort links. See README.daq for details on using Snort and the new DAQ.

·         Updates to HTTP Inspect to extract and log IP addresses from X-Forward-For and True-Client-IP header fields when Snort generates events on HTTP traffic.

·         A new rule option ‘byte_extract’ that allows extracted values to be used in subsequent rule options for isdataat, byte_test, byte_jump, and content distance/within/depth/offset.

·         Updates to SMTP preprocessor to support MIME attachment decoding across multiple packets.

·          Ability to “test” drop rules using Inline Test Mode. Snort will indicate a packet would have been dropped in the unified2 or console event log if policy mode was set to inline.

·          Two new rule options to support base64 decoding of certain pieces of data and inspection of the base64 data via subsequent rule options.

·          Updates to the Snort packet decoders for IPv6 for improvements to anomaly detection.
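To illustrate the byte_extract rule option mentioned above, a rule along the lines of the example in the Snort manual extracts an offset and a depth from the packet and reuses them in a later content match. The message text and SID below are arbitrary placeholders:

```
alert tcp any any -> any any (byte_extract:1,0,str_offset; byte_extract:1,1,str_depth; content:"bad stuff"; offset:str_offset; depth:str_depth; msg:"Bad stuff detected within the field"; sid:1000001;)
```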

In the latest version of Snort, a few things about compiling have changed slightly: libdnet and the Data Acquisition library (DAQ) need to be compiled separately. This post relates only to the compilation and installation of Snort 2.9.x.x from source code.

LAMP (Linux, Apache, MySQL and PHP) environment:

Installing Apache, PHP and MySQL: 


$sudo apt-get -y install apache2 libapache2-mod-php5 mysql-server mysql-common mysql-client php5-mysql libmysqlclient-dev php5-gd php-pear libphp-adodb php5-cli 


Get required packages:


$ sudo apt-get -y install libwww-perl libnet1 libnet1-dev libpcre3 libpcre3-dev autoconf libcrypt-ssleay-perl libtool libssl-dev build-essential automake gcc make flex bison




 Download and Install libdnet:

There are Ubuntu packages for libdnet, but I find this an easier way of installing it. Download the libdnet tarball and install it with these commands from your download directory:

$ sudo mkdir /usr/local/snort

$cd /usr/local/snort

$ sudo wget

$ sudo tar xzvf libdnet-1.12.tgz

$ cd libdnet-1.12/

$sudo ./configure

$sudo  make

$sudo make install

$sudo ln -s /usr/local/lib/libdnet.1.0.1 /usr/lib/libdnet.1



Downloading and Installing the Data Acquisition API (DAQ):

Snort 2.9.0 introduces the new Data Acquisition API. We’ll need to download and install it before we set up the core Snort package. Download that package to your Snort machine:

If you need to access the /usr/local/snort via the GUI for copying of files, then go to root and type in:


$ gksudo nautilus

This should give a view of the root folders.  Install the package using the following commands: 

$cd /usr/local/snort

$ sudo tar zxvf daq-2.0.4.tar.gz

$cd daq-2.0.4

$ sudo ./configure

$ sudo make

$sudo make install

 Download and Install libpcap:

 $cd /usr/local/snort

 $ sudo wget

$ sudo tar zxvf libpcap-1.3.0.tar.gz

$cd libpcap-1.3.0

$ sudo ./configure

$sudo make

$sudo make install

$echo "/usr/local/lib" >> /etc/

$ldconfig -v 

Download and Install Snort: 

While we could install the Snort packages from the Ubuntu 14.04 repositories, that doesn't guarantee the latest and greatest version of Snort being set up, so we compile and install from source code. Go to the Snort website and download the newest stable version.

The following steps will install Snort into /usr/local/snort but you can change this to a directory of your liking by modifying the paths below. 

Open a command prompt and issue the following commands from the directory where you downloaded the Snort tarball:

$ sudo tar zxf snort-

$cd snort-

$ sudo ./configure --prefix=/usr/local/snort --enable-sourcefire

$ sudo make

$ sudo make install

$ sudo mkdir /var/log/snort

$ sudo mkdir /var/snort

$ sudo groupadd snort

$ sudo useradd -g snort snort

$ sudo chown snort:snort /var/log/snort

Download the Latest Snort Rules: 

The next step is to download the latest Snort ruleset. You’ll need to log into the Sourcefire site in a browser in order to get the file. The latest rules are located here:

There are two sections on this page – one for VRT subscribers and one for registered users. The only difference is that the registered user rule files are 30 days older than those for subscribers.

Download this file to your IDS machine: snortrules-snapshot-2960.tar.gz. 

Open a command prompt in the directory where you downloaded the Snort ruleset file and issue the following commands: 

$ sudo tar zxf snortrules-snapshot-2960.tar.gz -C /usr/local/snort

$ sudo mkdir /usr/local/snort/lib/snort_dynamicrules

$ sudo cp /usr/local/snort/so_rules/precompiled/Ubuntu-12-4/x86-64/* /usr/local/snort/lib/snort_dynamicrules

$ sudo touch /usr/local/snort/rules/white_list.rules

$ sudo touch /usr/local/snort/rules/black_list.rules

$ldconfig -v

Now we need to edit the snort.conf configuration file:

 $ sudo vi /usr/local/snort/etc/snort.conf

 var WHITE_LIST_PATH /usr/local/snort/rules

var BLACK_LIST_PATH /usr/local/snort/rules 

dynamicpreprocessor directory /usr/local/snort/lib/snort_dynamicpreprocessor/

dynamicengine /usr/local/snort/lib/snort_dynamicengine/

dynamicdetection directory /usr/local/snort/lib/snort_dynamicrules 

output unified2: filename merged.log, limit 128, nostamp, mpls_event_types, vlan_event_types

output unified2: filename snort.u2, limit 128


 Download and Install Barnyard2: 

Barnyard2 improves the efficiency of Snort by reducing the load on the main detection engine. It reads Snort's unified logging output files and enters them into a database. If the database is unavailable, Barnyard2 will input all the data when the database comes back online, so no alerts will be lost.

$ sudo git clone barnyard 

$cd barnyard2

$ sudo autoreconf -fvi -I ./m4

$ sudo ./configure --with-mysql --with-mysql-libraries=/usr/lib/x86_64-linux-gnu

$ sudo make

$ sudo make install

$ sudo cp etc/barnyard2.conf /usr/local/snort/etc

$ sudo mkdir /var/log/barnyard2

$ sudo chmod 666 /var/log/barnyard2

$ sudo touch /var/log/snort/barnyard2.waldo

$ sudo chown snort.snort /var/log/snort/barnyard2.waldo 

We need to create the MySQL database and the database schema. This will need the MySQL password that was created earlier:

$echo "create database snort;" | mysql -u root -p

$ sudo mysql -u root -p -D snort < ./schemas/create_mysql 

Next, create an additional MySQL user for Snort to use, as it's not a good idea to run the daemon as root. Remember the password that you enter below. Also note the single quotes around the password in addition to the double quotes around the entire echo statement:

$echo "grant create, insert, select, delete, update on snort.* to snort@localhost identified by 'bhuvi';" | mysql -u root -p

Modify the Barnyard2 configuration file with the following command: 

$ sudo vi /usr/local/snort/etc/barnyard2.conf

config  reference_file: /usr/local/snort/etc/reference.config

config  classification_file: /usr/local/snort/etc/classification.config

config  gen_file: /usr/local/snort/etc/

config  sid_file: /usr/local/snort/etc/

config hostname: localhost

config interface: eth0

output database: log, mysql,

 Testing Snort:

 You can test to see if Snort will run by using this command: 

$ sudo /usr/local/snort/bin/snort -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth0

A message saying "Commencing packet processing." should be visible. You can cancel it by hitting Control-C. If it fails to initialize, please see the Snort forums to determine the problem. It will usually be something in the configuration file.

To set Snort to start automatically on your machine edit the rc.local file with the following command:

 $sudo vi /etc/rc.local

ifconfig eth0 up

/usr/local/snort/bin/snort -D -u snort -g snort -c /usr/local/snort/etc/snort.conf -i eth0

/usr/local/bin/barnyard2 -c /usr/local/snort/etc/barnyard2.conf -d /var/log/snort -f snort.u2 -w /var/log/snort/barnyard2.waldo -D


 Save the file and exit. Then either reboot or use the following command to start Snort:

 $sudo /etc/init.d/rc.local start


Download and Set up Snort Report: The next step is to download and configure Snort Report. It's available from the Snort Report website under the downloads section.


At the time of authoring this, the current version was 1.3.4. Download snortreport-1.3.4.tar.gz to a directory on your IDS machine. Open a command prompt in the directory to which you downloaded Snort Report and issue the following command:

$sudo tar zxvf snortreport-1.3.4.tar.gz -C /var/www/html 

Now we need to modify the Snort Report configuration file to reflect your MySQL login info and location of the jpgraph libraries. Change the file by editing srconf.php with this command: 

$sudo vi /var/www/html/snortreport-1.3.4/srconf.php

$pass = "bhuvi";


Install JPGraph:



$cd /var/www/html

$sudo wget

$sudo tar xvzf jpgraph-3.5.0b1.tar.gz

$sudo rm -rf jpgraph-3.5.0b1.tar.gz

 Installing ADODB: 

$cd /var/www/html

$sudo wget

$sudo tar xzf adodb518a.tgz

$sudo rm adodb518a.tgz

 Installing and configuring Acid: 

$cd /var/www/html

$sudo wget

$sudo tar xzf acid-0.9.6b23.tar.gz

$sudo rm acid-0.9.6b23.tar.gz


$sudo vi /var/www/html/acid/acid_conf.php 

$DBlib_path = "/var/www/html/adodb518a";

$alert_dbname = "snort";

$alert_host = "localhost";

$alert_port = "";

$alert_user = "snort";

$alert_password = "bhuvi";

$archive_dbname = "snort";

$archive_host = "localhost";

$archive_port = "";

$archive_user = "snort";

$archive_password = "bhuvi";

$ChartLib_path = "/var/www/html/jpgraph-3.5.0b1/src";


Start Apache, then go to http://yourhost/acid/acid_main.php. You will get a message that looks like this in your browser:


Please click the button that says "Create Acid AG".

Now browse to the ACID main page; it will show the Snort record details. We are done!


