Wednesday, 30 January 2013

Testing SAN, Virtualization and WAN Optimization on a Budget Lab

1      Introduction

The explosive growth of virtualization and its increasingly critical role in supporting a vast globalised customer base, spanning electronic commerce, health services, defence ministries, transportation and communications, have brought a sharp focus on the quality of solutions offered in such environments. Orchestrating major, enterprise-wide improvements to the computing environment, and trying to simulate every customer's unique SAN and virtualization needs, takes significant time and labour. Such project deployments take months of planning, evaluation, piloting and production transition, and the process requires dedicated expert evaluators and management supervision. A test lab can reduce the cost, personnel requirements and time needed to perform major solution evaluations, testing and technology implementations.

With the arrival of a new suite of products such as VMware vSphere and Riverbed Granite, which are envisaged as solution-oriented services for customers with major SAN back-up requirements, the need to build a test lab is greater than ever. In the lab it will be possible to conduct technology investigations and to evaluate solutions for new customers with unique SAN back-up and virtualization requirements. With this in mind, this document proposes a process for technology investigation and evaluation, and the possible eventual business collateral, for customers with networks using current and emerging technologies. The same process may also cover identifying new virtualization technologies, training on them, and evaluating them for incorporation into various customer infrastructures.

In this first draft of the Test & Validation Lab proposal, emphasis has been placed on virtualized environments to keep initial costs to a realistic minimum, and on Storage Area Networks (SANs), given the nature of upcoming technological solution offerings.

1.1      Objectives 

  • To establish a testing resource used for the evaluation and integration of upcoming new technologies and new releases of installed base technologies.
  • To establish an enterprise and SP architecture lab environment where these new technologies can be evaluated and business applications can be tested for production readiness.
  • Once established, the laboratory environment can be customized to simulate prospective customer networks. These customers may be given the opportunity to try out various technologies and solutions on offer, including SAN back-ups and virtualized services, in a Proof of Concept (POC) laboratory as a paid service.
  • A shared environment for new technology evaluation and integration
  • A shared environment for new applications testing
  • An ITIL compliant environment that closely relates to and simulates the actual customer infrastructure
  • An environment that fosters progress of the architecture and solution services offered through research and development, thereby keeping current with technology trends.
  • To develop marketable, value-added customer services.
The lab should be chartered to evaluate emerging technologies in support of customers' business goals. The lab will contain the hardware, software, communications and personnel required to evaluate solutions, products and services deemed beneficial for deployment across the organization's global customer base.
This lab will develop standards for solution-oriented products. Any new products, upgrades, fixes or new releases will be evaluated against all the other products on the engineering group's standard products list to ensure interoperability and backward compatibility. Evaluations can be specific to one application, such as an SAP suite, or applicable to the entire organization, such as a SAN or a digital certificate service.

The lab can also be used as an internal resource center, a place where interested users and customers can come to try out new hardware- and software-based solutions before recommending their installation or prior to purchase.

Several steps are required to get the lab established and fully functional: 

·         Find funding for the construction of the lab
·         Choose the lab personnel
·         Select and procure the hardware, software and communications resources
·         Build processes for conducting evaluations, purchasing and promotion
·         Manage the lab effectively on a daily basis

1.2      Addressed In This Document 

·         Identify the sub-component technologies required to establish a functioning Lab
·         Develop the proposal to justify establishing a Lab
·         Conduct a site selection
·         Conduct a lab services requirements analysis
·         Propose a management structure for the Lab
·         Develop a mission statement and policies for the Lab
·         Acquire and integrate available technology resources to establish the initial Lab
·         Support new and emerging technologies and new product release evaluations and integration testing
·         Begin new applications testing for compliance with various standards (for example ITIL) and production acceptance testing. 
Customers' storage networks require constant maintenance and attention. A lull in the storage environment, when no repair work is taking place and the SAN is not recovering from a disaster, is not necessarily a time to relax. These calm moments should be spent improving the concepts behind storage-network solutions and services, and ironing out any kinks customers may have, in preparation for the next disaster, repair or solution offering.
Following are a few rules of thumb for keeping abreast of the ever-changing SAN and virtualization landscape, centered on lab-based testing:
Understanding What is in the Data Storage Environment:
Make sure that an up-to-date inventory of the networking environment exists at all times. It's crucial to document every host bus adapter (HBA), cable and switch; note how they're interconnected, the speeds they're set at, and the software or drivers they're running. Although these tasks seem obvious and mundane, they are often overlooked and knocked off priority lists during a typical work day.
More importantly, recording this information often highlights areas that need improvement. For example, there have been cases where users upgraded to 4 Gbps Fibre Channel (FC), but their interswitch links (ISLs) were still set at 1 Gbps. Had they taken the time to document these changes, they could have doubled their performance much earlier.
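As an illustration, an inventory kept in machine-readable form can flag exactly this class of problem automatically. The sketch below is a minimal example; the record layout and field names (`speed_gbps`, `isls`, `ports`) are hypothetical placeholders, not taken from any vendor's tooling.

```python
# Sketch: a minimal SAN inventory with a check that flags ISLs configured
# slower than the ports they join. Field names are illustrative placeholders.

def find_slow_isls(inventory):
    """Return the names of ISLs set below the speed of the ports they connect."""
    mismatches = []
    for isl in inventory["isls"]:
        port_speed = min(inventory["ports"][p]["speed_gbps"] for p in isl["ports"])
        if isl["speed_gbps"] < port_speed:
            mismatches.append(isl["name"])
    return mismatches

inventory = {
    "ports": {
        "sw1/0": {"speed_gbps": 4},   # upgraded to 4 Gbps FC
        "sw2/0": {"speed_gbps": 4},
    },
    "isls": [
        # stale setting left over from before the upgrade
        {"name": "sw1-sw2", "ports": ["sw1/0", "sw2/0"], "speed_gbps": 1},
    ],
}

print(find_slow_isls(inventory))  # -> ['sw1-sw2']
```

Run against a documented inventory, a check like this would have surfaced the 1 Gbps ISL in the example above the day the 4 Gbps upgrade was made.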
Knowing What's going on in the Storage Network Infrastructure:
Once a good picture of the components in the storage network infrastructure has been developed, the next step is to fully understand what those devices are doing at a particular moment in time. Many switch and HBA vendors build some of these capabilities into their products. But instead of going to each device to see its view of traffic conditions, it may be better to find a tool that can provide consolidated real-time feedback on how data is traversing your network. There are software solutions and physical layer access tools that can report on the infrastructure traffic. The tools that can monitor network devices specifically are important because there are situations where operating systems or applications report inaccurate information when compared to what the device is reporting.
These tools can be used for trend analysis and, in some cases, they can simulate an upcoming occurrence of a data storage infrastructure problem. For example, if an ISL is seeing a steady increase in traffic, the ability to trend that traffic growth will help identify how soon an application rebalance or an increase in ISL bandwidth will be required. Other tools will report on CRC or packet errors to ports, which can indicate an upcoming SFP failure.
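The trending step can be as simple as a least-squares fit over historical utilisation samples. The sketch below assumes weekly samples and illustrative numbers; real data would come from the monitoring tools discussed above.

```python
# Sketch: project when a steadily growing ISL will hit capacity by fitting
# a simple least-squares trend over weekly utilisation samples.

def weeks_until_full(samples_gbps, capacity_gbps):
    """Fit a linear trend; return weeks (beyond the last sample) until capacity."""
    n = len(samples_gbps)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_gbps) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_gbps)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # no growth trend, nothing to project
    intercept = mean_y - slope * mean_x
    weeks_from_start = (capacity_gbps - intercept) / slope
    return max(0.0, weeks_from_start - (n - 1))

# Utilisation growing ~0.1 Gbps/week on a 4 Gbps ISL:
history = [2.0, 2.1, 2.2, 2.3, 2.4]
print(round(weeks_until_full(history, 4.0)))  # -> 16
```

A projection like this turns "the ISL is getting busier" into "plan the rebalance or bandwidth increase within roughly four months".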
Knowing What is Desired:
With the inventory complete and good visibility into the SAN established, the next step is to figure out what network changes will provide the most benefit to the organization. It is possible that SAN features will be discovered which need to be enabled, or perhaps new applications will need to be added or an accelerated rollout of current initiatives that need to be planned. Knowing how activities such as these will impact the rest of the environment and what role the storage infrastructure has to play in those tasks is critical. Generally, the goals come down to increasing reliability or performance, but they may also be to reduce costs.
Limiting the Impact:
When the stage arrives where it will be feasible to make changes to the environment, the next step is to consider limiting the sphere of impact as much as possible by subdividing the SAN into virtual SANs (VSANs). 
With the SAN subdivided, changes to the environment that yield unexpected results in a worst-case scenario, such as preventing a server from accessing storage or even causing an outage, will have limited repercussions across the infrastructure. Limiting the sphere of impact is by itself an important fine-tuning step that will help create an environment that is more resilient to change, and can help contain problems. For example, an application may suddenly need an excessive amount of storage resources; subdividing the SAN will help contain it and keep the rest of the infrastructure from being starved. This aspect of fine-tuning shouldn't require any new purchases, as it is purely a setup and configuration exercise.

2.1      Test to Learn, Learn to Test

One key to fine-tuning and guaranteeing a worry-free SAN solution to customers is to have a permanent testing lab that can be used to try out proposed changes to the environment or to simulate failed conditions.  
Lab testing allows exploration of alternatives and development of remedies without impacting the production network. Statistically, most SAN emergencies result from implementing a new feature in the storage array or on the SAN.
If a customer lacks the resources to create a lab environment, this lab may be an alternative for them, offering facilities that can be used to recreate problems, simulate SAN back-up behaviour or test the implementation of new features.

This section describes a high-level design of the minimum architecture for the first phase of the Test laboratory.

3.1      Facility Requirements

The facility must be accessible to a diverse set of users if it is to accelerate the development of technical solutions. Most research organizations, vendors and small ISPs are simply unable to conduct truly valid tests within their own environments because of the prohibitive cost of deploying a test network that is representative of relevant technical conditions. Background and meaningful traffic can be generated using open-source tools to start with, which should adequately simulate a real networking environment. Making the facility available to authorised users will enable a broad range of researchers and testers to make meaningful contributions toward developing future technical solutions.

3.1.1     Users

To be permitted to use the facility, however, users must be able to demonstrate that their proposed use of the facility will advance the state-of-the-practice, is technically valid, and is consistent with the facility’s charter. It should not be open to simply anyone who wishes to make use of a complex network. 
A robust lab study on which to base purchasing decisions or recommendations always involves a certain degree of challenge, primarily because testing WAN optimization and related technologies in a lab doesn't typically expose the complexities of production environments. Furthermore, testing WAN optimization isn't as straightforward or as well understood as other, more standardized technologies. Many of the popular automated test tools don't support the application-layer (OSI Layers 5-7) fluency needed to properly evaluate application acceleration for protocols like MAPI, CIFS and NFS.
Next, the boilerplate content in these tools does not represent production datasets at the byte level, so the 'out-of-the-box' deduplication test results aren't representative. Test tools often measure per-packet or inter-packet metrics, while WAN optimizers focus on per-transaction optimization by proxying TCP connections; this means there are three different sets of packets per transaction (client-side, inner channel and server-side), so the test tools' timing reports are not always applicable or representative of the actual user experience. Also, the application security infrastructure and requirements in a lab may not mirror the production environment (e.g., encrypted Outlook/Exchange, CIFS SMB signing, Kerberos/NTLMv2 authentication, SSL requiring smart cards). These are just a few of the many hurdles to be aware of in test environments.

3.2      “Breakable” Network 

One of the expected results of a test-lab based experiment is that some pieces of the network may be disrupted or disabled under the testing load. Network outages must be expected.

3.3      Rapid Reconfiguration

When the test network is to be used by a new testing user, it is very likely that the physical and logical configuration of the network will need to be reset to a known neutral state and then changed to accommodate the needs of the new tests to be performed. It is also likely that each client will need to perform a series of experiments and each of these experiments will be based on a different network topology. As such, this setup work could easily consume a lot of time if the changes to be made are extensive. In order for the facility to be viable, a quick reconfiguration methodology must be deployed along with the facility itself. Without a fast turnover methodology in place, the number of evaluations and experiments that could be conducted within the facility over the course of the year would be greatly diminished.
The following types of configuration information will need to be identified as one experiment or evaluation team replaces another:
·         Physical topology reconfiguration;
·         Physical device model requirements;
·         Node operating system type, version, and contained software;
·         Network equipment software revisions (for example, RiOS and IOS versions);
·         Experimentation data removal and restoring devices to a known operational state; and
·         Management infrastructure type and configuration.
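A fast turnover process benefits from being driven by data rather than memory. As a minimal sketch, the reset between engagements can be generated by diffing each device's current state against its recorded baseline; the fields and values below (RiOS version, topology, management type) are hypothetical placeholders, not tied to any management product.

```python
# Sketch: diff a device's recorded baseline against its current state to
# build the reset worklist used between client engagements.

BASELINE = {"os": "RiOS 8.0", "topology": "star", "mgmt": "snmp-v2c"}

def reset_worklist(current):
    """List the settings that must be reverted to the known neutral state."""
    return [f"reset {key}: {current[key]} -> {BASELINE[key]}"
            for key in BASELINE if current.get(key) != BASELINE[key]]

state_after_test = {"os": "RiOS 8.5", "topology": "star", "mgmt": "snmp-v3"}
for task in reset_worklist(state_after_test):
    print(task)
# -> reset os: RiOS 8.5 -> RiOS 8.0
#    reset mgmt: snmp-v3 -> snmp-v2c
```

Because the diff only lists what actually changed, a clean device costs nothing to verify, which is what keeps turnover time short.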

3.4      Realistic Network Traffic

The most difficult part of designing and conducting any tests via a simulated evaluation is ensuring that the evaluation environment closely models the real world. To succeed in this, users must carefully construct a simulation environment with appropriate network traffic traversing the test network. This traffic must include not only meaningful traffic for study, but typical day-to-day (legitimate use) traffic as well.
The type of traffic needed to emulate a realistic environment will be specific to each client of the facility. For example, clients that need to emulate a web hosting facility environment will need different types of network traffic from those clients who need to emulate a university network. Traffic to and from web hosting facilities is almost entirely composed of World Wide Web traffic. Traffic to and from university environments, on the other hand, is frequently composed of a wide variety of traffic types that include not only web traffic, but also file sharing, music streaming, login connections, email transmission, and others. Ideally, the facility should be capable of representing network traffic needed by any client.

3.4.1     Statistical Traffic Generation

Many companies today sell tools capable of simulating traffic in a network environment by first statistically analyzing live data and building traffic profiles. They then attempt to generate statistically similar, but artificial, traffic that models the same profiles. These traffic generation methods should reproduce not just the long-term average traffic characteristics, but also the highly bursty and self-similar nature of Internet traffic. Products such as SmartBits offer statistical traffic generation, though they are often too expensive for smaller research groups. Furthermore, the traffic these packages generate is based on random data and does not truly represent typical interactive session traffic. Many vendors and researchers nevertheless use these tools for preliminary analysis and testing, since they are easy to find, simple to set up and use, and often the only tools available.
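The bursty, heavy-tailed character mentioned above can be approximated even without commercial tools. The sketch below uses Pareto-distributed on/off periods, a standard way to produce the heavy-tailed activity that simple Poisson generators miss; all parameters are illustrative.

```python
# Sketch: generate bursty traffic timings by alternating "on" (sending) and
# "off" (idle) periods drawn from a heavy-tailed Pareto distribution.

import random

def onoff_schedule(total_s, alpha=1.5, scale_s=0.1, seed=42):
    """Return (start_time, state) pairs alternating between on and off."""
    rng = random.Random(seed)
    t, sending, schedule = 0.0, True, []
    while t < total_s:
        period = scale_s * rng.paretovariate(alpha)   # heavy-tailed duration
        schedule.append((round(t, 3), "on" if sending else "off"))
        t += period
        sending = not sending
    return schedule

sched = onoff_schedule(10.0)
print(sched[:3])  # first few state transitions
```

Aggregating many such independent on/off sources is a well-known recipe for approximating the self-similar traffic observed on real links.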

3.4.2     Traffic Generation Using Network Applications and Library of Tools

The easiest and cheapest method of producing meaningful traffic is to simply run real traffic-generating tools on a variety of end hosts and have them access a particular target within the test network. Researchers also need to produce background traffic by using networking software that can be automated (i.e., scripted) to simulate real users generating real network traffic. This approach works well in practice and is widely used by research organizations with small budgets. Unlike the statistical traffic generators discussed above, such tools and scripted network clients produce traffic with normal packet contents rather than random data. However, the generated traffic does not model the seemingly random variations in traffic patterns produced by real human users sitting at consoles. It is still a useful technique, so the facility must have a library of network applications and tools on hand. These tools may be brought in from the wild, but should be inspected to ensure that no backdoors or other unintended features are present.
With today’s fast processor systems, a small number of hosts can easily generate a lot of traffic using this technique. Many research groups that do not need to simulate complex networks rely on this type of traffic generation technique.
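A scripted client of this kind can be very small. In the sketch below, `build_plan()` produces a randomised request schedule with exponential "think time" between requests, and `run_plan()` would replay it with the standard library's `urllib`; the target hostnames are placeholders, and only the planning step runs here since replay needs live lab hosts.

```python
# Sketch: a scripted background-traffic client. Targets are placeholder
# lab-internal hostnames; only build_plan() is exercised here.

import random
import urllib.request

TARGETS = ["http://web-sim.lab.local/", "http://files.lab.local/iso"]

def build_plan(n_requests, mean_gap_s=2.0, seed=1):
    """Randomised (delay, url) pairs mimicking a user's think time between clicks."""
    rng = random.Random(seed)
    return [(rng.expovariate(1.0 / mean_gap_s), rng.choice(TARGETS))
            for _ in range(n_requests)]

def run_plan(plan):
    """Replay the plan against the lab network (requires the targets to exist)."""
    import time
    for delay, url in plan:
        time.sleep(delay)
        urllib.request.urlopen(url).read()

plan = build_plan(5)
print(len(plan), "requests planned")  # -> 5 requests planned
```

Dozens of such scripts, seeded differently and pointed at different services, give packet contents that are real rather than random, which is exactly the advantage this section describes.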

3.4.3     Simulating Network Delays and Congestion

When network traffic traverses a large disparate network, it is subjected to various delays in transit. These delays are a result of limited processing power of devices it must pass through as well as issues with crossing large physical distances. In a contained network facility existing solely at a single site, distance-related delays are mostly nonexistent. Some research environments, like Emulab, have solved this problem by carefully constructing delay devices that merely hold each packet for a period of time before letting it pass on. These delay devices should work well for small bandwidth connections, but simulating a delay at high speeds like an OC-192 link (near 4.8 Gbps) would be more difficult. The facility must support network-delay simulation for at least some portion of the network, even if simulating them at high speeds is not technically possible at first.
Network congestion also affects performance in a network and must be simulated in order to closely model a real world environment. It might be necessary, at times, to simply shrink the effective size of a given network link by pushing a fixed amount of traffic through it to decrease the link’s available bandwidth. Emulab has addressed this problem and has developed techniques for applying a rate limit to a given connection by carefully adding in traffic generators and traffic sinks, which are designed to send a fixed stream of data across one or more network nodes. This type of technology should be present one way or another in any facility intending to simulate real world networks.
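As a rough sketch of what a delay device must reproduce, the two dominant components, serialization time (packet size over link rate) and propagation time (distance over signal speed), can be computed directly; the link and distance figures below are illustrative.

```python
# Sketch: the two delay components a delay device must reproduce.

def one_way_delay_ms(packet_bytes, link_bps, distance_km, v_km_per_ms=200.0):
    """Serialization + propagation delay for one packet, in milliseconds."""
    serialization_ms = packet_bytes * 8 / link_bps * 1000
    propagation_ms = distance_km / v_km_per_ms   # ~2/3 the speed of light in fibre
    return serialization_ms + propagation_ms

# A 1500-byte packet on a 10 Mbps WAN link spanning 1000 km:
print(round(one_way_delay_ms(1500, 10_000_000, 1000), 2))  # -> 6.2
```

Note why high-speed emulation is harder: at OC-192 rates the serialization term for a 1500-byte packet shrinks to roughly a microsecond, so the delay device itself must add its hold time with sub-microsecond precision to stay faithful.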

3.5      Network Management, Monitoring, and Analysis

The facility should be able to collect performance data via SNMP, cflowd or NetFlow, the common network management mechanisms in use today. Network management utilities that can analyze and generate reports from these data sources, such as Cascade and OPNET tools, should be available for use. Network topology mapping tools, some also available from OPNET, can then be used to compare a deployed network against a desired network to ensure that the facility has been configured properly for the experiment.
These network monitoring and traffic analysis tools may be used to measure network conditions over the course of an experiment.
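The core of such analysis is consolidating exported flow records into summaries. The sketch below reduces NetFlow/cflowd-style records, simplified here to plain dicts, into a per-source "top talkers" report; the record fields are illustrative, not a real export format.

```python
# Sketch: consolidate flow records (simplified to dicts) into a
# per-source-address "top talkers" report.

from collections import Counter

def top_talkers(flows, n=2):
    """Sum bytes per source address and return the n heaviest senders."""
    totals = Counter()
    for flow in flows:
        totals[flow["src"]] += flow["bytes"]
    return totals.most_common(n)

flows = [
    {"src": "10.0.0.5", "dst": "10.0.1.9", "bytes": 9_000_000},
    {"src": "10.0.0.7", "dst": "10.0.1.9", "bytes": 1_200_000},
    {"src": "10.0.0.5", "dst": "10.0.2.3", "bytes": 4_000_000},
]
print(top_talkers(flows))  # -> [('10.0.0.5', 13000000), ('10.0.0.7', 1200000)]
```

The same aggregation taken at intervals over the course of an experiment yields the trend data used to measure changing network conditions.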

3.6      Administrative Staffing

Staffing the lab should be easier than filling most technical roles. Working with the management team, staff will be required to stay familiar with the latest products, conduct extensive market research, and keep up with the trade by attending vendor presentations and reading trade journals.
To start, at least one or two part-time staff will be needed, a workload that can be shared by local Professional Services consultants. As the workload increases, and if the business environment is encouraging, additional full-time or part-time people will be needed for the larger evaluations.
At least one part-time lab technician will be needed to set up the tests, keep installed releases current (latest OS releases and patches, service packs, selected messaging applications and web browsers) and keep the lab itself neat and tidy.

3.7      Infrastructure Needed & Site Requirements 

The lab should be located in a facility with a 10-foot-high ceiling; floor space should be sufficient to open the lab and add equipment for two years without becoming cramped or requiring a move.
The high ceiling accommodates overhead cable racks that keep the floor area clean. Existing raised-floor area may be used.
§  Space for equipment racks
§  7' tall equipment cabinets to hold rack-mounted machines
§  Three-tiered open metalwork racks
§  Uninterruptible Power Supplies (UPS)
§  Work areas
§  Test areas
§  Storage
§  Network drops
§  Modem lines
§  Sufficient power plugs
§  Connections in close proximity to the machines, enabling simple set-up and tear-down of large tests
§  A good fire alarm and suppression system, such as halon gas
A space-saving idea is to install KVM switches that use one monitor, mouse and keyboard to control several machines.


3.7.1     Security

The lab will need both physical and logical security:
Physical security involves controlling access to the lab room. Typical measures include locks, fingerprint locks and access-badge readers. Access will be limited to lab personnel, accompanied visitors and testing groups.
Logical security protects access to the computer equipment and software programs by means of administrator passwords or strong authentication such as public/private key access control, digital signatures and smart cards.

3.8      Scheduling

Scheduling involves determining who has priority to use the services, for how long and during what hours. Determining who gets priority is no easy task. There will always be more activities than resources to do them.

3.8.1     Procedures for Scheduling

Requests to use the lab must be funneled through the lab manager or representative of the lab's administrator. This procedure will ensure that all requests are treated as equitably as possible and that the lab's resources are not over-taxed. It's important to have agreement on the request procedures from those who will use the lab most often. They need to feel that the lab is accessible, ready, willing and able to meet their testing needs.
The schedule for the lab will be posted on the lab's Intranet Web site, and perhaps also on the front door of the lab. The schedule will reflect what software will be loaded on the machines, what hardware is planned for testing, what beta code is in, what major evaluations are in progress and their status.
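Whatever process the lab manager adopts, the underlying check is the same: a new request is accepted only if its window does not overlap an existing booking for the same resource. A minimal sketch, with illustrative data shapes and resource names:

```python
# Sketch: a first-come booking check for the lab calendar.

def conflicts(bookings, resource, start, end):
    """Return existing bookings of `resource` that overlap [start, end)."""
    return [b for b in bookings
            if b["resource"] == resource and b["start"] < end and start < b["end"]]

bookings = [{"resource": "rack-1", "team": "SAN eval", "start": 9, "end": 13}]

print(conflicts(bookings, "rack-1", 12, 15))  # overlaps the SAN eval slot
print(conflicts(bookings, "rack-1", 13, 17))  # -> []  (back-to-back is fine)
```

Publishing the resulting calendar on the lab's intranet page, as proposed above, is then just a rendering of this booking list.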

3.9      Network Topology

The topology of a newly constructed facility must satisfy certain criteria to meet the needs of its prospective users. The facility should be able to emulate some of the current networking topology: a scaled-down but functionally accurate representation of the networks generally found in enterprises. At the same time, the deployed devices should offer the flexibility to be changed and reconfigured to reflect the diverse network topologies of a diverse client base.
It should also be able to emulate various types of smaller attached networks at the edges of this core. The facility should be able to represent various security exposures that exist in networks today.
The networking and hosting hardware must be representative of up-to-date equipment found in most networking environments.
Critical to the success of the test lab is the versatility of its network layer. The lab needs to be a microcosm of the organization at large in terms of LAN, WAN and SAN connectivity. Network operating systems, network protocols, hubs, routers and switches should model those deployed within the enterprise.
A fast Internet connection is needed so that staff can research new products, services and technology, and test Internet-based applications under simulated loads.
Modems and wireless connections are also needed to mimic remote-access configurations for mobile workers.
The lab segment should be firewalled so that the test network and the production network do not interfere with each other.
The proposed initial high-level topology is shown in the following figure:

3.9.1     Network Backbone

To properly represent today's state-of-the-art enterprise and Service Provider networks, the facility's backbone network should be able to run MPLS, Border Gateway Protocol (BGP), OSPF and similar protocols.
The facility’s backbone network must have routers that are representative of those used by the Tier-1 service providers and enterprises. Cisco Systems and Juniper Networks manufacture the majority of the core backbone routers that are currently used by most of the end customer base.
To offer versatility in Layer 3 routing domains, various Layer 2 protocols, QoS and so on, the network backbone must be built as robustly as possible.
1.     To accomplish the above tasks, the following equipment can be considered at this preliminary stage:
  • Interfaces supporting 10 Gigabit Ethernet are recommended if permitted by budget; if not, 1 GigE should suffice for a start.
  • Two to three physical servers:
    • For example, a Dell PowerEdge server suite (tower or blade) as cost permits; 24 GB RAM, consistent CPUs: approx. £1,500.00
  • Gigabit network backbone: routers, switches or RSM modules (Cisco 3750 will suffice), available from eBay; the eBay price for approx. 10 MPLS-capable router and switch modules is £3,000.00
2.     Software considered may include but are not restricted to the following:
·         VMware ESXi Latest version
·         VMware Virtual Center Latest version
·         VMware Site Recovery Manager Latest version
·         The EMC Celerra Virtual Storage Appliance (VSA)
·         Microsoft Windows Server 2003/2008 R2 Enterprise Edition for use with the VMware Virtual Center Server
·         Microsoft Windows 7 for use as a utility device as required; it makes sense to use several of these for SRM
·         Windows Server 2008 R2 Standard or Enterprise Edition for the various servers that may be tested against, e.g., Active Directory, DNS, SQL, Exchange, etc.
·         Other Operating Systems such as Fedora Linux, CentOS or Ubuntu
·         EMC Celerra Replicator Adapter for VMware Site Recovery Manager
·         Software-based FC adapters in VMs
·         SAN simulation: iSCSI StarWind free SAN simulator, FreeNAS, Openfiler, EMC Celerra, ZFS etc.
·         Virtualized data center computing simulation: e.g., Cisco Unified Computing System Platform Emulator Lite (UCSPE) and Cisco UCS Manager Emulator
·         Virtual Whitewater for back-up

3.9.2     General Services Infrastructure

  • Several components in this lab will depend on Active Directory, Domain Name Server (DNS), Windows Internet Name Service (WINS) and DHCP being enabled. These do not have to be large, elaborate implementations; in fact, all of these services can be set up in a single VM on one of the ESX hosts with simple domain names.
  • A DHCP scope to hand out TCP/IP addresses, a default gateway, and DNS and WINS information can also be created. 
  • The client devices can register host information automatically with DNS which is a checkbox in the TCP/IP properties of the Ethernet adapter for the defined virtual machines. 
  • Devices with static IP addresses can be added to the DNS tables as “A” (host) record entries; the PTR (reverse lookup) check box must be ticked.
  • A note about disk space: it may be prudent to plan on allocating disk capacity for the primary and DR-site Celerra VSAs as follows:
    • Primary site: as required/desired to support the environment. The default, absolute minimum is 40 GB; space should be added as required.
    • DR site: a file system 2.5-3 times larger than the replication target LUN should be allocated. Without this space, issues may arise when creating the snapshots required for Site Recovery Manager.
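The DR-site sizing rule above reduces to simple arithmetic, which is worth keeping in a capacity-planning sheet. The sketch below uses the 2.5-3x factor and the 40 GB VSA minimum stated in this document; the function name is an illustrative placeholder.

```python
# Sketch: DR-site capacity rule -- reserve a multiple of the replication
# target LUN for the snapshot file system, never less than the VSA minimum.

def dr_filesystem_gb(target_lun_gb, factor=3.0, vsa_min_gb=40):
    """Capacity to allocate at the DR site for SRM snapshots."""
    return max(target_lun_gb * factor, vsa_min_gb)

print(dr_filesystem_gb(100))  # -> 300.0  (100 GB LUN needs a 300 GB file system)
print(dr_filesystem_gb(10))   # -> 40    (floor: the 40 GB VSA minimum)
```

Using the conservative 3x factor by default means snapshot growth during a prolonged failover test will not exhaust the DR file system.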

3.10      Marketing, Promotion and Support of the Lab

3.10.1     Lab Marketing & Promotion:

The Lab should be given an eye-catching name or acronym.
Lab activities should be communicated to the entire internal corporate community. This internal marketing of lessons learned, configuration advice and knowledge of new products, services and upgrades represents a value-added service and can reap the benefits of continued funding and resource allocation.
Status reports for the lab can be pushed to interested parties from a list server placed on an intranet or TWiki.
When the lab offers computing, consulting and evaluation services to others within the company, it performs a valuable service. Opportunities should constantly be sought to help groups within the organization with their problems, keeping the lab current with real-world problems.
The lab’s evaluation plans and accomplishments should be detailed on the lab's Intranet home page.  An example would be a chart showing the lab's usage (number of products tested, number of users testing, resources utilization, etc.) on a monthly basis; this may help to manage user expectations and convey to management the extent of the lab's use within the enterprise. 

3.10.2     Lab Support:

It is envisaged that the lab staff should be able to support the networking gear (Cisco routers and switches; Riverbed Steelheads, Interceptors, the Granite suite and Cascade; OPNET; the VMware suite of products; etc.), server components and virtualized environments by themselves during the lab's initial period. Depending on how far and wide the lab's activities eventually extend, it may be necessary to hire additional personnel in future.

3.11      Procurement of Infrastructure Components

As a start, the following could be considered to obtain the physical devices while keeping the initial cost to a minimum:
  • Networking: most of the networking gear (Cisco routers and switches) can be sourced cost-effectively via eBay. Support and IOS upgrades can be handled by local CCIE consultant(s) for a start.
  • Physical SANs: it may be possible to source RMA-ed devices (Steelheads and Whitewater appliances, for example) from refurbishment outfits and convert them into physical SANs. Support can be handled by local engineers/consultant(s) for a start.
  • IP addresses for the internal test network will be required; these can be applied for from local IT support for Projects.
  • Rack space will be necessary; this has to be applied for from the premises administration authority.
  • Software: much of the software used will be either open source or available for free download. However, licenses may need to be purchased for some products, such as Windows servers. It may be possible to obtain some of these licenses internally from corporate IT for Projects, which may already hold bulk licenses for Microsoft and VMware products through partnerships.
At the time of writing this first draft, a total approximate cost of GBP 10,000.00 to 13,000.00 is estimated as the maximum needed to start up the proposed solution laboratory in EMEA.
