Traditional VMware Networking and the DVS
Cisco's Nexus 1000v is a Distributed Virtual Switch (DVS) software component for ESX and ESXi 4.0 and above.
The DVS model presents the abstraction of a single virtual switch spanning multiple ESX servers; a DVS supports up to 64 ESX hosts.
Before the DVS concept, ESX networking consisted of a separately configured virtual switch (vSwitch) in each ESX server. That older model presents a severe scalability challenge across a large number of ESX servers, because the individual vSwitches have no coordinated management. Even performing a vMotion between two ESX servers using traditional vSwitches can be a bit of a headache.
The conceptual picture of the DVS abstraction is shown in the diagram below. Note that the DVS concept implies that the DVS is managed from some central location; VMware provides its own DVS, where that point of management is the vCenter server.
The Nexus 1000v DVS
The Nexus 1000v provides a central supervisor (the VSM, or Virtual Supervisor Module) as the central point of control for a single switch spanning up to 64 ESX servers. This supervisor is a virtual machine that runs NX-OS, and its functionality is similar to that of the Nexus 5000 series.
The part of the Nexus 1000v software that is actually installed in ESX is called the VEM (Virtual Ethernet Module), which acts as a "virtual line card" for the supervisor. Like a line card, once the VEM is installed there is no administration at the VEM level, and the entire Nexus 1000v acts as a single unified switch.
In the Nexus 1000v model, all network configuration is done on the VSM. The VMware vCenter server reads the port and port-profile configuration published by the VSM, and vCenter is where virtual machines are "virtually wired" to the Nexus 1000v.
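As an illustration, a VM-facing port profile defined on the VSM might look like the following sketch (the profile name and VLAN number are placeholders, not taken from this lab). Once enabled, the profile shows up in vCenter as a port group that VM network adapters can be attached to:

port-profile type vethernet VMData
  vmware port-group              ! publish this profile to vCenter as a port group
  switchport mode access
  switchport access vlan 100     ! example VM data VLAN
  no shutdown
  state enabled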
There is only one active VSM at a time. Optionally, the VSM can be deployed in HA mode, whereby a secondary VSM takes over for the primary (including taking over the IP address with which you access it) if the primary fails. The following diagram depicts this:
VSM-VEM Communication
The VSM communicates with each VEM in one of two fashions:
· Layer 2 communication: the VSM and VEM communicate via two special VLANs (the actual VLAN numbers are assigned at Nexus 1000v installation time):
1. Control VLAN – all Nexus 1000v-specific configuration and status pass between the VSM and VEM on this VLAN; this is a low-level protocol that does not use IP.
2. Packet VLAN – certain Cisco switching protocol traffic, such as CDP (Cisco Discovery Protocol), passes on this VLAN; this also does not use IP.
· Layer 3 communication: the VSM and VEM communicate over IP (there can be routers between the VSM and VEM). There are no control and packet VLANs in this scenario.
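As a rough sketch of how this choice is expressed on the VSM (the domain ID and VLAN numbers are placeholders):

! Layer 2 control: control and packet VLANs must reach every VEM
svs-domain
  domain id 10
  control vlan 260
  packet vlan 261
  svs mode L2

! Layer 3 control: no control/packet VLANs; VEMs are reached over IP
svs-domain
  domain id 10
  svs mode L3 interface mgmt0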
VSM virtual network adapters
The VSM virtual machine itself is always defined with three virtual network interfaces, used and connected in the following order:
- Control interface – connected to the control VLAN (unused if doing L3 control)
- Management interface – a "regular" interface with the IP address you ssh to when logging in to the VSM (in L3 control, also used by the VSM to control the VEMs)
- Packet interface – connected to the packet VLAN (unused if doing L3 control)
Note that in the L3 control scenario the VSM still has to have three virtual network adapters, but the first and third adapters are not used for anything.
Nexus 1000v and VM Migration (vMotion)
A central goal of any DVS is to support vMotion in a smooth, seamless way. vMotion is especially challenging in the traditional vSwitch world because it does not work unless the separate vSwitches on the source and destination ESX servers are configured absolutely identically. This becomes a huge scalability problem with many ESX servers, since the vSwitch provides no coordinated configuration management. A DVS presents the abstraction of one single switch across multiple ESX servers and guarantees that there can be no virtual machine networking inconsistencies that would prevent the migration.
In the Nexus 1000v, the DVS port that connects down to a VM (a "veth" port) is an abstraction which naturally supports vMotion. When a VM migrates from one ESX server to another, the "veth" identity of that VM does not change at all; the abstraction is that the VM is still connected to the exact same virtual port both before and after the vMotion, which guarantees that the virtual port configuration is identical before and after the move. From the point of view of the DVS, nothing has changed.
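One way to see this on the VSM (the veth number here is just an example) is to look at the same vEthernet interface before and after the migration:

show interface vethernet 5          ! same veth, same inherited port-profile configuration
show port-profile virtual usage     ! maps veth interfaces to VMs and port profiles
! Only the VEM (module) currently hosting the VM changes across the vMotion.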
Nexus 1000v Features and Benefits
Nexus 1000v provides:
· A single point of management for VM networking across up to 64 ESX hosts (like any DVS)
· An NX-OS command-line interface
· A choice of Layer 2 or Layer 3 control between the VSM and VEMs
· Standard Layer 2 switching features extended down to the VM, such as:
o IP filters (ACLs)
o Private VLANs (isolating VMs completely from each other, or in groups)
o SPAN / ERSPAN (Switched Port Analyzer / Encapsulated Remote SPAN – essentially packet mirroring for analysis; a sketch follows this list)
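For example, an ERSPAN session that mirrors a VM-facing veth port to an external analyzer might be sketched as follows (the session number, source interface, ERSPAN ID, and destination IP are placeholders):

monitor session 1 type erspan-source
  source interface vethernet 5 both   ! mirror traffic to and from this VM port
  destination ip 10.1.1.50            ! analyzer reachable over IP
  erspan-id 100
  no shut                             ! sessions are created shut by default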
Nexus 1000v on UCS B-series Servers with Cisco Virtual NIC
The Cisco Virtual NIC provides flexibility when configuring the Nexus 1000v: one can "manufacture" virtual NICs that are visible to ESX as if they were physical NICs (i.e., uplinks from the VEM).
The main strategy is to always manufacture these vNICs in pairs, configuring one member of each pair to connect to fabric A and the other to fabric B (without A-B failover). Redundancy within the pair is provided by the Nexus 1000v's northbound load-balancing feature (called mac pinning), which allows active/active uplinks without any special configuration on the northbound switch(es).
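A minimal sketch of an uplink (Ethernet-type) port profile using mac pinning (the profile name and VLAN range are placeholders):

port-profile type ethernet N1K-Uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 100-110      ! example VLAN range carried northbound
  channel-group auto mode on mac-pinning     ! active/active uplinks; no port-channel config needed upstream
  no shutdown
  state enabled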
Following are some of the ways of adhering to the vNICs-in-pairs principle:
1. One pair of vNICs that carries all required VLANs upward
2. Two pairs of vNICs (a sketch of the second uplink profile follows this list):
a. VM actual data traffic
b. Everything else (Control, Packet, Management, vMotion traffic)
3. Three pairs of vNICs:
a. VM actual data traffic
b. vMotion, management
c. Control and Packet traffic
4. X pairs of vNICs:
a. VM actual data traffic for one VLAN
b. VM actual data traffic for a different VLAN
c. vMotion for some VMs (on one vMotion VLAN)
d. vMotion for other VMs (on a different vMotion VLAN)
e. etc.
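As noted under choice number 2, that layout might map to two uplink port profiles on the VSM: one like the sketch shown earlier, carrying only the VM data VLAN, and a second one for everything else. A hedged sketch of the second profile follows (VLAN numbers are placeholders); the control, packet, and management VLANs are marked as system VLANs so the VEM forwards them even before it has established communication with the VSM:

port-profile type ethernet Uplink-Infra
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 260-263   ! example: control, packet, management, vMotion
  channel-group auto mode on mac-pinning
  system vlan 260-262                     ! control, packet, and management as system VLANs
  no shutdown
  state enabled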
The following screenshot shows the integrated configuration corresponding to choice number 2 above:
vNIC Configuration from the UCS Point of View
The following screenshot shows the UCS configuration corresponding to choice number 2 in the previous section (one pair of vNICs as uplinks for VM data, and one pair for everything else).
In addition, one more vNIC (the first one) is used for getting the whole thing installed. Since the VSM runs as a virtual machine inside these same ESX servers, it is necessary to start out with the VSM on the regular "old vSwitch" and run all the applicable VLANs (control, packet, etc.) down to this vSwitch. Later, this vNIC can be "abandoned" once all operations are switched over to the N1K.
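Once the migration is done and the VEMs can reach the VSM, a quick sanity check from the VSM might look like this (commands only; output omitted):

show module            ! each VEM should appear as a module (typically module 3 and up)
show svs domain        ! confirms the domain ID and L2/L3 control mode
show svs connections   ! confirms the connection to vCenter is up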
Lab Topology
Following is the Lab Topology:
UCS View of the Lab Topology
Following is the topology of the lab as seen on UCS 1:
LAN View on UCS
VLANs in UCS
In UCS Manager, on the Navigation pane, go to the LAN tab -> LAN Cloud -> VLANs, as shown, and expand the VLANs. Note that these are not under the Fabric A or Fabric B sections: nearly all of the time, VLANs are configured across both fabrics (not doing so would miss the point of how A-B redundancy works in UCS).
UCS Service Profile View:
UCS Storage View
The Prefab ESX Service Profile Template Created for the Organization
1. In UCS Manager, on the Servers tab, navigate to:
Service Profile Templates -> root -> Sub-Organizations -> yourorg
2. Highlight (single click) the template name ESXTemplate:
3. In the Content Pane, click on the Storage tab. You should be looking at something like:
Pay no attention to the "desired order" column for the vHBAs. The important part is that you have two vHBAs for SAN access on the separate A and B VSANs. Note that your node WWN and port WWNs will be chosen from a pool when you create service profiles from the template in the next task.
4. Now click on the Network tab. The vNICs configured should be visible:
a. The first vNIC is for "bootstrapping": it will be attached to the original vSwitch so that the N1K can be installed and all networking can then be migrated to the N1K DVS.
b. The next pair of vNICs, on fabrics A and B (vmnic1 and vmnic2), is for the N1K uplink (VM data VLAN only).
c. The last pair of vNICs, on fabrics A and B (vmnic3 and vmnic4), is for the N1K uplink (all other VLANs besides VM data).
5. Clicking on the Boot Order tab shows that service profiles derived from this template will try to boot from the hard disk first. When the service profiles are first created, the associated hard disks will be scrubbed, so the boot will fall through to the first network adapter and the blade server will PXE-boot into a kickstart install of ESXi.
Use vCenter to Build a Data Center with Two ESX Hosts and Shared Storage
1. Invoke vSphere client and point it to the IP of vCenter
2. Highlight the name of the vCenter server, right-click, and create a new Datacenter. Name it MyDC
3. Highlight MyDC, right-click and select Add Host... On the wizard:
a. Host: enter the IP of the first ESX. Enter the user name and password (root/password) and click Next
b. Accept the host's security certificate alert (click Yes)
c. Confirm the information and click Next
d. Keep it in Evaluation mode and click Next
e. Keep Lockdown Mode unchecked and click Next
f. Highlight MyDC and click Next
g. Review the information and click Finish
The ESX host will be added to vCenter. Once the vCenter agent has been automatically loaded on the ESX host, the host will show as connected. If the host still shows a red warning (it could just be a warning about the amount of RAM), that is fine as long as it is connected.
4. Repeat step 3 and add the second ESX server
5. Highlight the first ESX server.
6. Click the Configuration tab.
7. Click Storage Adapters (inside the Hardware box on the left)
8. Click the Rescan All… link near the upper right
9. On the popup, uncheck Scan for New VMFS Volumes (leave only Scan for New Storage Devices checked). Click OK.
10. Click Storage (inside the Hardware box on the left)
11. You should see the existing Storage1 datastore, which is the boot disk
12. Click Add Storage just above the list of existing datastores. In the wizard:
a. Choose Disk/LUN and click Next (it may scan for a while)
b. Highlight your 60GB LUN #2 (not a larger local disk, or LUN 1) and click Next
c. Select "Assign a new signature" and click Next
d. Review (nothing to choose) and click Next, then click Finish
13. Your new datastore should appear in the list; you may have to wait up to 15 seconds or so (it will be named snap-xxxx)
14. On the left pane, highlight your second ESX server
15. Click the Configuration tab.
16. Click Storage Adapters (inside the Hardware box on the left)
17. Click the Rescan All… link near the upper right
18. On the popup, uncheck Scan for New VMFS Volumes (leave only Scan for New Storage Devices checked). Click OK.
19. Click Storage (inside the Hardware box on the left)
20. You should see the existing Storage1 datastore, which is the boot disk
21. If you do not yet see the shared storage (snap-xxx), click the Refresh link next to the word Datastores, near the upper right.
22. Your new datastore (snap-xxx) should appear in the list
Switch Between "Hosts and Clusters" and "Networking" Views in vSphere Client
There are a few ways to navigate back and forth; the easiest is to click on the word "Inventory":
Switch to the "Networking" view, as shown.
The display should look like the following. Note that since the DVS does not exist yet, all you are seeing are your "old vSwitch" port groups: