March 20, 2014

Locator ID Separation Protocol (LISP)

In this blog, I will explain how the Data Plane and Control Plane work in a LISP environment. I assume the reader already knows what LISP is and why it is deployed.

LISP Data Plane:

1. The source host sends an IP packet destined to another LISP site. The source IP is the EID of the source host and the destination IP is the EID of the destination host. Remember, the hosts are not aware of the LISP environment.

2.  This packet gets forwarded normally towards the edge router (ITR) of the EID domain.

3. The ITR looks to see if it has a mapping for the destination EID in its map-cache. If it does, the ITR encapsulates the original IP packet into a LISP packet: the outer header's source IP will be the ITR's RLOC and the destination IP will be the RLOC found in the map-cache. The ITR then forwards the packet into the RLOC space. If the mapping is not found in the map-cache, the ITR will initiate a map-request; this process is explained in the Control Plane section below. (A sketch of this encapsulation and decapsulation logic follows this list.)

4. The LISP-encapsulated packet will be routed normally in the RLOC space and will arrive at the ETR of the destination EID site.

5. The ETR will examine the LISP packet. If the destination IP in the outer header is its own RLOC, it will decapsulate the LISP packet, extract the original packet, and forward it inside the EID domain towards the final destination.
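
To make the data-plane steps above a little more concrete, here is a minimal Python sketch of the ITR encapsulation and ETR decapsulation logic. This is only an illustration: the Packet structure, the map_cache dictionary, and the send_map_request() placeholder are simplified assumptions, not a real LISP implementation.

```python
# Minimal sketch of LISP data-plane forwarding (illustrative only).
# Headers are modelled as simple objects; the map-cache is a dict keyed by
# destination EID (exact EIDs here, instead of prefixes, for simplicity).

from dataclasses import dataclass

@dataclass
class Packet:
    src: str          # source IP (EID in the inner header, RLOC in the outer header)
    dst: str          # destination IP
    payload: object   # inner packet or application data

class ITR:
    def __init__(self, my_rloc):
        self.my_rloc = my_rloc
        self.map_cache = {}          # EID -> RLOC mappings learned via map-replies

    def forward(self, inner_pkt):
        rloc = self.map_cache.get(inner_pkt.dst)
        if rloc is None:
            # Cache miss: trigger the control plane (see the Control Plane section).
            self.send_map_request(inner_pkt.dst)
            return None
        # Cache hit: LISP-encapsulate -- outer source = my RLOC, outer dest = remote RLOC.
        return Packet(src=self.my_rloc, dst=rloc, payload=inner_pkt)

    def send_map_request(self, eid):
        print(f"map-cache miss for {eid}: sending map-request")

class ETR:
    def __init__(self, my_rloc):
        self.my_rloc = my_rloc

    def receive(self, outer_pkt):
        # Decapsulate only if the outer destination is our own RLOC.
        if outer_pkt.dst == self.my_rloc:
            inner = outer_pkt.payload
            print(f"decapsulated; forwarding to EID {inner.dst} inside the site")
            return inner
        return None

# Example: host 10.1.1.1 (EID) sends to 10.2.2.2 (EID) across the RLOC space.
itr = ITR(my_rloc="192.0.2.1")
itr.map_cache["10.2.2.2"] = "198.51.100.1"
etr = ETR(my_rloc="198.51.100.1")
lisp_pkt = itr.forward(Packet(src="10.1.1.1", dst="10.2.2.2", payload="data"))
etr.receive(lisp_pkt)
```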

LISP Control Plane:

The main job of the LISP control plane is to provide the EID-to-RLOC mappings to the requesting ITRs.

1. The ETRs responsible for an EID space will send a map-register message to their configured Map Server (MS).

2. The MS will advertise (inject) the EID prefix into the ALT topology, just like any other prefix in BGP.

3. The Map Resolvers (MR) will learn the EID prefix via the ALT topology. Remember, the MS and MR are part of the ALT topology. The MR now knows how to reach the EID prefix.

4. Now, when the ITR does not have an EID-to-RLOC mapping in its map-cache (see step 3 in the Data Plane section), it sends a map-request message to its configured MR. If the ITR is not part of the ALT, it will encapsulate the map-request and send it towards the IP address of the MR. Also remember, the map-request itself carries the destination EID being queried as its destination IP, while the encapsulating header is addressed to the MR.

If the ITR is part of the ALT topology, it will simply forward the map-request in the ALT. Remember, the ALT routers have the EID prefixes in their routing tables. In this case, we assume the ETR is also part of the ALT, so the map-request reaches the ETR directly, without using the MR/MS. The MR/MS mapping infrastructure is used to scale LISP for large deployments, such as the Internet.

5. The MR will decapsulate the map-request and forward it in the ALT. The map-request has the destination EID as its destination IP, so it will be forwarded towards the MS over the ALT. Remember, the MS is the one that injected the EID prefix into the ALT.

6. When the map-request reaches the MS, the MS will consult its mapping database to find the RLOC for the EID in question. The MS knows this information because the ETR registered with it in step 1.

7. The MS will encapsulate the map-request and forward it to the RLOC (the ETR's outside IP address).

8. The ETR will receive the map-request and send a map-reply directly to the ITR's RLOC (its outside IP address).

9. The ITR will receive the EID-to-RLOC mapping and keep it in its map-cache for future use.
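
To tie these control-plane steps together, here is a minimal Python sketch that models the map-register, map-request, and map-reply hand-offs between an ETR, a Map Server, a Map Resolver, and an ITR. The class and method names are invented for illustration, and the ALT is collapsed into a direct MR-to-MS hand-off; real implementations exchange the UDP messages defined in the LISP specifications.

```python
# Illustrative model of the LISP control-plane exchange (not a real implementation).

class MapServer:
    def __init__(self):
        self.registrations = {}      # EID prefix -> registering ETR

    def map_register(self, eid_prefix, etr):
        # Step 1: the ETR registers its EID prefix with the MS.
        self.registrations[eid_prefix] = etr

    def handle_map_request(self, eid, itr):
        # Steps 6-7: look up the registered ETR and forward the map-request to it.
        self.registrations[eid].handle_map_request(eid, itr)

class MapResolver:
    def __init__(self, map_server):
        # In this sketch the ALT is collapsed into a direct MR -> MS hand-off.
        self.map_server = map_server

    def handle_map_request(self, eid, itr):
        # Step 5: forward the map-request towards the MS that injected the EID.
        self.map_server.handle_map_request(eid, itr)

class ETR:
    def __init__(self, rloc):
        self.rloc = rloc

    def handle_map_request(self, eid, itr):
        # Step 8: reply directly to the requesting ITR with the EID-to-RLOC mapping.
        itr.receive_map_reply(eid, self.rloc)

class ITR:
    def __init__(self, map_resolver):
        self.map_resolver = map_resolver
        self.map_cache = {}

    def send_map_request(self, eid):
        # Step 4: on a map-cache miss, ask the configured MR.
        self.map_resolver.handle_map_request(eid, self)

    def receive_map_reply(self, eid, rloc):
        # Step 9: install the mapping in the map-cache for future packets.
        self.map_cache[eid] = rloc

# Example flow for EID 10.2.2.2 registered by an ETR at RLOC 198.51.100.1.
ms = MapServer()
etr = ETR(rloc="198.51.100.1")
ms.map_register("10.2.2.2", etr)
itr = ITR(map_resolver=MapResolver(ms))
itr.send_map_request("10.2.2.2")
print(itr.map_cache)                  # {'10.2.2.2': '198.51.100.1'}
```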

Thanks for reading. Please leave a comment.





March 2, 2014

Network Adapter, NIC, VIC, vNIC, vmnic, veth, NIV, HBA, vHBA, vFC, etc.

Welcome to the world of Network Virtualization...

In the old world of physical servers, we have heard about NICs and HBAs. NIC is also called the Network Adapter.

Stepping into the new world of server virtualization, we see a lot of new acronyms describing how physical servers and virtual machines talk among themselves and to the outside world. If you are a network engineer getting to know the server virtualization world better, you will, for sure, encounter these confusing acronyms. If you are not clear on what they mean, it can be hard to understand the concepts behind network virtualization.

While the objective of this blog is to clarify these acronyms, it does not explain network virtualization concepts.

Scott Lowe has a wonderful blog on Understanding Network Virtualization.

Network Adapter: This is from the old world of physical servers. A Network Adapter (aka NIC) is a physical hardware device that sits on the PCI bus of an x86 server and is used for transmitting and receiving data traffic to and from the physical server. The server operating system sees this as a single network adapter for that server. The physical port on the network adapter is cabled to the access-layer network switch. Simple and straightforward.

NIV: NIV stands for Network Interface Virtualization. This is a technology used to virtualize network adapters (on an NIV-capable physical Network Adapter) or interfaces (on an NIV-capable physical switch). The NIV-capable Network Adapter is also called an interface virtualizer.

VIC: VIC stands for Virtual Interface Card. Though the name has 'virtual' in it, this is a physical adapter that is capable of creating virtual adapters (both vNICs and vHBAs) within itself. That is, within a single physical VIC, we can create multiple logical adapters of type NIC or HBA. These logical adapters are called vNICs (virtual Network Interface Cards) and vHBAs (virtual Host Bus Adapters). In Cisco UCS, the M81KR is a VIC (mezzanine card) used with UCS B-Series blade servers, and the P81E is a VIC used with UCS C-Series rack-mount servers. These VICs can create up to 128 virtual adapters within a single physical adapter.
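
As a rough mental model of this relationship, here is a small Python sketch of a VIC carving out logical vNIC and vHBA adapters. Only the adapter types, the example model names, and the 128-adapter limit come from the description above; everything else (class names, the adapter dictionary) is a simplified assumption.

```python
# Simplified model of a VIC creating logical adapters (vNICs and vHBAs).
# The OS/hypervisor would see each logical adapter as a separate PCI device.

class VIC:
    """A physical Virtual Interface Card, e.g. a UCS M81KR or P81E."""
    MAX_ADAPTERS = 128    # upper bound on logical adapters per physical VIC

    def __init__(self, model):
        self.model = model
        self.adapters = []      # logical vNICs and vHBAs created on this card

    def create_adapter(self, kind, name):
        if len(self.adapters) >= self.MAX_ADAPTERS:
            raise RuntimeError("VIC adapter limit reached")
        if kind not in ("vNIC", "vHBA"):
            raise ValueError("kind must be 'vNIC' or 'vHBA'")
        adapter = {"kind": kind, "name": name}
        self.adapters.append(adapter)
        return adapter

# Example: one VIC presenting two vNICs (data) and two vHBAs (storage) to the OS.
vic = VIC("M81KR")
for name in ("eth0", "eth1"):
    vic.create_adapter("vNIC", name)
for name in ("fc0", "fc1"):
    vic.create_adapter("vHBA", name)
print([(a["kind"], a["name"]) for a in vic.adapters])
```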

vNIC: Virtual NIC. The definition of vNIC differs between Cisco and VMware.

In the Cisco world, this is the logical adapter created within the physical VIC, as described above. vNICs are visible to the server operating system as PCI devices. In other words, the server OS (or the hypervisor in a virtualized environment) treats these logical adapters as though they were individual physical adapters.

In the VMware world, a vNIC is the logical adapter belonging to a virtual machine's guest operating system and is internally connected to a vSwitch (such as the VEM of the Nexus 1000v). The physical adapter attached to the ESXi host is the vmnic.

vmnic: Despite its name (virtual machine network interface card), a vmnic is a physical adapter on a VMware ESXi host. On an ESXi host, the vmnic is the physical adapter that is physically connected to the access-layer switch. The vmnic is also internally connected to the vSwitch (such as the Nexus 1000v) and serves as an uplink port for the VEM (Virtual Ethernet Module) of the Nexus 1000v.

veth: This is a logical Ethernet interface created on an NIV-capable switch, such as the Nexus 1000v, Nexus 5000, Nexus 7000, or UCS Fabric Interconnect. For example, a veth is created (either automatically or manually) on the VEM of the Nexus 1000v whenever a virtual machine is created by the VMware administrator and a port-group is associated with the VM's vNIC. The VM's vNIC and its corresponding veth interface are logically connected by a virtual network link (VN_Link), and this VN_Link uses a VN_Tag.
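
Here is a small Python sketch of that idea: an NIV-capable switch allocating a veth interface and a VN_Tag when a VM's vNIC is attached to a port-group. The class name, the numbering scheme, and the starting tag value are invented for illustration; only the veth/VN_Tag binding concept comes from the paragraph above.

```python
# Simplified model of an NIV-capable switch (e.g. a VEM) creating a veth
# interface and assigning a VN_Tag when a VM's vNIC joins a port-group.

import itertools

class NIVSwitch:
    def __init__(self):
        self._veth_ids = itertools.count(1)
        self._vn_tags = itertools.count(10)      # arbitrary starting tag for this sketch
        self.bindings = {}                       # vNIC name -> (veth, VN_Tag, port-group)

    def attach_vnic(self, vnic_name, port_group):
        """Create a veth for the vNIC and bind them with a VN_Tag (the VN_Link)."""
        veth = f"veth{next(self._veth_ids)}"
        vn_tag = next(self._vn_tags)
        self.bindings[vnic_name] = (veth, vn_tag, port_group)
        return veth, vn_tag

# Example: two VM vNICs attached to port-groups on the same VEM.
vem = NIVSwitch()
print(vem.attach_vnic("vm1-vnic0", port_group="app-vlan10"))   # ('veth1', 10)
print(vem.attach_vnic("vm2-vnic0", port_group="db-vlan20"))    # ('veth2', 11)
```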

HBA: Host Bus Adapter. This is the traditional physical adapter on a physical server that is used to physically connect the server to a Fibre Channel Network.

vHBA: Virtual HBA. This is a logical HBA created on a physical VIC, such as the Cisco UCS M81KR. Just as multiple logical network adapters (vNICs) can be created on a physical VIC, multiple logical HBAs (vHBAs) can also be created on a VIC such as the M81KR or P81E. The server OS (or the hypervisor in a virtualized environment) will see these logical HBAs as individual physical PCI devices.

vFC: Virtual Fibre Channel interface. This is a logical Fibre Channel interface created on an NIV-capable switch (such as the Nexus 5500 or UCS Fabric Interconnect), just like the veth interface for data. The logical HBA (vHBA) on the server (the N_Port in Fibre Channel terminology) is logically connected to the vFC (the F_Port in native Fibre Channel terminology) on the switch, just as a vNIC is connected to a veth. A VN_Tag is used to identify this logical connection over the physical cable. This is the case when you use FCoE (Fibre Channel over Ethernet) technology.

I hope I have brought some clarity to these acronyms. Please leave a comment and feel free to correct me if any information is incorrect.

Thanks for reading.

Mohan Muthu