Vyenkatesh Deshpande posted May 3, 2013
In the last post here, I provided some details on vSphere hosts configured as VTEPs in a VXLAN deployment. I also briefly mentioned that multicast protocol support is required in the physical network for VXLAN to work. Before I discuss how multicast is utilized in a VXLAN deployment, I want to cover some of the basics of multicast.
In the diagram below you see three main types of communication modes that are common in a network – Unicast, Broadcast and Multicast.
Unicast (Fig 1-A) is best for one-to-one communication, while broadcast (Fig 1-B) is best utilized when a message has to be delivered to all nodes in a network. The devices in the network are capable of supporting both Unicast and Broadcast traffic. However, when a message has to be delivered to only a selected few nodes in the network, as shown in Fig 1-C, the unicast and broadcast modes are not efficient. For example, if Unicast mode is used, the node on the left has to send the message to the first interested node, and then send a separate copy of the same message to the second interested node.
Multicast protocol support in the network allows optimal delivery for one to many communications. Instead of the end nodes sending multiple copies of message, the switches and routers perform that job.
How does Multicast work in IP network?
First of all, a unique IP address range is assigned for multicast group IP addresses. It is the Class D address range from 224.0.0.0 to 239.255.255.255. Each address in this range designates a multicast group, and some of the addresses are reserved.
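As a quick sanity check, Python's standard `ipaddress` module can tell you whether a given address falls in this Class D range. A minimal sketch (the example addresses are arbitrary):

```python
import ipaddress

def is_multicast(addr: str) -> bool:
    """True if addr is in the Class D range 224.0.0.0 - 239.255.255.255."""
    return ipaddress.ip_address(addr).is_multicast

print(is_multicast("239.1.1.100"))   # True: a valid multicast group address
print(is_multicast("192.168.1.10"))  # False: an ordinary unicast address
```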
Any node (computer/user) in the network can join a multicast group using the Internet Group Management Protocol (IGMP). For example, in Fig 1-C the two nodes on the right have joined the multicast group address 239.1.1.100.
After the IGMP join requests, when an IP datagram with the destination IP address of a multicast group is sent, it gets forwarded to every node that has joined that multicast group. For example, in Fig 1-C the node on the left sends a packet with destination IP address 239.1.1.100. The network then delivers the packet to the two nodes on the right who had joined that multicast group earlier.
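On an ordinary host, joining a multicast group is a one-line socket option; the operating system then emits the IGMP membership report on your behalf. A minimal sketch, assuming the example group address 239.1.1.100; the join itself may fail on a host with no multicast-capable interface, hence the try/except:

```python
import socket
import struct

GROUP = "239.1.1.100"  # example multicast group address

def make_membership(group: str) -> bytes:
    # ip_mreq structure: 4-byte group address + 4-byte local interface
    # (0.0.0.0 = INADDR_ANY lets the kernel pick the interface)
    return struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # This setsockopt causes the kernel to send an IGMP membership report.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_membership(GROUP))
    print(f"joined {GROUP}")
except OSError as exc:
    print(f"join failed (no multicast-capable interface?): {exc}")
finally:
    sock.close()
```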
The devices in the network (Layer 2 switches and Layer 3 routers) run multicast protocols to support this optimal delivery of packets to the selected group of nodes. The following are some of the key protocols that are used in a multicast-supported network:
– Internet Group Management Protocol (IGMP v1, v2, v3). IGMP manages multicast groups using Query and Report messages.
– Multicast routing protocols – Protocol Independent Multicast (PIM) in its different modes (sparse, dense)
The network devices use these protocols to learn about which nodes have joined which multicast groups and where the nodes are in the network.
When it comes to VXLAN, the multicast support requirements in the physical network are dictated by the number of transport VLANs used in the design. As mentioned in the last post, the transport VLAN carries VXLAN encapsulated traffic. If you are using a single transport VLAN, there is no need for a multicast routing protocol (PIM). However, you need the following functions enabled on the switches and routers:
– IGMP snooping on Layer 2 Switch
– IGMP Querier on the Router
What is IGMP snooping? And how does it work?
We saw that multicast optimizes the delivery of packets to the interested nodes. So the question is: how do layer 2 network devices know which nodes are interested in which conversations or multicast groups?
The layer 2 switches monitor the IGMP query and report messages to find out which switch ports are subscribed to which multicast group. This functionality of a layer 2 switch is called IGMP snooping. The diagram below shows an example where there are two servers on the right streaming two different webcasts A and B. The users on the left choose to subscribe to a particular webcast by sending IGMP report messages.
IGMP Join request
The Layer 2 switch monitors the IGMP packets sent by the users and makes entries in its forwarding table recording the membership of particular multicast addresses. As you can see, multicast group address 239.1.1.100 is associated with Webcast A and 239.1.1.200 with Webcast B. In this example, Ports 1 and 2 are members of the multicast group 239.1.1.100, while Ports 3 and 4 are members of 239.1.1.200.
The diagram below shows how the Webcast A packets with destination IP address 239.1.1.100 (Orange Arrow) sent to port 10 are only replicated to ports 1 and 2 of the switch. Similarly, the Webcast B traffic (Green Arrow) is only sent to ports 3 and 4. The user connected to port 5 is not subscribed to any webcast, so it won't receive any multicast traffic.
This shows how IGMP snooping capability on a physical switch optimizes the multicast packet delivery. Note that in this example each user has joined only one multicast group, but in reality they can join any number of multicast groups.
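The forwarding logic the switch applies can be sketched in a few lines. This is a toy model, not switch firmware: a table maps each multicast group to the set of subscribed ports, and a multicast frame is replicated only to those ports (the group addresses mirror the webcast example above):

```python
# Toy model of an IGMP-snooping forwarding table (group -> set of ports).
table: dict[str, set[int]] = {}

def igmp_report(port: int, group: str) -> None:
    """A host on `port` sends an IGMP report (join) for `group`."""
    table.setdefault(group, set()).add(port)

def forward_ports(group: str) -> set[int]:
    """Ports a frame destined to `group` is replicated to (empty if none)."""
    return table.get(group, set())

# Ports 1-2 join Webcast A's group, ports 3-4 join Webcast B's group.
igmp_report(1, "239.1.1.100")
igmp_report(2, "239.1.1.100")
igmp_report(3, "239.1.1.200")
igmp_report(4, "239.1.1.200")

print(sorted(forward_ports("239.1.1.100")))  # [1, 2]
print(sorted(forward_ports("239.1.1.200")))  # [3, 4]
print(sorted(forward_ports("239.9.9.9")))    # []  (no subscribers)
```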
Why do you need IGMP querier?
IGMP querier is a function of a router, and it is important to enable it for proper IGMP snooping operation on layer 2 switches. We looked at how users join a multicast group by sending IGMP report messages. The IGMP querier periodically sends query messages, and the users respond with membership reports to refresh their subscriptions. Without an IGMP querier sending these periodic queries, the users do not send periodic membership reports. As a result, the entries in the layer 2 switch time out and multicast traffic is no longer delivered.
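That aging behavior can be sketched with the same kind of toy model: each snooping entry carries the time of the last report seen, and with no querier prompting fresh reports, entries simply expire. The timeout value here is arbitrary and purely illustrative:

```python
# Toy model of snooping-entry aging (not real switch behavior).
TIMEOUT = 260.0  # seconds; arbitrary illustrative value

# (group, port) -> time of last IGMP report seen
last_report: dict[tuple[str, int], float] = {}

def see_report(group: str, port: int, now: float) -> None:
    """Record an IGMP report (normally sent in response to a query)."""
    last_report[(group, port)] = now

def live_ports(group: str, now: float) -> set[int]:
    """Ports whose membership entry has not aged out."""
    return {p for (g, p), t in last_report.items()
            if g == group and now - t < TIMEOUT}

see_report("239.1.1.100", 1, now=0.0)
print(live_ports("239.1.1.100", now=100.0))   # {1}: entry still fresh
print(live_ports("239.1.1.100", now=1000.0))  # set(): aged out, delivery stops
```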
I hope this clarifies some of the commonly used multicast terminology and how basic things work in multicast. In the next post, I will cover the following things:
– Explain the relation between a Layer 2 logical network in VXLAN and a multicast group.
– Explain when and how multicast is used in VXLAN.
– Explain that not all traffic is multicast in a VXLAN deployment.
Here are the links to Part 1, Part 3, Part 4, Part 5
Get notification of these blogs postings and more VMware Networking information by following me on Twitter: @VMWNetworking.
About the Author
Vyenkatesh (Venky) Deshpande – is a Sr. Technical Marketing Manager at VMware and he is focused on the Networking aspects in the vSphere platform and vCloud Networking and Security product. Follow Venky on twitter @VMWNetworking.