Mar 02

What’s New in VMware® Virtual SAN™


The annual VMware® user conference, VMworld®, introduced the vision of VMware for the software-defined data center (SDDC) in 2012. The SDDC is the VMware cloud architecture in which all pillars of the data center—including compute, storage, networks, and associated services—are virtualized. In this white paper, we look at one aspect of the VMware SDDC, the storage pillar. We specifically discuss how a new product, VMware Virtual SAN™, fits into this vision.

VMware Virtual SAN

Virtual SAN is a new software-defined storage solution that is fully integrated with vSphere. Virtual SAN aggregates locally attached disks in a vSphere cluster to create a storage solution that can be rapidly provisioned from VMware vCenter™ during virtual machine provisioning operations. It is an example of a hypervisor-converged platform: a solution in which storage and compute for virtual machines are combined into a single device, with storage provided within the hypervisor itself rather than by a storage virtual machine running alongside other virtual machines.

Virtual SAN is an object-based storage system designed to provide virtual machine–centric storage services and capabilities through a storage policy–based management (SPBM) platform. SPBM and virtual machine storage policies are solutions designed to simplify virtual machine storage placement decisions for vSphere administrators.
Virtual SAN is fully integrated with core vSphere enterprise features such as VMware vSphere High Availability (vSphere HA), VMware vSphere Distributed Resource Scheduler™ (vSphere DRS), and VMware vSphere vMotion®.
Its goal is to provide both high availability and scale-out storage functionality. It also can be considered in the context of quality of service (QoS) because virtual machine storage policies can be created to define the levels of performance and availability required on a per–virtual machine basis.


Download the full What’s New in VMware® Virtual SAN™ technical white paper.

Rating: 5/5

Jan 21

vSphere Distributed Switch – Design and Best Practices

VMworld 2013: Session NET5521 – vSphere Distributed Switch – Design and Best Practices

NOTE: This video is roughly 55 minutes long, so it is worth blocking out some time to watch it!

Rating: 5/5

Jan 08

VMware Releases vCenter Log Insight 1.5

VMware has announced the release of VMware vCenter Log Insight 1.5, which includes new features for enterprise readiness, and support for an additional range of enterprise log sources.

VMware vCenter Log Insight delivers automated log management through aggregation, analytics and search, enabling operational intelligence and enterprise-wide visibility in dynamic hybrid cloud environments.

The bulk of the work done on vCenter Log Insight 1.5 was to make it a more enterprise-ready product.  As an example, VMware has added support for Microsoft’s Active Directory to make vCenter Log Insight easier to integrate into an enterprise environment. This eliminates the need for multiple logins/passwords and allows for seamless integration into an organization’s pre-existing identity management architecture.

Some of the other features added to the product include:

  • Improved Performance for Frequently Executed Queries and Dashboards
  • New Analytics Function: Unique Count (ucount)
  • Improved Content Pack Framework
  • Better Integration with VMware vSphere and vCenter Operations Manager
  • UI-based in-place upgrades
  • Simplified deployment types, with guidance during installation on the capacity each one provides
  • Improved Health Monitoring of the Log Insight Virtual Appliance

VMware said its goal is to be able to collect all operational data in the data center, both structured and unstructured. To further this goal, the improved content pack framework in vCenter Log Insight 1.5 allows you to produce charts, alerts, and dashboards for user-specific logs. It also includes many pre-built content packs for Dell, EMC, and others.

You can download the latest software from the VMware Product Evaluation Center.


May 04

VXLAN Series – Multicast Basics – Part 2

Vyenkatesh Deshpande posted May 3, 2013

In the last post, I provided some details on vSphere hosts configured as VTEPs in a VXLAN deployment. I also briefly mentioned that multicast protocol support is required in the physical network for VXLAN to work. Before discussing how multicast is utilized in a VXLAN deployment, I want to briefly cover some of the basics of multicast.

In the diagram below you see the three main communication modes that are common in a network: unicast, broadcast, and multicast.

Basic Multicast

Figure 1

Unicast mode (Fig 1-A) is best for one-to-one communication, while broadcast (Fig 1-B) is best utilized when a message has to be delivered to all nodes in a network. The devices in the network are capable of supporting unicast and broadcast traffic. However, when a message has to be delivered to only a selected few nodes in the network, as shown in Fig 1-C, unicast and broadcast modes are not efficient. For example, if unicast mode is used, the node on the left has to send a separate copy of the same message to each of the two receiving nodes.

Multicast protocol support in the network allows optimal delivery for one-to-many communications. Instead of the end nodes sending multiple copies of the message, the switches and routers perform that job.

How does Multicast work in IP network?

First of all, a unique IP address range is assigned for multicast group addresses: the Class D range from to Each address in this range designates a multicast group, and some of the addresses are reserved.

Any node (computer/user) in the network can join a multicast group using the Internet Group Management Protocol (IGMP). For example, in Fig 1-C the two nodes on the right have joined a multicast group.

After the IGMP join requests, when an IP datagram with the destination IP address of a multicast group is sent, it gets forwarded to every node that has joined that multicast group. For example, in Fig 1-C the node on the left sends a packet with the group’s address as the destination IP, and the network delivers the packet to the two nodes on the right that joined that multicast group earlier.

The devices in the network (Layer 2 switches and Layer 3 routers) run multicast protocols to support this optimal delivery of packets to the selected group of nodes. The following are some of the key protocols used in a multicast-supported network:

– Internet Group Management Protocol (IGMP v1, v2, v3), which manages multicast groups using Query and Report messages

– Multicast routing protocols, such as Protocol Independent Multicast (PIM) in its different modes (sparse, dense)

The network devices use these protocols to learn about which nodes have joined which multicast groups and where the nodes are in the network.

When it comes to VXLAN, the multicast support requirements in the physical network are dictated by the number of transport VLANs used in the design. As mentioned in the last post, the transport VLAN carries VXLAN-encapsulated traffic. If you are using a single transport VLAN, there is no need for a multicast routing protocol (PIM). However, you need the following functions enabled on the switches and routers:

– IGMP snooping on Layer 2 Switch
– IGMP Querier on the Router
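The exact commands vary by vendor; as a hedged illustration only, on a Cisco IOS-style switch these two functions are typically enabled with something like:

```
! Layer 2 switch: enable IGMP snooping (often on by default)
ip igmp snooping

! When no multicast router is present on the transport VLAN,
! have the switch provide the IGMP querier function itself
ip igmp snooping querier
```

Check your platform's documentation for the equivalent commands and defaults; some switches enable snooping globally but require the querier to be configured per VLAN.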

What is IGMP snooping? And how does it work?

We saw that multicast optimizes the delivery of packets to the interested nodes. So the question is: how do Layer 2 network devices know which nodes are interested in which conversations or multicast groups?

The layer 2 switches monitor the IGMP query and report messages to find out which switch ports are subscribed to which multicast group. This functionality of a layer 2 switch is called IGMP snooping. The diagram below shows an example where there are two servers on the right streaming two different webcasts A and B. The users on the left choose to subscribe to a particular webcast by sending IGMP report messages.

IGMP Join request

The Layer 2 switch monitors the IGMP packets sent by the users and makes entries in its forwarding table recording membership in particular multicast addresses. One multicast group address is associated with Webcast A and another with Webcast B. In this example, ports 1 and 2 are members of Webcast A’s multicast group, while ports 3 and 4 are members of Webcast B’s group.

The diagram below shows how the Webcast A packets (orange arrow) sent to port 10 are replicated only to ports 1 and 2 of the switch. Similarly, the Webcast B traffic (green arrow) is sent only to ports 3 and 4. The user connected to port 5 is not subscribed to either webcast, so it receives no multicast traffic.

Multicast Packets

This shows how the IGMP snooping capability on a physical switch optimizes multicast packet delivery. Note that in this example each user has joined only one multicast group, but in reality a node can join any number of multicast groups.

Why do you need IGMP querier?

The IGMP querier is a function of a router, and it is important to enable it for proper IGMP snooping operation on Layer 2 switches. We looked at how users join a multicast group by sending IGMP report messages. The querier periodically sends IGMP query messages, and group members respond with fresh membership reports. Without a querier to respond to, users do not send these periodic reports; as a result, the entries in the Layer 2 switch time out and multicast traffic is not delivered.

I hope this clarifies some of the commonly used multicast terminology and how the basics work. In the next post, I will cover the following:

– The relationship between a Layer 2 logical network in VXLAN and a multicast group.

– When and how multicast is used in a VXLAN deployment.

– Why not all traffic in a VXLAN deployment is multicast.

Here are the links to Part 1, Part 3, Part 4, Part 5

Get notified of these blog postings and more VMware networking information by following me on Twitter: @VMWNetworking.

About the Author

Vyenkatesh (Venky) Deshpande is a Sr. Technical Marketing Manager at VMware, focused on the networking aspects of the vSphere platform and the vCloud Networking and Security product. Follow Venky on Twitter: @VMWNetworking.

Apr 29

VXLAN Series – Different Components – Part 1

Vyenkatesh Deshpande posted April 29, 2013

In the last six months, I have talked to many customers and partners about Virtual eXtensible Local Area Network (VXLAN). One of the things I found challenging was explaining the technology to two different types of audiences. On one hand, there are virtual infrastructure administrators who want to know what problems this new technology will solve for them and what the use cases are. On the other hand, there are networking folks who want to dig into packet flows and all the innate protocol-level details: how this technology compares with others, what its impact is on the physical devices in the network, and so on.

The papers we have made available, the Network Virtualization Design Guide and the NSX Installation Guide, provide some basic knowledge about the technology, its use cases, and step-by-step deployment instructions. However, some of the detailed packet flow scenarios are not explained in these papers, so I thought it would be a good idea to put together a series of posts discussing packet flows in a VXLAN environment. There are also many common questions that I would like to address as part of this series.

To start the series, I will first describe the different components of VMware’s VXLAN implementation.

VXLAN Components

The diagram above shows a deployment of two compute clusters that are configured with VXLAN components running on each vSphere host.

VXLAN is an overlay network technology. An overlay network can be defined as any logical network that is created on top of existing physical networks. VXLAN creates Layer 2 logical networks on top of an IP network. The following are two key traits of an overlay technology:

– It encapsulates original packets in a new header. For example, IPsec VPN, an overlay technology, encapsulates the original IP frame in another IP header.

– Communication is typically established between two tunnel endpoints. For example, in an IPsec-based VPN running over the public Internet, the tunnels are established between two sites.

When you apply those overlay traits to VXLAN, you will see that VXLAN encapsulates original MAC frames into a UDP header (shown below), and all vSphere hosts participating in VXLAN act as tunnel endpoints. They are called Virtual Tunnel Endpoints (VTEPs).



VXLAN – Encapsulation Header
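The VXLAN portion of that encapsulation header is compact: per the VXLAN specification (RFC 7348), an 8-byte header carrying a 24-bit network identifier sits inside the outer UDP datagram, directly in front of the original MAC frame. A sketch of building that header (the VNI value is illustrative):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: 8 flag bits (I bit set means the
    VNI is valid), 24 reserved bits, the 24-bit VXLAN Network
    Identifier, and 8 more reserved bits."""
    assert 0 <= vni < 2 ** 24
    flags = 0x08  # the I flag: a valid VNI follows
    return struct.pack("!II", flags << 24, vni << 8)

# A VTEP would transmit: outer MAC + outer IP + outer UDP +
# vxlan_header(vni) + original MAC frame.
hdr = vxlan_header(5001)                # 5001 is an illustrative VNI
print(len(hdr))                         # -> 8
print(int.from_bytes(hdr[4:7], "big"))  # -> 5001 (the VNI)
```

The 24-bit VNI is what allows VXLAN to scale far beyond the 4094 segments of a VLAN ID.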

VTEPs are the nodes that provide the encapsulation and de-encapsulation function. When we go through the detailed packet flows, it will become clear how these VTEPs encapsulate and de-encapsulate traffic from any virtual machine connected to a VXLAN-based Layer 2 logical network, or virtual wire. The virtual tunnel endpoint (VTEP) configured on every vSphere host consists of the following three modules:

1) VMware Installation Bundle (VIB), or vmkernel module – VTEP functionality is part of the VDS and is installed as a VMware Installation Bundle (VIB). This module is responsible for VXLAN data path processing, which includes maintaining forwarding tables and encapsulating and de-encapsulating packets.

2) vmknic virtual adapter – This adapter is used to carry control traffic, which includes response to multicast join, DHCP, and ARP requests. As with any vmknic, a unique IP address is assigned per host. The IP address is used as the VTEP IP while establishing host-to-host tunnels to carry VXLAN traffic.

3) VXLAN port group – This is configured during the initial VXLAN configuration process. It includes physical NICs, VLAN information, teaming policy, and so on. These port group parameters dictate how VXLAN traffic is carried in and out of the host VTEP through the physical NICs. As shown in the diagram, VLAN 2000 is used as the transport VLAN for VXLAN traffic. The transport VLAN has no relation to the logical Layer 2 networks or virtual wires that you will create.

The configuration of the VTEP on each vSphere host is managed from a central place, the vCloud Networking and Security Manager. One of the common questions I get is whether this manager acts as a controller similar to an OpenFlow controller. The answer is no: in VXLAN, no special controller or control plane is required. The next question, then, is how a forwarding table is created in VXLAN. In a physical switch infrastructure, forwarding table information is what helps deliver packets to the right destination.

In VXLAN, all learning of virtual machine MAC addresses and their association with VTEP IPs is performed with the support of the physical network. One of the protocols utilized in the physical network is IP multicast, and VXLAN makes use of it to populate the forwarding tables in the VTEPs.
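Conceptually, the table a VTEP builds maps an inner VM MAC address (per logical network) to the VTEP IP behind which it lives; unknown destinations fall back to flooding via multicast. A toy sketch, with made-up MACs, IPs, and VNI:

```python
# Toy sketch of a VTEP forwarding table: when a VTEP de-encapsulates
# a VXLAN packet, it records that the inner source MAC is reachable
# via the outer source VTEP IP. Known destinations can then be
# tunneled directly; unknown ones are flooded on the segment's
# multicast group.

forwarding_table = {}  # (vni, vm_mac) -> remote VTEP IP

def learn(vni, inner_src_mac, outer_src_ip):
    """Called on receipt of a de-encapsulated VXLAN packet."""
    forwarding_table[(vni, inner_src_mac)] = outer_src_ip

def lookup(vni, dst_mac):
    # None means "unknown MAC": flood via the VNI's multicast group.
    return forwarding_table.get((vni, dst_mac))

learn(5001, "00:50:56:aa:bb:01", "")
print(lookup(5001, "00:50:56:aa:bb:01"))  # ->
print(lookup(5001, "00:50:56:aa:bb:02"))  # -> None (flood case)
```

The keying on (VNI, MAC) reflects that the same MAC address may legitimately appear in two different logical networks.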

Before we dig into how IP multicast is utilized in VXLAN, the next post will take a look at some IP multicast basics.

Here are the links to Part 2, Part 3, Part 4, Part 5

Get notified of these blog postings and more VMware networking information by following me on Twitter: @VMWNetworking.

About the Author

Vyenkatesh (Venky) Deshpande is a Sr. Technical Marketing Manager at VMware, focused on the networking aspects of the vSphere platform and the vCloud Networking and Security product. Follow Venky on Twitter: @VMWNetworking.