Across industries, the race to digital transformation is on. It’s all about business innovation and redefinition. The transformations are huge: Tesla isn’t just a car manufacturer; it’s a software business that makes cars. Citi is a software business that makes loans. GE is a software business that makes industrial equipment.
Register for this VMworld 2016 session to learn about the future of VMware NSX.
Like most of the customers we talk with, your business is also going through a transformation. Lots of change. Lots of disruption. Lots of innovation. More apps, representing more services and new business models. More lines of business empowered to make decisions about the IT they’ll use to take their innovations to market. And there’s no doubt that a huge enabler of all of this has been the cloud.
Consider what some of the leading industry pundits are predicting:
- By 2019, the majority of virtual machines (VMs) will be delivered by IaaS providers.
- By 2019, more than 30% of the 100 largest vendors’ new software investments will have shifted from cloud-first to cloud-only.
- By 2020, a corporate “no-cloud” policy will be as rare as a “no-internet” policy is today.
- By 2020, 50% of applications running in public cloud environments will be considered mission-critical by the organizations using them (Gartner)
Through all of this, networking is undergoing fundamental change. It’s evolving to support both traditional and 3rd Platform architectures. It’s expanding and becoming more agile and flexible to support tomorrow’s application infrastructures spanning different hypervisors and containers, and living partly on-premises and partly across multiple public clouds.
At the heart of all of this change is VMware NSX. When you consider that just three years ago, VMware NSX did not even exist as a product, it is amazing to see the sheer number of production customers across every market segment and region of the world.
At VMworld 2016, in Session NET9989-S, join VMware Chief Technology Strategy Officer Guido Appenzeller for a preview into what lies ahead for VMware NSX and network virtualization.
Created by Humair Ahmed on Jul 22, 2016 1:36 PM. Last modified by Humair Ahmed on Jul 25, 2016 2:19 PM.
This design guide is in initial draft status; feedback is welcome for the next version release.
Please send feedback to firstname.lastname@example.org.
The goal of this design guide is to outline several NSX solutions available for multi-site data center connectivity before digging deeper into the details of the Cross-VC NSX multi-site solution. Learn how Cross-VC NSX enables logical networking and security across multiple vCenter domains/sites and how it provides enhanced solutions for specific use cases. Logical networking and security are no longer constrained to a single vCenter domain. Cross-VC NSX use cases, architecture, functionality, deployment models, design, and failure/recovery scenarios are discussed in detail.
This document is targeted toward virtualization and network architects interested in deploying the VMware® NSX network virtualization solution in a vSphere environment.
The design guide addresses the following topics:
- Why Multi-site?
- Traditional Multi-site Challenges
- Why VMware NSX for Multi-site Data Center Solutions
- NSX Multi-site Solution
- Use Cases
- Architecture and Functionality
- Deployment Models
- Design Guidance
- Failure/Recovery scenarios
Cross VC NSX Overview
VMware NSX provides network virtualization technology that decouples networking services from the underlying physical infrastructure. By replicating traditional networking hardware constructs and moving the network intelligence to software, logical networks can be created efficiently over any basic IP network transport. The software-based approach to networking provides the same benefits to the network that server virtualization provided for compute.
Before NSX 6.2, although NSX provided the flexibility, agility, efficiency, and other benefits of network virtualization, logical networking and security were constrained to the boundaries of one vCenter domain.
Although it was possible to use NSX with one vCenter domain and stretch logical networking and security across sites, the benefits of network virtualization with NSX were still limited to one vCenter domain. Figure 17 below shows multiple vCenter domains, which also happen to be at different sites, each requiring separate NSX Controllers and having isolated logical networking and security.
Thanks to all the contributors and reviewers of this document.
This will also soon be posted on our NSX Technical Resources website (link below):
Feedback and Comments to the Authors and the NSX Solution Team are highly appreciated.
– The VMware NSX Solution Team
Download Multi-site Options and Cross-VC NSX Design Guide.pdf (15.5 MB).
This is the third of a series of 5 demos that show how the NSX Security Model works through several use cases. Don’t just believe what you see; try it yourself for free with VMware Hands-On-Labs (see below):
In this video, we discuss how Neutron, the networking project of OpenStack, and VMware NSX interact. We cover the basic Neutron workflows as they relate to the application, as well as the corresponding NSX element that is leveraged each time. With this, we aim to describe how NSX ultimately brings stability to Neutron.
This is the second of a series of 5 demos that show how the NSX Security Model works through several use cases. Don’t just believe what you see; try it yourself for free with VMware Hands-On-Labs (see below):
This is the first of a series of 5 demos that show how the NSX Security Model works through several use cases. Don’t just believe what you see; try it yourself for free with VMware Hands-On-Labs (see below):
VMware NSX for vSphere, release 6.0.x.
This document covers how to create security policy rules in VMware NSX, including the different options for configuring security rules either through the Distributed Firewall or via the Service Composer user interface. It covers all the unique options NSX offers to create dynamic policies based on infrastructure context.
Thanks to Francis Guillier, Kausum Kumar and Srini Nimmagadda for helping author this document.
VMware NSX Distributed Firewall (DFW) provides the capability to enforce firewalling functionality directly at the virtual machine (VM) vNIC layer. It is a core component of the micro-segmentation security model, where east-west traffic can now be inspected at near line-rate processing, preventing lateral-movement attacks.
This technical brief gives details about DFW policy rule configuration with NSX. Both DFW security policy objects and the DFW consumption model will be discussed in this document.
We assume the reader already has some knowledge of DFW and Service Composer functions. Please refer to the appropriate collateral if you need more information on these NSX components.
Distributed Firewall Object Grouping Model
NSX provides the capability to micro-segment your SDDC to establish an effective security posture. To implement micro-segmentation in your SDDC, NSX provides various ways of grouping VMs and applying security policies to them. This document describes in detail the different ways grouping can be done and when you should use one over the other.
Security policy rules can be written in various ways as shown below:
Network Based Policies:
- This is the traditional approach of grouping based on L2 or L3 elements. Grouping can be based on MAC addresses, IP addresses, or a combination of both. NSX supports this approach of grouping objects. The security team needs to be aware of the networking infrastructure to deploy network-based policies. There is a high probability of security rule sprawl, as grouping based on dynamic attributes is not used. This method of grouping works well if you are migrating existing rules from a different vendor’s firewall.
When not to use this: In dynamic environments, such as self-service IT or cloud-automated deployments, where VMs and application topologies are added and deleted at a rapid rate, a MAC address-based grouping approach may not be suitable because there will be a delay between provisioning a VM and adding its MAC address to the group. If you have an environment with high mobility (e.g., vMotion and HA), L3/IP-based grouping approaches may not be adequate either.
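For illustration, network-based grouping by IP address can be expressed as an NSX IPSet. The sketch below only builds the XML payload and does not call the API; the element names and the endpoint path in the comment follow the NSX-v 6.x API as we understand it, and the addresses shown are hypothetical, so verify against your version’s API guide before use.

```python
import xml.etree.ElementTree as ET

def build_ipset_payload(name, addresses):
    """Build the <ipset> XML body for IP-based (network) grouping."""
    ipset = ET.Element("ipset")
    ET.SubElement(ipset, "name").text = name
    # NSX accepts a comma-separated list of IPs, ranges, or CIDRs
    ET.SubElement(ipset, "value").text = ",".join(addresses)
    ET.SubElement(ipset, "inheritanceAllowed").text = "true"
    return ET.tostring(ipset, encoding="unicode")

# Hypothetical example: pin down the web tier's addresses for a legacy-style rule
payload = build_ipset_payload("Web-Tier-IPs", ["10.10.10.0/24", "10.10.11.5"])
print(payload)
# POST this body to /api/2.0/services/ipset/globalroot-0 on the NSX Manager
```

Note that a rule built on such a static list inherits exactly the sprawl problem described above: every new web VM means editing the IPSet.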
Infrastructure Based Policies:
- In this approach, grouping is based on SDDC infrastructure such as vCenter clusters, logical switches, distributed port groups, etc. For example, clusters 1 through 4 are earmarked for PCI-type applications; in such a case, grouping can be done based on cluster names and rules enforced on those groups. Another example: you know which logical switches in your environment are connected to which applications, e.g., the App Tier logical switch contains all VMs pertaining to application ‘X’. The security team needs to work closely with the vCenter administration team to understand logical and physical boundaries.
When not to use this: If there are no physical or logical boundaries in your SDDC environment, this type of approach is not suitable. It also forces you to be very careful about where you deploy your applications. For example, if you would like to deploy a PCI workload to any cluster that has adequate compute resources available, the security posture cannot be tied to a cluster but should move with the application.
Application Based Policies:
- In this approach, grouping is based on the application type (e.g., VMs tagged as “Web_Servers”), application environment (e.g., all resources tagged as “Production_Zone”), and application security posture. The advantage of this approach is that the security posture of the application is not tied to either network constructs or SDDC infrastructure. Security policies can move with the application irrespective of network or infrastructure boundaries. Policies can be templated and reused across instances of the same types of applications and workloads. You can use a variety of mechanisms to group. The security team only needs to be aware of the application it is trying to secure. The security policies follow the application lifecycle: they come alive when the application is deployed and are destroyed when the application is decommissioned.
When not to use this: If the environment is fairly static, without mobility, and infrastructure functions are properly demarcated, you do not need application-based policies.
An application-based policy approach will greatly aid the move toward a self-service IT model. The security team only needs to know how to secure an application, without knowing the underlying topology. Concise and reusable security rules require application awareness; thus a proper security posture can be developed via application-based policies.
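The tag-based grouping described above starts with a security tag object. As a sketch, the payload below creates such a tag; the tag name “Web_Servers” is the hypothetical example from the text, the element shape follows the NSX-v 6.x API as we understand it, and the endpoint paths in the comments are assumptions to verify against your API guide.

```python
import xml.etree.ElementTree as ET

def build_security_tag(name, description=""):
    """Build the <securityTag> XML body for application-based grouping."""
    tag = ET.Element("securityTag")
    ET.SubElement(tag, "objectTypeName").text = "SecurityTag"
    type_el = ET.SubElement(tag, "type")
    ET.SubElement(type_el, "typeName").text = "SecurityTag"
    ET.SubElement(tag, "name").text = name
    ET.SubElement(tag, "description").text = description
    return ET.tostring(tag, encoding="unicode")

payload = build_security_tag("Web_Servers", "All web-tier VMs for application X")
print(payload)
# POST to /api/2.0/services/securitytags/tag, then attach the returned tag to a
# VM with PUT /api/2.0/services/securitytags/tag/{tagId}/vm/{vmId}
```

Because membership follows the tag rather than an address, the policy travels with the VM through vMotion and redeployment, which is exactly the lifecycle behavior described above.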
A Security Group is a container construct that allows grouping of vCenter objects into a common entity.
When defining a Security Group, multiple inclusions and exclusions can be used, as shown in the diagram below:
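The inclusion/exclusion model can be sketched as a Security Group payload that dynamically includes VMs carrying a given security tag while statically excluding one VM. The element names follow the NSX-v 6.x bulk securitygroup API as we understand it, and the tag name and VM ID are hypothetical; verify the fields and endpoint against your version’s API guide.

```python
import xml.etree.ElementTree as ET

def build_security_group(name, include_tag, exclude_vm_id):
    """Security Group with a dynamic inclusion and a static exclusion."""
    sg = ET.Element("securitygroup")
    ET.SubElement(sg, "objectTypeName").text = "SecurityGroup"
    ET.SubElement(sg, "name").text = name
    # Dynamic inclusion: any VM carrying the given security tag
    dyn = ET.SubElement(sg, "dynamicMemberDefinition")
    ds = ET.SubElement(dyn, "dynamicSet")
    ET.SubElement(ds, "operator").text = "OR"
    crit = ET.SubElement(ds, "dynamicCriteria")
    ET.SubElement(crit, "key").text = "VM.SECURITY_TAG"
    ET.SubElement(crit, "criteria").text = "contains"
    ET.SubElement(crit, "value").text = include_tag
    # Static exclusion: a specific VM kept out of the group regardless of tags
    excl = ET.SubElement(sg, "excludeMember")
    ET.SubElement(excl, "objectId").text = exclude_vm_id
    return ET.tostring(sg, encoding="unicode")

payload = build_security_group("SG-Web", "Web_Servers", "vm-1001")
print(payload)
# POST to /api/2.0/services/securitygroup/bulk/globalroot-0
```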
Download the full VMware NSX DFW Policy Rules Configuration Technical White Paper
VMware NSX for vSphere, release 6.0.x.
This document guides you through the step-by-step configuration and validation of NSX-v for microsegmentation services. Microsegmentation makes the data center network more secure by isolating each related group of virtual machines onto a distinct logical network segment, allowing the administrator to firewall traffic traveling from one segment of the data center to another (east-west traffic). This limits attackers’ ability to move laterally in the data center.
VMware NSX uniquely makes microsegmentation scalable, operationally feasible, and cost-effective. This security service provided to applications is now agnostic to virtual network topology. The security configurations we explain in this document can be used to secure traffic among VMs on different L2 broadcast domains or to secure traffic within a L2 broadcast domain.
Microsegmentation is powered by the Distributed Firewall (DFW) component of NSX. DFW operates at the ESXi hypervisor kernel layer and processes packets at near line-rate speed. Each VM has its own firewall rules and context. Workload mobility (vMotion) is fully supported with DFW, and active connections remain intact during the move.
This paper will guide you through two microsegmentation use cases and highlight steps to implement them in your own environment.
Use Case and Solution Scenarios
This document presents two solution scenarios that use east-west firewalling to handle the use case of securing network traffic inside the data center. The solution scenarios are:
- Scenario 1: Microsegmentation for a three-tier application using three different layer-2 logical segments (here implemented using NSX logical switches connected over VXLAN tunnels):
In Scenario 1, there are two VMs per tier, and each tier hosts a dedicated function (WEB / APP / DB services). Traffic protection is provided within each tier and between tiers. Logical switches are used to group VMs of the same function together.
- Scenario 2: Microsegmentation for a three-tier application using a single layer-2 logical segment:
In Scenario 2, all VMs are located on the same tier. Traffic protection is provided within the tier and per function (WEB / APP / DB services). Security Groups (SG) are used to logically group VMs of the same function together.
For both Scenario 1 and Scenario 2, the following security policies are enforced:
For Scenario 1, a logical switch object is used for the source and destination fields. For Scenario 2, a Service Composer / Security Group object is used for the source and destination fields. By using these vCenter-defined objects, we optimize the number of firewall rules needed, irrespective of the number of VMs per tier (or per function).
NOTE: TCP port 1433 simulates the SQL service.
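As a sketch of why the rule count stays constant, the fragment below builds one DFW rule whose source and destination reference Security Group objects instead of individual VMs, using TCP 1433 to stand in for the SQL service per the note above. The `<rule>` shape approximates the NSX-v 6.x DFW API and the `securitygroup-*` object IDs are hypothetical; verify both before use.

```python
import xml.etree.ElementTree as ET

def build_dfw_rule(name, src_sg, dst_sg, port, action="allow"):
    """One DFW rule keyed on Security Group objects, not VM addresses."""
    rule = ET.Element("rule")
    ET.SubElement(rule, "name").text = name
    ET.SubElement(rule, "action").text = action
    srcs = ET.SubElement(rule, "sources", excluded="false")
    src = ET.SubElement(srcs, "source")
    ET.SubElement(src, "value").text = src_sg       # e.g. securitygroup-11
    ET.SubElement(src, "type").text = "SecurityGroup"
    dsts = ET.SubElement(rule, "destinations", excluded="false")
    dst = ET.SubElement(dsts, "destination")
    ET.SubElement(dst, "value").text = dst_sg
    ET.SubElement(dst, "type").text = "SecurityGroup"
    svcs = ET.SubElement(rule, "services")
    svc = ET.SubElement(svcs, "service")
    ET.SubElement(svc, "destinationPort").text = str(port)
    ET.SubElement(svc, "protocolName").text = "TCP"
    return ET.tostring(rule, encoding="unicode")

# App tier may reach the DB tier on the simulated SQL port only
payload = build_dfw_rule("App-to-DB", "securitygroup-11", "securitygroup-12", 1433)
print(payload)
```

Adding a third or tenth VM to either tier changes group membership, not this rule.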
Two ESXi hosts in the same cluster are used. Each host has the following connectivity to the physical network:
- one VLAN for management, vMotion, and storage. Communication between the ESXi host and the NSX Controllers also travels over this VLAN.
- one VLAN for data traffic: VXLAN-tunneled, VM-to-VM data traffic uses this VLAN.
- web-01, app-01, and db-01 VMs are hosted on the first ESXi host.
- web-02, app-02, and db-02 VMs are hosted on the second ESXi host.
The purpose of this implementation is to demonstrate complete decoupling of the physical infrastructure from the logical functions such as logical network segments, logical distributed routing and DFW.
In other words, microsegmentation is a logical service offered to an application infrastructure irrespective of physical components; there is no dependency on where each VM is physically located.
The VMware NSX network virtualization platform is a critical pillar of VMware’s Software Defined Data Center (SDDC) architecture. NSX network virtualization delivers for networking what VMware has already delivered for compute and storage. In much the same way that server virtualization allows operators to programmatically create, snapshot, delete and restore software-based virtual machines (VMs) on demand, NSX enables virtual networks to be created, saved, deleted and restored on demand without requiring any reconfiguration of the physical network.
The result fundamentally transforms the data center network operational model, reduces network provisioning time from days or weeks to minutes and dramatically simplifies network operations.
Due to the critical role NSX plays within an organization, hardening the product, along with a secure topology, will reduce the risk an organization faces. This document is intended to provide configuration information and topology recommendations to ensure a more secure deployment.
This paper is a draft document covering some fundamentals of how to securely deploy network virtualization with NSX.
NSX Traffic [Control, Management, and Data]
The main components of NSX include the NSX Manager, NSX Edge/Gateway, NSX Controllers, and NSX vSwitch. Great care must be given toward the placement and connectivity of these components within an organization’s network. NSX functions can be grouped into three categories: management plane, control plane, and data plane.
The consumption of NSX can be driven directly via the NSX Manager UI, which in a vSphere environment is available via the vSphere web interface. Typically, end users tie network virtualization into their cloud management platform (CMP) for deploying applications. NSX provides a rich set of integrations with virtually any CMP via the REST API. Out-of-the-box integration is also available through VMware vCloud Automation Center.
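As a minimal sketch of driving that REST API, the fragment below prepares an authenticated request using only the Python standard library; it builds the request without sending it. The manager hostname and credentials are placeholders, and NSX-v uses HTTP basic authentication over HTTPS.

```python
import base64
import urllib.request

def build_nsx_request(manager, path, user, password):
    """Return a urllib Request for an NSX Manager API call (not yet sent)."""
    url = "https://{}{}".format(manager, path)
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/xml")
    return req

# Hypothetical manager address; send with urllib.request.urlopen(req)
req = build_nsx_request("nsxmgr.example.org", "/api/2.0/vdn/scopes",
                        "admin", "changeme")
print(req.full_url)
```

In production, credentials would come from a secrets store rather than literals, and certificate validation should be left enabled.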
The NSX management plane is built by the NSX Manager, which provides the single point of configuration and the REST API entry point for NSX in a vSphere environment. The NSX Manager is also the integration point with vCenter.
Network traffic to and from the NSX Manager should be restricted; it is recommended that the NSX Manager be placed on a management network where access is limited.
Access to the NSX manager utilizes a web redirect to only allow access via HTTPS.
Traffic from the NSX Manager to other components, such as vCenter and the ESXi hosts, is encrypted. These safeguards reduce some of the risk to the NSX Manager, but it is recommended that it be separated from other traffic via physical or VLAN separation, at a minimum. The VMware vSphere Hardening Guides (http://www.vmware.com/security/hardening-guides.html) can be used to further explore protection of the management network.
The NSX Controller is the heart of the control plane. In a vSphere-optimized environment where VMware’s virtual distributed switches (VDS) are deployed, the controllers enable multicast free network virtualization and control plane programming of elements that enable logical distributed routing and logical network traffic within and across hypervisors.
In all cases, the controller is purely a part of the control plane and does not have any data plane traffic passing through it. The controller nodes are also deployed in a cluster with an odd number of members to enable high availability and scale. Any failure of the controller nodes does not impact existing data plane traffic.
This communication does not carry any sensitive application data, but it is required for NSX to work properly. As of version 6.0.4 of NSX, controller-to-controller communication is unencrypted, as is hypervisor-to-controller communication. Hence, it is recommended that this traffic be separated from other traffic via physical or VLAN separation, at a minimum. No user machines should be on this network.
The NSX Data plane consists of the NSX vSwitch. The vSwitch in NSX for vSphere is based on the vSphere Distributed Switch (VDS) with additional components to enable rich services. The add-on NSX components include kernel modules (VIBs) which run within the hypervisor kernel providing services such as distributed routing, distributed firewall and enable VXLAN bridging capabilities.
The NSX vSwitch (VDS) abstracts the physical network and provides access-level switching in the hypervisor. It is central to network virtualization because it enables logical networks that are independent of physical constructs such as VLAN. Some of the benefits of the VDS are:
- Support for overlay networking leveraging VXLAN and centralized network configuration. Overlay networking enables the following capabilities:
o Creation of a flexible logical layer 2 (L2) overlay over existing IP networks on existing physical infrastructure, without the need to re-architect any of the data center networks
o Provisioning of communications (east–west and north–south) while maintaining isolation between tenants
o Application workloads and virtual machines that are agnostic of the overlay network and operate as if they were connected to a physical L2 network
- NSX vSwitch facilitates massive scale of hypervisors.
- Multiple features—such as Port Mirroring, NetFlow/IPFIX, Configuration Backup and Restore, Network Health Check, QoS, and LACP—provide a comprehensive toolkit for traffic management, monitoring and troubleshooting within a virtual network.
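To make the overlay capability concrete, the sketch below builds the payload used to carve a VXLAN-backed logical switch out of a transport zone. Element names follow the NSX-v 6.x virtualwires API as we recall, and the transport zone ID in the comment is a placeholder; verify against your version’s API guide.

```python
import xml.etree.ElementTree as ET

def build_logical_switch(name, description=""):
    """Build the <virtualWireCreateSpec> body for a new logical switch."""
    spec = ET.Element("virtualWireCreateSpec")
    ET.SubElement(spec, "name").text = name
    ET.SubElement(spec, "description").text = description
    # Replication mode for BUM traffic; UNICAST_MODE needs no physical multicast
    ET.SubElement(spec, "controlPlaneMode").text = "UNICAST_MODE"
    ET.SubElement(spec, "tenantId").text = "virtual wire tenant"
    return ET.tostring(spec, encoding="unicode")

payload = build_logical_switch("LS-Web", "Web tier segment")
print(payload)
# POST to /api/2.0/vdn/scopes/{scope-id}/virtualwires, e.g. scope vdnscope-1
```

No physical switch or VLAN configuration is touched; the segment exists entirely in the overlay, which is the decoupling the bullet list above describes.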
Additionally, the data plane also consists of gateway devices that provide L2 bridging from the logical networking space (VXLAN) to the physical network (VLAN).
The gateway device is typically an NSX Edge virtual appliance. NSX Edge offers L2, L3, perimeter firewall, load balancing, and other services such as SSL VPN, DHCP, etc.
Topology and the NSX Manager Virtual Machine
Because the NSX Manager virtual machine (VM) is part of the management plane, certain considerations must be taken into account when deciding where to install and connect the VM.
1. Placement: Best practices dictate that the NSX Manager should be placed in a segmented and secured network. Since the NSX manager and vCenter are in continuous communication, it is recommended they be placed on the same network. Typically, the NSX manager and vCenter are placed on a management network where access is limited to specific users and/or systems. The management network should not contain any user or general network traffic.
2. Physical and network security: The following table provides the ports used for communication with the NSX Manager. If you are securing the NSX Manager from other network services, make sure the appropriate ports are open.
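A quick TCP reachability check can confirm the required ports are open after firewalling the management network. The sketch below uses only the standard library; the manager hostname is a placeholder, and 443 (API/UI) is shown as one example port from the table rather than the full list.

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable host
        return False

# Hypothetical usage against your manager:
#   port_open("nsxmgr.example.org", 443)
print(port_open("127.0.0.1", 1, timeout=0.5))  # port 1 is normally closed
```

Run such checks from the subnets that should (and should not) reach the NSX Manager to validate the separation recommended above.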
Download the full Securing VMware® NSX Technical White Paper
VMware NSX Hardening Guide Authors: Pravin Goyal, Greg Christopher, Michael Haines, Roberto Mari, Kausum Kumar, Wade Holmes
This is the Version 1.6 of the VMware® NSX for vSphere Hardening Guide.
This guide provides prescriptive guidance for customers on how to deploy and operate VMware® NSX in a secure manner.
Acknowledgements to the following contributors for reviewing and providing feedback to various sections of the document: Kausum Kumar, Roberto Mari, Scott Lowe, Ben Lin, Bob Motanagh, Dmitri Kalintsev, Greg Frascadore, Hadar Freehling, Kiran Kumar Thota, Pierre Ernst, Rob Randell, Roie Ben Haim, Yves Fauser
The guide is provided in an easy-to-consume spreadsheet format, with rich metadata (similar to the existing VMware vSphere Hardening Guides) to allow for guideline classification and risk assessment.
Feedback and comments to the authors and the NSX Solution Team can be posted as comments to this community post (note: users must log in to VMware Communities before posting a comment).
Download the full NSX-v Security Hardening Guide