Sep 01

NSX Design Guide for Small Data Centers

Executive Summary

VMware NSX is a network virtualization technology that decouples networking services from the underlying physical infrastructure. VMware NSX enables a new software-based approach to networking that provides the same operational model as virtual machines (VMs). Virtual networks can easily be created, modified, backed up, and deleted within minutes.

By providing physical networking constructs in software, VMware NSX delivers benefits similar to those server virtualization brought with VMs. Businesses see the impact in increased efficiency, effective resource utilization, productivity, flexibility, agility, and cost savings.

Document Structure

This document presents an introduction to NSX, its business use cases, and an overview of its design in large and medium data centers. The beginning of the document serves as a refresher for those already familiar with NSX design and deployment.
The document then presents NSX for the small data center: its relevance and the main building blocks of designing NSX in small data centers.

The document discusses popular NSX deployment models in small data centers and gives details on protecting and designing around individual NSX components, such as the NSX Edge Services Gateway (ESG) and the DLR Control VM.
Toward the end, the document covers growth options for taking NSX further and expanding it into medium- and large-scale deployments.

Introduction

NSX has emerged as the leading software platform to virtualize networks and networking services. Many customers have deployed NSX to run their production and non-production workloads and gain the benefits that come with virtual networks and software-defined networking approaches. NSX has been deployed in data centers of all sizes, from small to medium to large, to enable a wide range of use cases.

There are situations where large enterprises have deployed NSX in small data center islands within an overall large environment. There are also situations where small and medium businesses (SMBs) are deploying NSX with a small number of hosts to take advantage of network virtualization. Regardless of the size of the enterprise, the small data center is a viable and relevant option for all types of customers, enterprises, and businesses.

The NSX Reference Design Guide discusses design aspects of deploying NSX in data centers of all sizes. This document uses the NSX Reference Design Guide as a baseline and provides additional and/or supportive guidance to successfully run NSX in SMB data centers. It is assumed that readers have gone through the concepts and design options discussed in the NSX Reference Design Guide.

In addition, readers are highly encouraged to review the Software-Defined Data Center (SDDC) VMware Validated Design (VVD) guide, which provides the most comprehensive and extensively tested blueprint to build and operate an SDDC.

NSX Customer Use Cases

NSX has been widely accepted and deployed in production by many customers. Figure 1 lists some of the most important use cases that customers are deploying NSX for.


Figure 1. – NSX Use Cases

Security

NSX can be used to create a secure infrastructure that enables a zero-trust security model. Every virtualized workload can be protected with a fully stateful firewall engine at a very granular level. Security can be based on constructs such as MAC addresses, IP addresses, ports, vCenter objects, security tags, Active Directory groups, and more. Intelligent dynamic security grouping can drive a self-adaptive security posture within the infrastructure.
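As an illustration of dynamic grouping, membership can be thought of as a query over VM metadata rather than a static list. This is a minimal sketch with invented names, not NSX's actual grouping engine:

```python
# A hypothetical inventory; names and tags are invented for illustration.
vms = [
    {"name": "web-01", "tags": {"prod", "web"}},
    {"name": "db-01", "tags": {"prod", "db"}},
    {"name": "test-01", "tags": {"test", "web"}},
]

def members(inventory, required_tag):
    """Membership is computed from metadata, so a firewall policy bound to
    the group follows the VM as tags change (e.g. adding a quarantine tag)."""
    return {vm["name"] for vm in inventory if required_tag in vm["tags"]}

print(members(vms, "web"))  # every VM currently tagged "web"
```

Because membership is re-evaluated from attributes, tagging a compromised VM immediately changes which rules apply to it, with no per-rule edits.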

Automation

VMware NSX provides a full RESTful API to consume networking, security, and services, which can be used to drive automation within the infrastructure. In small data centers, the REST API and tools such as PowerNSX can be used to programmatically configure network and security services, or to pull information from VMware NSX deployments for simple operations tasks.
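As a hedged illustration, the snippet below builds (but does not send) an authenticated GET against an NSX Manager. The host name and credentials are placeholders, and the `/api/2.0/vdn/scopes` path is the NSX-v endpoint for listing transport zones:

```python
import base64
import urllib.request

def build_nsx_request(manager, user, password, path):
    """Build (but do not send) an authenticated GET for the NSX Manager API.

    NSX-v uses HTTP Basic authentication and returns XML by default.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(f"https://{manager}{path}")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/xml")
    return req

# Hypothetical manager address and credentials, for illustration only.
req = build_nsx_request("nsxmgr.example.com", "admin", "secret",
                        "/api/2.0/vdn/scopes")  # lists transport zones
print(req.full_url)
```

Sending the request (with certificate validation appropriate to your environment) would return the transport zone inventory, the kind of read-only call useful for simple operations reporting.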

Application Continuity

NSX provides a way to easily extend networking and security across up to eight vCenter Server instances, either within or across data centers. NSX can extend or stretch L2 and L3 networks across data centers in a distributed fashion. NSX also ensures that security policies are consistent across those stretched networks, providing a seamless, distributed, and available network and security overlay. All of this is done using software-based technologies, without requiring expensive hardware.

NSX for vSphere Components

vSphere is the foundation for an NSX for vSphere (referred to as NSX throughout this document) deployment. It is important to have a good understanding of which vSphere and NSX components are involved in the design. For a successful NSX deployment, it is imperative to have a good vSphere deployment in place with proper vSphere clustering, compute, network, and storage. For detailed discussions on these topics, refer to the NSX Reference Design Guide.

Figure 2 shows the layers of the NSX for vSphere architecture based on the role performed by each NSX component. At a very high level, the NSX solution architecture can be seen as divided into management, control, and data planes. In the traditional networking model, the control and data planes are combined.
NSX and other software-defined networking architectures follow an approach where the data plane is separated from the control plane. This approach decouples networking from hardware dependencies and allows all networking services to be virtualized, following the same operational model that compute and storage virtualization have provided for years.

Figure 2. – NSX for vSphere Architecture Layers

NSX in Small Data Center Use-Cases

One must understand that a small data center (DC) deployment is not relevant only for small customers. Many large enterprises deploy NSX with a small footprint, or a small number of ESXi resources, in the beginning and then expand to a larger footprint. This can be for a number of reasons, for example budget, staffing, or simply the small scale of the initial deployment. The advantage is that even if NSX is deployed with a small footprint, it can easily grow into a medium- or large-size deployment.
Broadly, small data center use cases can be divided by the business function and the applications being deployed.

Functional Level Use Cases

Organizations deploy NSX with a small footprint in specific functional areas or groups. For instance:

  • Disaster recovery and/or avoidance
  • Pre-Prod vs Test environments
  • Compliance / DMZ
  • Business units with their own operational model
  • Etc.

Application Level Use Cases

Many customers deploy NSX in small DCs to tackle one or more application-level use cases. For instance:

  • VDI
  • Load Balancer
  • Agentless Antivirus (AV)
  • Etc.

NSX Advantage for Small Data Centers

Organizations adopt NSX not just for the technical strengths and advantages gained by deploying networking services in software; they also benefit from its simplicity, ease of use, and operational flexibility. Some of these advantages are highlighted here.

Simplicity and Modularity

Smaller customers like NSX's simplicity and modularity, which give them the peace of mind to grow and add features as they increase capacity or their user base. They do not need to purchase all their networking hardware upfront with many unknowns down the road. NSX provides these customers software-based networking services that they can spin up anytime without incurring additional hardware cost.

Procurement

Customers also appreciate that all networking and security services are bundled within the same product and platform, so they do not need to deal with multiple vendors for purchasing, support agreements, or license procurement. With NSX, customers get everything under one roof.

Ease of Operations

The majority of customers are already familiar with the operational model that vSphere has provided them for years. NSX is seamlessly integrated into the same model; it enhances their operational model and sits neatly on top of it. Hence, the learning curve for adopting the new technology is minimal.

Download NSX Design Guide for Small Data Centers.



Jun 13

Announcing the What’s New in vSphere 6.7 Whitepaper

By Adam Eckerle

With the recent announcement and general availability of vSphere 6.7 we’ve seen an immense amount of interest. With each new version of vSphere we continue to see customers start their testing of new releases earlier and earlier in the release cycle. vSphere 6.7 brings a number of important new features that vSphere administrators, as well as architects and business leaders, are excited about.

vSphere 6.7 focuses on simplifying management at scale, securing both infrastructure and workloads, being the universal platform for applications, and providing a seamless hybrid cloud experience. Features such as Enhanced Linked Mode with embedded Platform Services Controllers bring simplicity back to the vCenter Server architecture. Support for TPM 2.0 and Virtualization Based Security provides organizations with a secure platform for both infrastructure and workloads. The addition of support for RDMA over Converged Ethernet v2 (RoCE v2), huge pages, suspend/resume for vGPU workloads, persistent memory, and native 4K disks shows that the hypervisor is not a commodity: vSphere 6.7 enables more functionality and better performance for more applications.

For those wanting a deep dive into the new features and functionality, I’m happy to announce the availability of the What’s New in vSphere 6.7 whitepaper. This paper is a consolidated resource that discusses and illustrates the key new features of vSphere 6.7 and their value to vSphere customers. The What’s New with vSphere 6.7 whitepaper can be found on the vSphere product page in the Resources section or can be downloaded directly here. After reading through this paper you should have a very good grasp on the key new features and how they will help your infrastructure and business.

Finally, we have a new collection of vSphere 6.7 resources on vSphere Central to make setting up and using these new features even easier. There are also some walkthroughs on upgrading. You can see all of the currently available resources on the vSphere 6.7 Technical Assets page.

Download What’s New in vSphere 6.7 Whitepaper.

About the Author

Adam Eckerle manages the vSphere Technical Marketing team in the Cloud Platform Business Unit at VMware. This team is responsible for vSphere launch, enablement, and ongoing content generation for the VMware field, Partners, and Customers. In addition, Adam’s team is also focused on preparing Customers and Partners for vSphere upgrades through workshops, VMUGs, and other events.



Apr 17

What’s new with vSphere 6.7 Core Storage

By Jason Massae

Announced today, vSphere 6.7 includes several new features and enhancements that further advance storage functionality. Centralized, shared storage remains the most common storage architecture used with VMware installations despite the incredible adoption rate of HCI and vSAN. As such, VMware remains committed to the continued development of core storage and Virtual Volumes, and the release of vSphere 6.7 truly shows this. The 6.7 version marks a major vSphere release, with many new capabilities to enhance the customer experience. From space reclamation to supporting Microsoft WSFC on VVols, this release is definitely feature rich! Below are summaries of what is included in vSphere 6.7; you can find more detail on each feature on the VMware storage and availability technical document repository, StorageHub.

Configurable Automatic UNMAP

Automatic UNMAP was released with vSphere 6.5 with a selectable priority of none or low. Storage vendors and customers have requested higher, configurable rates rather than the fixed 25 MBps. In vSphere 6.7 we’ve added a new method, “fixed,” which allows you to configure an automatic UNMAP rate between 100 MBps and 2000 MBps, configurable in both the UI and the CLI.
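As a sketch, the bounds of the new fixed method can be expressed as a simple validation step. The function name and structure here are illustrative, not part of any VMware API:

```python
UNMAP_FIXED_MIN_MBPS = 100    # lower bound of the "fixed" reclaim method
UNMAP_FIXED_MAX_MBPS = 2000   # upper bound

def validate_fixed_unmap_rate(rate_mbps):
    """Check a requested fixed automatic-UNMAP rate against the 6.7 bounds."""
    if not UNMAP_FIXED_MIN_MBPS <= rate_mbps <= UNMAP_FIXED_MAX_MBPS:
        raise ValueError(
            f"fixed UNMAP rate must be {UNMAP_FIXED_MIN_MBPS}-"
            f"{UNMAP_FIXED_MAX_MBPS} MBps, got {rate_mbps}")
    return rate_mbps

print(validate_fixed_unmap_rate(300))  # e.g. a vendor-recommended rate
```

In practice the rate is set per datastore through the UI or the CLI; consult your array vendor for the rate they recommend.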


UNMAP for SESparse

SESparse is a sparse virtual disk format used for snapshots in vSphere and is the default for VMFS-6. In this release, we are providing automatic space reclamation for VMs with SESparse snapshots on VMFS-6. This only works when the VM is powered on and only affects the top-most snapshot.

Support for 4K native HDD

Customers may now deploy ESXi on servers with 4Kn HDDs used for local storage (SSD and NVMe drives are currently not supported). We are providing a software read-modify-write layer within the storage stack allowing the emulation of 512B sector drives. ESXi continues to expose 512B sector VMDKs to the guest OS. Servers having UEFI BIOS can boot from 4Kn drives.
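A minimal sketch of the read-modify-write idea described above, using an in-memory bytearray as a stand-in for a real 4Kn drive: a 512B logical write must read the containing 4096B physical sector, patch the 512B slice, and write the whole sector back.

```python
PHYS = 4096   # 4Kn physical sector size
LOG = 512     # 512B sector size that ESXi continues to expose to guests

def write_logical_sector(disk, lba, data):
    """Emulate a 512B write on a 4Kn disk via read-modify-write.

    `disk` is a bytearray of whole 4K physical sectors; `lba` is the
    512B logical block address seen by the guest OS.
    """
    assert len(data) == LOG
    phys_idx, offset = divmod(lba * LOG, PHYS)
    start = phys_idx * PHYS
    sector = bytearray(disk[start:start + PHYS])  # read the full 4K sector
    sector[offset:offset + LOG] = data            # modify the 512B slice
    disk[start:start + PHYS] = sector             # write the 4K sector back

disk = bytearray(2 * PHYS)                     # a tiny two-sector "drive"
write_logical_sector(disk, 3, b"\xab" * LOG)   # LBA 3 lands inside physical sector 0
```

The extra read per sub-sector write is why such emulation layers carry some overhead compared to naturally aligned 4K I/O.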

XCOPY enhancement

XCOPY is used to offload storage-intensive operations such as copying, cloning, and zeroing to the storage array instead of the ESXi host. With the release of vSphere 6.7, XCOPY will now work with specific vendor VAAI primitives and any vendor supporting the SCSI T10 standard. Additionally, XCOPY segments and transfer sizes are now configurable.


VVols enhancements

As VMware continues the development of Virtual Volumes, this release adds support for IPv6 and SCSI-3 persistent reservations. End-to-end IPv6 support enables organizations, including government, to implement VVols using IPv6. SCSI-3 persistent reservations, a substantial feature, allow disks/volumes to be shared between virtual machines across nodes/hosts. Often used for Microsoft WSFC clusters, this enhancement allows for the removal of RDMs!

Increased maximum number of LUNs/Paths (1K/4K LUN/Path)

The maximum number of LUNs per host is now 1024 instead of 512, and the maximum number of paths per host is 4096 instead of 2048. Customers may now deploy virtual machines with up to 256 disks using PVSCSI adapters. Each PVSCSI adapter can support up to 64 devices. Devices can be virtual disks or RDMs. A major change in 6.7 is the increased number of LUNs supported for Microsoft WSFC clusters: the number increased from 15 disks to 64 disks per adapter, for PVSCSI only. This raises the number of LUNs available to a VM running Microsoft WSFC from 45 to 192.
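The arithmetic implied by these limits can be checked as follows. The split of one adapter for non-shared disks and three adapters for shared WSFC LUNs is an assumption made here to reproduce the 45 and 192 figures, not something stated explicitly above:

```python
ADAPTERS_PER_VM = 4       # PVSCSI adapters per VM
DEVICES_PER_ADAPTER = 64  # per-adapter device limit in vSphere 6.7

# Maximum disks per VM across all PVSCSI adapters:
max_disks = ADAPTERS_PER_VM * DEVICES_PER_ADAPTER
print(max_disks)  # 256

# For WSFC, assume one adapter is reserved for non-shared (e.g. boot)
# disks, leaving three adapters for shared LUNs:
wsfc_old = 3 * 15   # 45 LUNs before 6.7 (15 devices per adapter)
wsfc_new = 3 * 64   # 192 LUNs with 6.7 (64 devices per adapter)
```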

VMFS-3 EOL

Starting with vSphere 6.7, VMFS-3 will no longer be supported. Any volume/datastore still using VMFS-3 will automatically be upgraded to VMFS-5 during the installation or upgrade to vSphere 6.7. Any new volume/datastore created going forward will use VMFS-6 as the default.


Support for PMEM /NVDIMMs

Persistent memory, or PMem, is a type of non-volatile DRAM (NVDIMM) that has the speed of DRAM but retains its contents through power cycles. It is a new layer that sits between NAND flash and DRAM, providing faster performance than flash while, unlike DRAM, being non-volatile.


Intel VMD (Volume Management Device)

With vSphere 6.7, there is now native support for Intel VMD technology to enable the management of NVMe drives. This technology was introduced as an installable option in vSphere 6.5. Intel VMD currently enables hot-swap management as well as NVMe drive LED control, allowing control similar to that used for SAS and SATA drives.


RDMA (Remote Direct Memory Access) over Converged Ethernet (RoCE)

This release introduces RDMA support using RoCE v2 for ESXi hosts. RDMA provides low-latency, higher-throughput interconnects with CPU offloads between the endpoints. If a host has RoCE-capable network adapter(s), this feature is automatically enabled.


Para-virtualized RDMA (PV-RDMA)

In this release, ESXi introduces PV-RDMA for Linux guest operating systems with RoCE v2 support. PV-RDMA enables customers to run RDMA-capable applications in virtualized environments. PV-RDMA-enabled VMs can also be live migrated.

iSER (iSCSI Extension for RDMA)

Customers may now deploy ESXi with external storage systems supporting iSER targets. iSER takes advantage of faster interconnects and CPU offload using RDMA over Converged Ethernet (RoCE). We are providing the iSER initiator function, which allows the ESXi storage stack to connect with iSER-capable target storage systems.

SW-FCoE (Software Fibre Channel over Ethernet)

In this release, ESXi introduces a software-based FCoE (SW-FCoE) initiator that can create FCoE connections over Ethernet controllers. The VMware FCoE initiator works on a lossless Ethernet fabric using Priority-based Flow Control (PFC). It can work in Fabric and VN2VN modes. Please check the VMware Compatibility Guide (VCG) for supported NICs.


It is plain to see why vSphere 6.7 is such a major release, with so many new storage-related improvements and features. These are just highlights; more detail may be found by heading over to StorageHub and reviewing the vSphere 6.7 Core Storage section.

Download vSphere 6.7 Core Storage.

About the Author

Jason is the Core Storage Technical Marketing Architect for the Storage and Availability Business Unit at VMware. Before joining VMware, he came from one of the largest flash and memory manufacturers in the world, where he architected and led global teams in virtualization strategies for IT. Working with the storage business unit, he also helped test and validate SSDs for VMware and vSAN. Now his primary focus is core storage for vSphere and vSAN.



May 15

vSphere 6.5 Upgrade Considerations Part-1

Emad Younis posted May 15, 2017.
The release of vSphere 6.5 in November 2016 introduced many new features and enhancements. These include the vCenter Server Appliance (VCSA) becoming the default deployment; vCenter Server native high availability, which protects vCenter Server from application failure; and built-in file-based backup and restore, which gives customers the ability to back up their vCenter Server from the VAMI or by API. A VCSA restore can be done simply by mounting the original ISO used to deploy the VCSA and selecting the restore option. These features and more are exclusive to the vCenter Server Appliance. The new HTML5 vSphere Client also makes its first official product debut with vSphere 6.5.

Did someone say security? We now have better visibility into vSphere changes with actionable logging. VM Encryption allows the encryption of a virtual machine, including its disks and snapshots. Secure Boot for ESXi ensures that only digitally signed code runs on the hypervisor. Secure Boot for VMs is as simple as checking a box. And we’ve only begun to scratch the surface of all the new vSphere 6.5 features.

Figure – vCenter Server 6.5 Features

Product Education

As you start preparing for your vSphere 6.5 upgrade, a checklist will be the run book used to ensure its success. The upgrade process can be divided into three phases:

Phase 1: Pre-upgrade – all the upfront work that should be done before starting an upgrade.
Phase 2: Upgrade – mapping the steps of each component that will be upgraded.
Phase 3: Post-upgrade – validation to ensure everything went according to plan.

The first part of any successful upgrade is determining the benefits of the new features and the value add they will provide to your business. Next is getting familiar with these new features and how they will be implemented in your environment. The following list will get you started learning each of the new vSphere 6.5 features and their benefits.

Another way to get familiar with the new features and upgrade process is a hands-on approach in a lab environment. If you have a lab environment at your disposal, try building it as close to your production environment as possible to simulate both the upgrade process and new feature implementation. If a lab environment is not available, there are options like VMware Workstation or Fusion if you have the resources to run them. Last but not least, there are also the Hands-on Labs, which do not require any resources and provide a guided approach. No matter which option you select, the key is getting familiar and comfortable with the upgrade process.

Health Assessment

Doing a health assessment of your current environment is critical. Nothing is worse than being in the middle of an upgrade and having to spend hours troubleshooting an issue, only to find out it was related to a misconfiguration of something as simple as DNS or NTP. Another advantage of doing a health assessment is discovering wasted resources, for example virtual machines that are no longer needed but have yet to be decommissioned. The health assessment should cover all components (compute, storage, network, and 3rd party) that interact with your vSphere environment. Please consult your compute, storage, and network vendors for health assessment best practices and tools. Environmental issues are high on the list of upgrade show stoppers; the good news is that they can be prevented.

There are also VMware and community tools that can help by providing reports on your current environment. Most of these tools come with a 60-day evaluation period, which is enough time to get the information needed. When using community tools please keep in mind they are not officially supported by VMware. Finally, there is also the VMware vSphere health check done by a certified member of VMware’s professional services team. Check with your VMware representative for more information.

Conducting the health assessment could uncover an issue that requires the help of support and opening a ticket. Do not proceed with the upgrade until all open support tickets have been resolved. There are instances where an issue can be fixed by applying a patch or an update, but make sure any environmental problems have been completely resolved before proceeding. This includes not only VMware support tickets, but also those for compute, storage, network, and 3rd-party products that interact with your vSphere environment.

Important Documents

Now that we’ve learned about the features and completed a health assessment of our current vSphere environment, it’s time to start mapping out the upgrade process. The first step is looking at important documents like the vSphere 6.5 documentation, product release notes, knowledge base articles, and guides. Each of these documents has pieces of information vital to ensuring a successful upgrade. Product release notes, for example, provide not only what’s new but also information about upgrades, known issues, and other key details. Reading the vSphere 6.5 Upgrade Guide will give you an understanding of the upgrade process. The VMware Compatibility Guide and Product Interoperability Matrices will ensure components and upgrade paths are supported. Here is a breakdown of the important vSphere 6.5 documentation that should be reviewed prior to upgrading.

Figure – vSphere 6.5 Documents:

  • Product Release Notes
  • Knowledge Base Articles
  • Guides
  • Documentation

Upgrades need to be done with a holistic view, from the hardware layer all the way to the application layer. With this philosophy in mind, a successful upgrade requires advance prep work to avoid potential roadblocks. Health assessments shouldn’t only be done when preparing for an upgrade, but also routinely; think of them as a doctor’s visit for your environment and getting a clean bill of health. vSphere 6.5 has now been out for six months, and four patches are available providing bug fixes and product updates. The HTML5 vSphere Client gained features in vSphere 6.5.0 patch b, and vSAN Easy Install is available in 6.5.0 patch d. This agile release of patches means customers no longer need to wait for the first update to consider upgrading to vSphere 6.5. The next few blog posts in this series will cover mapping out the upgrade process whiteboard-style, architecture considerations for the vSphere Single Sign-On domain, migration, and upgrade paths.

At this point it is worth noting that the vSphere upgrade process can seem complex if not overwhelming, especially for our customers who use other tools that depend on vSphere and vCenter Server. We hear you. VMware is certainly working to make this better. I hope to be able to write about those improvements in the future. Until then you have upgrade homework to do!

About the Author

Emad Younis is a Staff Technical Marketing Architect and VCIX 6.5-DCV working in the Cloud Platform Business Unit, part of the R&D organization at VMware. He currently focuses on the vCenter Server Appliance, vCenter Server Migrations, and VMware Cloud on AWS. His responsibilities include generating content, evangelism, collecting product feedback, and presenting at events. Emad can be found blogging on emadyounis.com or on Twitter via @emad_younis.



Mar 11

VMware NSX Micro-segmentation Day 1 Book Available

VMware NSX Micro-segmentation

VMware NSX Micro-segmentation Day 1 is a concise book that provides the necessary information to guide organizations interested in bolstering their security posture through the implementation of micro-segmentation. VMware NSX Micro-segmentation Day 1 highlights the importance of micro-segmentation in enabling better data center cyber hygiene. It also provides the knowledge and guidance needed to effectively design and implement a data center security strategy around micro-segmentation.

VMware NSX Micro-segmentation covers the following topics:

  • Micro-segmentation Definition
  • Micro-segmentation and Cybersecurity standards
  • NSX components enabling micro-segmentation
  • Design considerations for micro-segmentation
  • Creating a grouping framework for micro-segmentation
  • Policy creation tools for micro-segmentation

Download

So be sure to download a copy today and learn more about micro-segmentation and how to make it a foundational part of your security strategy. If you are attending RSA 2017, promotional copies will be handed out at the VMware booth, so be sure to stop by!

Download a full VMware NSX Micro-segmentation Day 1 Book.



Mar 11

VMware vSphere Encrypted vMotion Architecture, Performance and Best Practices

Executive Summary

With the rise in popularity of hybrid cloud computing, where sensitive VM data leaves the traditional IT environment and traverses public networks, IT administrators and architects need a simple and secure way to protect critical VM data as it moves across clouds and over long distances.

The Encrypted vMotion feature available in VMware vSphere® 6.5 addresses this challenge by introducing a software approach that provides end-to-end encryption for vMotion network traffic. The feature encrypts all the vMotion data inside the vmkernel by using the most widely used AES-GCM encryption standards, and thereby provides data confidentiality, integrity, and authenticity even if vMotion traffic traverses untrusted network links.

Experiments conducted in the VMware performance labs using industry-standard workloads show the following:

  • vSphere 6.5 Encrypted vMotion performs nearly the same as regular, unencrypted vMotion.
  • The CPU cost of encrypting vMotion traffic is very moderate, thanks to the performance optimizations added to the vSphere 6.5 vMotion code path.
  • vSphere 6.5 Encrypted vMotion provides the proven reliability and performance guarantees of regular, unencrypted vMotion, even over long distances.

Introduction

VMware vSphere® vMotion® [1] provides the ability to migrate a running virtual machine from one vSphere host to another, with no perceivable impact to the virtual machine’s performance. vMotion brings enormous benefits to administrators—it reduces server downtime and facilitates automatic load-balancing.

During migration, the entire memory and disk state associated with a VM, along with its metadata, are transferred over the vMotion network. It is possible during VM migration for an attacker with sufficient network privileges to compromise a VM by modifying its memory contents during the transit to subvert the VM’s applications or its guest operating system. Due to this possible security risk, VMware highly recommended administrators use an isolated or secured network for vMotion traffic, separate from other datacenter networks such as the management network or provisioning network. This protected the VM’s sensitive data as it traversed over a secure network.

Even though this recommended approach adds slightly higher network and administrative complexity, it works well in a traditional IT environment where the customer owns the complete network infrastructure and can secure it. In a hybrid cloud, however, workloads move dynamically between clouds and datacenters over secured and unsecured network links. Therefore, it is essential to secure sensitive vMotion traffic at the network endpoints. This protects critical VM data even as the vMotion traffic leaves the traditional IT environment and traverses over the public networks.

vSphere 6.5 introduces Encrypted vMotion, which provides end-to-end encryption of vMotion traffic and protects VM data from eavesdropping on untrusted network links. Encrypted vMotion provides complete confidentiality, integrity, and authenticity of the data transferred over a vMotion network without requiring dedicated networks or additional hardware.

The sections that follow describe:

  • vSphere 6.5 Encrypted vMotion technology and architecture
  • How to configure Encrypted vMotion from the vSphere Client
  • Performance implications of encrypting vMotion traffic using real-life workload scenarios
  • Best practices for deployment

Encrypted vMotion Architecture

vMotion uses TCP as the transport protocol for migrating the VM data. To secure VM migration, vSphere 6.5 encrypts all the vMotion traffic, including the TCP payload and vMotion metadata, using the most widely used AES-GCM encryption standard algorithms, provided by the FIPS-certified vmkernel vmkcrypto module.

Figure – Encrypted vMotion Workflow

Encryption Protocol

Encrypted vMotion does not rely on the Secure Sockets Layer (SSL) or Internet Protocol Security (IPsec) technologies for securing vMotion traffic. Instead, it implements a custom encrypted protocol above the TCP layer. This is done primarily for performance, but also for reasons explained below.
SSL is compute intensive and implemented entirely in user space, while vMotion, part of core ESXi, executes in kernel space. This means that if vMotion were to rely on SSL, each encryption/decryption call would need to cross between kernel and user space, resulting in excessive performance overhead. Using the encryption algorithms provided by the vmkernel vmkcrypto module enables vMotion to avoid this penalty.

Although IPsec can be used to secure vMotion traffic, its usability is limited in the vSphere environment because ESXi hosts support IPsec only for IPv6 traffic, not for IPv4. Besides, implementing a custom protocol above the TCP layer gives vMotion the ability to create the appropriate number of vMotion worker threads and coordinate efficiently among them to spread the encryption/decryption CPU load across multiple cores.
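The worker-thread structure can be sketched as follows. This is a toy illustration only: the SHA-256-based XOR keystream below is a stand-in for the AES-GCM processing done by the vmkcrypto module, and real vMotion workers run inside the vmkernel, not in Python. The point is simply how a payload split into chunks can be ciphered independently by a pool of workers:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK = 64 * 1024  # stand-in for a vMotion transmit buffer

def keystream(key, counter, length):
    """Derive a per-chunk keystream from (key, chunk counter) via SHA-256."""
    out = bytearray()
    block = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")
                              + block.to_bytes(8, "big")).digest()
        block += 1
    return bytes(out[:length])

def xor_chunk(key, counter, chunk):
    ks = keystream(key, counter, len(chunk))
    return bytes(a ^ b for a, b in zip(chunk, ks))

def encrypt_stream(key, data, workers=4):
    """Split the payload into chunks and cipher them on a worker pool.

    Each chunk carries its own counter, so workers need no shared state.
    """
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(lambda ic: xor_chunk(key, ic[0], ic[1]),
                                 enumerate(chunks)))

key = b"per-migration key (illustrative)"
payload = bytes(range(256)) * 1024            # 256 KiB of fake VM memory pages
ciphertext = encrypt_stream(key, payload)
assert encrypt_stream(key, ciphertext) == payload  # XOR keystream is involutive
```

Because each chunk is independent, the kernel-space implementation can hand chunks to as many worker threads as the host's cores justify, which is the load-spreading the paragraph above describes.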

Download VMware vSphere Encrypted vMotion Architecture, Performance and Best Practices.

Rating: 5/5


Mar 11

DRS Performance in VMware vSphere 6.5

Introduction

VMware vSphere® Distributed Resource Scheduler™ (DRS) has been evolving for more than a decade, with innovations in every new version. In vSphere 6.5, DRS adds many new features and performance improvements that deliver more efficient load balancing and VM placement, faster response times, and simplified cluster management.
In this paper, we cover some of the key features and performance improvements to highlight the more efficient, faster, and lighter DRS in vSphere 6.5.

New Features

Predictive DRS
Historically, vSphere DRS has been reactive: it reacts to changes in VM workloads and migrates VMs to distribute load across hosts. In vSphere 6.5, with VMware vCenter Server® working together with VMware vRealize® Operations™ (vROps), DRS can act on predicted future changes in workloads. This lets DRS migrate VMs proactively and make room in the cluster to accommodate future workload demand.
For example, if your VMs' workload spikes at 9 a.m. every day, predictive DRS can detect this pattern beforehand, based on historical data from vROps, and prepare the cluster resources using either of the following techniques:

  • Migrating the VMs to different hosts to accommodate the future workload and avoid host over-commitment
  • Bringing a host back from standby mode using VMware vSphere® Distributed Power Management™ (DPM) to accommodate the future demand

How It Works

To enable predictive DRS, you need to link vCenter Server to a vROps instance (that supports predictive DRS), which monitors the resource usage pattern of VMs and generates predictions. Once vROps starts monitoring VM workloads, it generates predictions after a specified learning period. The generated predictions are then provided to vCenter Server for DRS to consume.

Once the VMs’ workload predictions are available, DRS evaluates the demand of a VM based on its current resource usage and predicted future resource usage.

    Demand of a VM = Max (current usage, predicted future usage)

Considering the maximum of current and future resource usage ensures that DRS does not clip any VM's current demand in favor of its future demand. For VMs that do not have predictions, DRS computes resource demand based only on the current resource usage.
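The demand formula above, including the fallback for VMs without predictions, can be written as a small Python function (the function name and `None` convention are illustrative, not DRS internals):

```python
from typing import Optional


def vm_demand(current_usage: float, predicted_usage: Optional[float]) -> float:
    """DRS demand estimate: the max of current and predicted future usage.
    VMs without predictions (predicted_usage is None) fall back to current
    usage alone."""
    if predicted_usage is None:
        return current_usage
    return max(current_usage, predicted_usage)


print(vm_demand(2.0, 3.5))   # predicted spike raises the demand estimate
print(vm_demand(2.0, 1.0))   # current demand is never clipped by a low prediction
print(vm_demand(2.0, None))  # no prediction: current usage only
```

Taking the maximum (rather than the prediction alone) is what guarantees the "never clip current demand" property: a low or wrong prediction can only ever raise the estimate, never lower it below what the VM is using right now.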

Look Ahead Interval

The predictions that DRS gets from vROps always cover a certain period of time starting from the current time. This period is known as the "look-ahead interval" for predictive DRS. By default the interval is 60 minutes, so predictions always cover the next hour. If a sudden spike is going to occur within that hour, predictive DRS detects it and prepares the cluster to handle it.
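A quick sketch of the look-ahead window, assuming predictions arrive as (timestamp, usage) pairs (a hypothetical format, not the actual vROps data model): only samples within the next 60 minutes are considered.

```python
from datetime import datetime, timedelta

LOOK_AHEAD = timedelta(minutes=60)  # default look-ahead interval


def predictions_in_window(predictions, now):
    """Keep only predicted samples that fall within the look-ahead interval
    starting at `now`. Samples beyond the window are ignored by DRS."""
    return [(ts, usage) for ts, usage in predictions if now <= ts <= now + LOOK_AHEAD]


now = datetime(2017, 1, 1, 8, 0)
preds = [
    (datetime(2017, 1, 1, 8, 30), 4.0),   # spike within the next hour: considered
    (datetime(2017, 1, 1, 10, 0), 9.0),   # beyond the interval: not yet considered
]
assert predictions_in_window(preds, now) == [(datetime(2017, 1, 1, 8, 30), 4.0)]
```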

Network-Aware DRS

Traditionally, DRS has always considered the compute resource (CPU and memory) utilizations of hosts and VMs for balancing load across hosts and placing VMs during power-on. This generally works well because in many cases, CPU and memory are the most important resources needed for good application performance.
However, because network utilization is not considered in this approach, it can sometimes place or migrate a VM to a host that is already network saturated. This might hurt application performance if the application happens to be network sensitive.
In vSphere 6.5, DRS is network-aware: it considers host network utilization and the network usage requirements of VMs during initial placement and load balancing. This makes DRS load balancing and initial placement of VMs more effective.

How It Works

During initial placement and load balancing, DRS first builds a list of the best possible hosts to run a VM based on compute resources, and then uses heuristics to choose the final host based on VM and host network utilization. This ensures the VM gets the network resources it needs along with the compute resources.

The goal of network-aware DRS in vSphere 6.5 is only to make sure the host has sufficient network resources available along with compute resources required by the VM. So, unlike regular DRS, which balances the CPU and memory load, network-aware DRS does not balance the network load in the cluster, which means it will not trigger a vMotion when there is network load imbalance.
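The two-step heuristic described above can be sketched as follows. This is a simplification under stated assumptions: the field names (`free_cpu`, `free_mem`, `net_util`) and the 80% saturation threshold are hypothetical stand-ins, not DRS's actual internals.

```python
def place_vm(vm_name, hosts, net_threshold=0.8):
    """Sketch of network-aware placement: shortlist hosts by compute headroom,
    then prefer hosts that are not network saturated. Falls back to the best
    compute candidate if every host is saturated, since network-aware DRS
    filters placements but does not balance network load itself."""
    # Step 1: rank hosts by free compute (a crude CPU + memory headroom score).
    candidates = sorted(hosts, key=lambda h: h["free_cpu"] + h["free_mem"], reverse=True)
    # Step 2: among the compute candidates, prefer network-healthy hosts.
    not_saturated = [h for h in candidates if h["net_util"] < net_threshold]
    pool = not_saturated or candidates
    return pool[0]["name"]


hosts = [
    {"name": "esx01", "free_cpu": 16, "free_mem": 64, "net_util": 0.9},  # most compute, saturated
    {"name": "esx02", "free_cpu": 12, "free_mem": 48, "net_util": 0.3},
]
assert place_vm("vm1", hosts) == "esx02"
```

Note how the network check only filters the compute shortlist; it never initiates a move on its own, mirroring the point that network-aware DRS will not trigger a vMotion purely because network load is imbalanced.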

Download

Download the full DRS Performance in VMware vSphere 6.5 study guide.

Rating: 5/5


Jun 14

VMware vCenter Server 6.0 Performance and Best Practices

Introduction

VMware vCenter Server™ 6.0 substantially improves performance over previous vCenter Server versions. This paper demonstrates the improved performance in vCenter Server 6.0 compared to vCenter Server 5.5, and shows that vCenter Server with the embedded vPostgres database now performs as well as vCenter Server with an external database, even at vCenter Server’s scale limits. This paper also discusses factors that affect vCenter Server performance and provides best practices for vCenter Server performance.

What’s New in vCenter Server 6.0

vCenter Server 6.0 brings extensive improvements in performance and scalability over vCenter Server 5.5:

  • Operational throughput is over 100% higher, and certain operations are over 80% faster.
  • VMware vCenter Server™ Appliance™ now has the same scale limits as vCenter Server on Windows with an external database: 1,000 ESXi hosts, 10,000 powered-on virtual machines, and 15,000 registered virtual machines.
  • VMware vSphere® Web Client performance has improved, with certain pages over 90% faster.

In addition, vCenter Server 6.0 provides new deployment options:

  • Both vCenter Server on Windows and VMware vCenter Server Appliance provide an embedded vPostgres database as an alternative to an external database. (vPostgres replaces the SQL Server Express option that was available in previous vCenter versions.)
  • The embedded vPostgres database supports vCenter’s full scale limits when used with the vCenter Server Appliance.

Performance Comparison with vCenter Server 5.5

In order to demonstrate and quantify performance improvements in vCenter Server 6.0, this section compares 6.0 and 5.5 performance at several inventory and workload sizes. In addition, this section compares vCenter Server 6.0 on Windows to the vCenter Server Appliance at different inventory sizes, to highlight the larger scale limits in the Appliance in vCenter 6.0. Finally, this section illustrates the performance gained by provisioning vCenter with additional resources.

The workload for this comparison uses vSphere Web Services API clients to simulate a self-service cloud environment with a large amount of virtual machine “churn” (that is, frequently creating, deleting, and reconfiguring virtual machines). Each client repeatedly issues a series of inventory management and provisioning operations to vCenter Server. Table 1 lists the operations performed in this workload. The operations listed here were chosen from a sampling of representative customer data. Also, the inventories in this experiment used vCenter features including DRS, High Availability, and vSphere Distributed Switch. (See Appendix A for precise details on inventory configuration.)

Table 1. Operations performed in the performance comparison workload

Results

Figure 3 shows vCenter Server operation throughput (in operations per minute) for the heaviest workload for each inventory size. Performance has improved considerably at all sizes. For example, for the large inventory setup (Figure 3, right), operational throughput has increased from just over 600 operations per minute in vCenter Server 5.5 to over 1,200 operations per minute in vCenter Server 6.0 for Windows: an improvement of over 100%.
The other inventory sizes show similar gains in operational throughput.

Figure 3. vCenter throughput at several inventory sizes, with heavy workload (higher is better). Throughput has increased at all inventory sizes in vCenter Server 6.0.

Figure 4 shows median latency across all operations in the heaviest workload for each inventory size. Just as with operational throughput in Figure 3, latency has improved at all inventory sizes. For example, for the large inventory setup (Figure 4, right), median operational latency has decreased from 19.4 seconds in vCenter Server 5.5 to 4.0 seconds in vCenter Server Appliance 6.0: a decrease of about 80%. The other inventory sizes also show large decreases in operational latency.

Figure 4. vCenter Server median latency at several inventory sizes, with heavy workload (lower is better). Latency has decreased at all inventory sizes in vCenter 6.0.

Download

Download the full VMware vCenter Server 6.0 Performance and Best Practices technical white paper.

Rating: 5/5


Jun 11

Oracle Databases on VMware Best Practices Guide

Introduction

This Oracle Databases on VMware Best Practices Guide provides best practice guidelines for deploying Oracle databases on VMware vSphere®. The recommendations in this guide are not specific to any particular set of hardware, or size and scope of any particular Oracle database implementation. The examples and considerations provide guidance, but do not represent strict design requirements.

The successful deployment of Oracle on vSphere 5.x/6.0 is not significantly different from deploying Oracle on physical servers. DBAs can fully leverage their current skill set while also delivering the benefits associated with virtualization.

In addition to this guide, VMware has created separate best practice documents for storage, networking, and performance.

This document also includes information from two white papers: Performance Best Practices for VMware vSphere 5.5 and Performance Best Practices for VMware vSphere 6.0.

VMware Support for Oracle Databases on vSphere

Oracle has a support statement for VMware products (MyOracleSupport 249212.1). While there has been much public discussion about Oracle’s perceived position on support for VMware virtualization, experience shows that Oracle Support upholds its commitment to customers, including those using VMware virtualization in conjunction with Oracle products.

VMware is also an Oracle customer. VMware IT's E-Business Suite and Siebel implementations are virtualized. VMware routinely submits service requests and receives assistance for issues with Oracle running on VMware virtual infrastructure. MyOracleSupport (MetaLink) Document ID 249212.1 provides the specifics of Oracle's support commitment to VMware. Gartner, IDC, and others also have documents available to their subscribers that specifically address this policy.

VMware Oracle Support Process

VMware support will accept tickets for any Oracle-related issue reported by a customer and will help drive the issue to resolution. To augment Oracle’s support document, VMware also has a total ownership policy for customers with Oracle issues as described in the letter at VMware® Oracle Support Affirmation.

By taking this accountability, VMware Support will drive the issue to resolution regardless of which vendor (VMware, Oracle, or other) is responsible for the fix. In most cases, reported issues can be resolved through configuration changes, bug fixes, or feature enhancements by one of the involved vendors. VMware is committed to its customers' success and supports their choice to run Oracle software in modern, virtualized environments. For further information, see https://www.vmware.com/support/policies/oracle-support.

Figure 1 – VMware vSphere Oracle Support Process

Download

Download the full Oracle Databases on VMware Best Practices Guide.

Rating: 5/5


Jun 11

NSX-v 6.2.x – Security Hardening Guide

Created by RobertoMari on Oct 12, 2014 5:22 PM. Last modified by vwade on Jun 10, 2016 2:40 PM.

VMware NSX Hardening Guide Authors: Pravin Goyal, Greg Christopher, Michael Haines, Roberto Mari, Kausum Kumar, Wade Holmes

This is Version 1.6 of the VMware® NSX for vSphere Hardening Guide.

This guide provides prescriptive guidance for customers on how to deploy and operate VMware® NSX in a secure manner.

Acknowledgements to the following contributors for reviewing and providing feedback on various sections of the document: Kausum Kumar, Roberto Mari, Scott Lowe, Ben Lin, Bob Motanagh, Dmitri Kalintsev, Greg Frascadore, Hadar Freehling, Kiran Kumar Thota, Pierre Ernst, Rob Randell, Roie Ben Haim, Yves Fauser

The guide is provided in an easy-to-consume spreadsheet format with rich metadata (similar to the existing VMware vSphere Hardening Guides) to allow for guideline classification and risk assessment.

Feedback and comments to the authors and the NSX Solution Team can be posted as comments on this community post (note: users must log in to VMware Communities before posting a comment).

Download

Download the full NSX-v Security Hardening Guide.

Rating: 5/5