Jul 14

vSphere Web Client after the Client Integration Plug-In Removal

The vSphere 6.5 release no longer requires that you install the Client Integration Plug-In. In certain cases, workflows have changed slightly; this video covers those changes.

Rating: 5/5


May 30

What's New in vSphere 6.5: vCenter Server High Availability

This video covers what's new in vSphere 6.5 vCenter Server High Availability.

Rating: 5/5


May 30

What’s New in vSphere 6.5 Migration

This video covers what's new in vSphere 6.5 when migrating from a Windows vCenter Server to the vCenter Server Appliance 6.5.

Rating: 5/5


May 30

Introduction to the vSphere Client 6.5

This video is an introduction to some new features in the vSphere Client 6.5.

Rating: 5/5


May 30

What’s New in vSphere 6.5 vCenter Server Appliance 6.5 File-Based Backup and Restore

This video covers what's new in the vCenter Server Appliance 6.5 file-based backup and restore.

Rating: 5/5


May 15

vSphere 6.5 Upgrade Considerations Part 1

Emad Younis posted May 15, 2017.
The release of vSphere 6.5 in November 2016 introduced many new features and enhancements. These include the vCenter Server Appliance (VCSA) now becoming the default deployment. vCenter Server native high availability, which protects vCenter Server from application failure. Built-in File-Based backup and restore allows customers the ability to backup their vCenter Server from the VAMI or by API. The VCSA restore can simply be done by mounting the original ISO used to deploy the VCSA and selecting the restore option. These features and more are exclusive only to the vCenter Server Appliance. The new HTML5 vSphere Client is making its first official product debut with vSphere 6.5.

Did someone say security? We now have better visibility into vSphere changes with actionable logging. VM Encryption allows the encryption of a virtual machine, including its disks and snapshots. Secure Boot for ESXi ensures that only digitally signed code runs on the hypervisor. Secure Boot for VMs is as simple as checking a box. We've only begun to scratch the surface of all the new vSphere 6.5 features.


Product Education

As you start preparing for your vSphere 6.5 upgrade, a checklist will be the run book used to ensure its success. The upgrade process can be divided into three phases:

Phase 1: Pre-upgrade – all the upfront work that should be done before starting an upgrade.
Phase 2: Upgrade – mapping the steps of each component that will be upgraded.
Phase 3: Post-upgrade – validation to ensure everything went according to plan.
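The three phases above can be treated as a simple run book. A minimal Python sketch of tracking it (phase and task names are illustrative examples, not an official VMware checklist):

```python
# Minimal upgrade run-book tracker. Phase and task names are
# illustrative examples, not an official VMware checklist.
from collections import OrderedDict

runbook = OrderedDict([
    ("pre-upgrade",  ["health assessment", "review release notes",
                      "verify interoperability", "resolve open tickets"]),
    ("upgrade",      ["upgrade PSC/vCenter", "upgrade ESXi hosts",
                      "upgrade VMware Tools"]),
    ("post-upgrade", ["validate services", "validate backups",
                      "confirm third-party integrations"]),
])

done = set()

def complete(task):
    done.add(task)

def phase_ready(phase):
    """A phase may start only when every prior phase is complete."""
    phases = list(runbook)
    idx = phases.index(phase)
    return all(t in done for p in phases[:idx] for t in runbook[p])

for task in runbook["pre-upgrade"]:
    complete(task)
print(phase_ready("upgrade"))        # True once pre-upgrade is done
print(phase_ready("post-upgrade"))   # False: upgrade tasks remain
```

The point of the ordering check is the same one the post makes: don't start the upgrade phase until every pre-upgrade item is closed out.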

The first part of any successful upgrade is determining the benefits of the new features and the value add they will provide to your business. Next is getting familiar with these new features and how they will be implemented in your environment. The following list will get you started learning each of the new vSphere 6.5 features and their benefits.

Another way to get familiar with the new features and the upgrade process is a hands-on approach in a lab environment. If you have a lab environment at your disposal, try building it as close to your production environment as possible to simulate both the upgrade process and the new feature implementation. If a lab environment is not available, there are options like VMware Workstation or Fusion if you have the resources to run them. Last, but not least, there are also the VMware Hands-on Labs, which do not require any resources and provide a guided approach. No matter which option you select, the key is getting familiar and comfortable with the upgrade process.

Health Assessment

Doing a health assessment of your current environment is critical. Nothing is worse than being in the middle of an upgrade and having to spend hours troubleshooting an issue, only to find out it was related to a misconfiguration of something as simple as DNS or NTP. Another advantage of doing a health assessment is discovering wasted resources, for example, virtual machines that are no longer needed but have yet to be decommissioned. The health assessment should cover all components (compute, storage, network, 3rd party) that interact with your vSphere environment. Please consult with your compute, storage, and network vendors for health assessment best practices and tools. Environmental issues are high on the list when it comes to upgrade show stoppers. The good news is that they can be prevented.
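As a rough illustration of one such check, the sketch below verifies that forward and reverse DNS agree for each host. The resolver is injected so the logic can run against stub data; the hostnames and addresses are made up for the example.

```python
# Hypothetical pre-upgrade health check: verify that forward and
# reverse DNS agree for each host. The lookup callables are injected
# so the check is testable; in a real environment you would wire them
# to socket.gethostbyname / socket.gethostbyaddr.
def check_dns(hosts, forward, reverse):
    """hosts: iterable of FQDNs; forward/reverse: lookup callables
    returning an IP / FQDN or None. Returns a list of (fqdn, problem)
    findings; an empty list means the DNS check passed."""
    findings = []
    for fqdn in hosts:
        ip = forward(fqdn)
        if ip is None:
            findings.append((fqdn, "no A record"))
            continue
        if reverse(ip) != fqdn:
            findings.append((fqdn, "reverse record mismatch"))
    return findings

# Stub lookup tables standing in for the real DNS:
fwd = {"esx01.lab.local": "10.0.0.11", "esx02.lab.local": "10.0.0.12"}
rev = {"10.0.0.11": "esx01.lab.local", "10.0.0.12": "esx99.lab.local"}
issues = check_dns(fwd, fwd.get, rev.get)
print(issues)  # [('esx02.lab.local', 'reverse record mismatch')]
```

The same pattern extends to NTP offset checks or datastore free-space checks: collect findings up front, and treat any non-empty result as a stop sign for the upgrade.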

There are also VMware and community tools that can help by providing reports on your current environment. Most of these tools come with a 60-day evaluation period, which is enough time to get the information needed. When using community tools please keep in mind they are not officially supported by VMware. Finally, there is also the VMware vSphere health check done by a certified member of VMware’s professional services team. Check with your VMware representative for more information.

Conducting the health assessment could lead to discovering an issue that requires the help of support and opening a ticket. Do not proceed with the upgrade until all open support tickets have been resolved. There are instances where an issue can be fixed by applying a patch or an update, but make sure that any environmental problems have been completely resolved prior to proceeding. This includes not only VMware support tickets, but also those for the compute, storage, network, and 3rd-party components that interact with your vSphere environment.

Important Documents

Now that we've learned about the features and completed a health assessment of our current vSphere environment, it's time to start mapping out the upgrade process. The first step is looking at important documents like the vSphere 6.5 documentation, product release notes, knowledge base articles, and guides. Each of these documents has pieces of information which are vital to ensuring a successful upgrade. Product release notes, for example, provide information on what's new, but also information about upgrades, any known issues, and other key details. Reading the vSphere 6.5 upgrade guide will give you an understanding of the upgrade process. The VMware Compatibility Guide and Product Interoperability Matrices will ensure components and upgrade paths are supported. Here is a breakdown of the important vSphere 6.5 documentation that should be reviewed prior to upgrading.


vSphere 6.5 Documents

Product Release Notes

Knowledge Base Articles

Guides

Documentation

Upgrades need to be done with a holistic view, from the hardware layer all the way to the application layer. With this philosophy in mind, a successful upgrade requires advance prep work to avoid any potential roadblocks. Things like health assessments shouldn't only be done when preparing for an upgrade, but also routinely. Think of it as a doctor's visit for your environment and getting a clean bill of health. vSphere 6.5 has now been available for six months, and four patches have since been released providing bug fixes and product updates. The HTML5 vSphere Client gained added features in vSphere 6.5.0 patch b, and vSAN Easy Install is available in 6.5.0 patch d. This agile patch cadence means customers no longer need to wait for the first update to consider upgrading to vSphere 6.5. The next few blog posts in this series will cover mapping out the upgrade process whiteboard style, architecture considerations for the vSphere Single Sign-On domain, migration, and upgrade paths.

At this point it is worth noting that the vSphere upgrade process can seem complex if not overwhelming, especially for our customers who use other tools that depend on vSphere and vCenter Server. We hear you. VMware is certainly working to make this better. I hope to be able to write about those improvements in the future. Until then you have upgrade homework to do!

About the Author

Emad Younis is a Staff Technical Marketing Architect and VCIX 6.5-DCV working in the Cloud Platform Business Unit, part of the R&D organization at VMware. He currently focuses on the vCenter Server Appliance, vCenter Server Migrations, and VMware Cloud on AWS. His responsibilities include generating content, evangelism, collecting product feedback, and presenting at events. Emad can be found blogging on emadyounis.com or on Twitter via @emad_younis.

Rating: 5/5


Mar 11

VMware vSphere Encrypted vMotion Architecture, Performance and Best Practices

Executive Summary

With the rise in popularity of hybrid cloud computing, where VM-sensitive data leaves the traditional IT environment and traverses over the public networks, IT administrators and architects need a simple and secure way to protect critical VM data that traverses across clouds and over long distances.

The Encrypted vMotion feature available in VMware vSphere® 6.5 addresses this challenge by introducing a software approach that provides end-to-end encryption for vMotion network traffic. The feature encrypts all the vMotion data inside the vmkernel by using the widely used AES-GCM encryption standard, and thereby provides data confidentiality, integrity, and authenticity even if vMotion traffic traverses untrusted network links.

Experiments conducted in the VMware performance labs using industry-standard workloads show the following:

  • vSphere 6.5 Encrypted vMotion performs nearly the same as regular, unencrypted vMotion.
  • The CPU cost of encrypting vMotion traffic is very moderate, thanks to the performance optimizations added to the vSphere 6.5 vMotion code path.
  • vSphere 6.5 Encrypted vMotion provides the proven reliability and performance guarantees of regular, unencrypted vMotion, even across long distances.

Introduction

VMware vSphere® vMotion® [1] provides the ability to migrate a running virtual machine from one vSphere host to another, with no perceivable impact to the virtual machine’s performance. vMotion brings enormous benefits to administrators—it reduces server downtime and facilitates automatic load-balancing.

During migration, the entire memory and disk state associated with a VM, along with its metadata, are transferred over the vMotion network. It is possible during VM migration for an attacker with sufficient network privileges to compromise a VM by modifying its memory contents in transit to subvert the VM's applications or its guest operating system. Due to this possible security risk, VMware highly recommended administrators use an isolated or secured network for vMotion traffic, separate from other datacenter networks such as the management network or provisioning network. This protected the VM's sensitive data as it traversed over a secure network.

Even though this recommended approach adds slightly higher network and administrative complexity, it works well in a traditional IT environment where the customer owns the complete network infrastructure and can secure it. In a hybrid cloud, however, workloads move dynamically between clouds and datacenters over secured and unsecured network links. Therefore, it is essential to secure sensitive vMotion traffic at the network endpoints. This protects critical VM data even as the vMotion traffic leaves the traditional IT environment and traverses over the public networks.

vSphere 6.5 introduces Encrypted vMotion, which provides end-to-end encryption of vMotion traffic and protects VM data from eavesdropping occurrences on untrusted network links. Encrypted vMotion provides complete confidentiality, integrity, and authenticity of the data transferred over a vMotion network without any requirement for dedicated networks or additional hardware.

The sections that follow describe:

  • vSphere 6.5 Encrypted vMotion technology and architecture
  • How to configure Encrypted vMotion from the vSphere Client
  • Performance implications of encrypting vMotion traffic using real-life workload scenarios
  • Best practices for deployment

Encrypted vMotion Architecture

vMotion uses TCP as the transport protocol for migrating VM data. To secure VM migration, vSphere 6.5 encrypts all vMotion traffic, including the TCP payload and vMotion metadata, using the widely used AES-GCM encryption algorithm, provided by the FIPS-certified vmkernel vmkcrypto module.

Figure: Encrypted vMotion workflow

Encryption Protocol

Encrypted vMotion does not rely on Secure Sockets Layer (SSL) or Internet Protocol Security (IPsec) technologies to secure vMotion traffic. Instead, it implements a custom encryption protocol above the TCP layer. This is done primarily for performance, but also for the reasons explained below.

SSL is compute intensive and implemented entirely in user space, while vMotion, which constitutes core ESXi, executes in kernel space. This means that if vMotion were to rely on SSL, each encryption/decryption call would need to traverse between kernel and user space, resulting in excessive performance overhead. Using the encryption algorithms provided by the vmkernel vmkcrypto module enables vMotion to avoid such a performance penalty.

Although IPsec can be used to secure vMotion traffic, its usability is limited in the vSphere environment because ESXi hosts support IPsec only for IPv6 traffic, not for IPv4 traffic. Besides, implementing a custom protocol above the TCP layer gives vMotion the ability to create the appropriate number of vMotion worker threads and coordinate efficiently among them to spread the encryption/decryption CPU load across multiple cores.
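The per-VM Encrypted vMotion setting in vSphere 6.5 has three modes: Disabled, Opportunistic (the default), and Required. The sketch below models how migration could be negotiated under each mode; it is a simplified illustration, not ESXi's actual implementation.

```python
# Simplified model of the three per-VM Encrypted vMotion modes in
# vSphere 6.5 (disabled / opportunistic / required). The negotiation
# logic is an illustrative sketch, not ESXi's implementation.
def negotiate(mode, src_supports, dst_supports):
    """Return 'encrypted' or 'plain', or raise if migration must fail."""
    both = src_supports and dst_supports
    if mode == "disabled":
        return "plain"
    if mode == "opportunistic":        # the default in vSphere 6.5
        return "encrypted" if both else "plain"
    if mode == "required":
        if not both:
            raise RuntimeError("vMotion refused: encryption unavailable")
        return "encrypted"
    raise ValueError("unknown mode: %s" % mode)

print(negotiate("opportunistic", True, True))   # encrypted
print(negotiate("opportunistic", True, False))  # plain
```

In the "required" mode the migration fails outright rather than fall back to plaintext, which is the behavior you want for VMs whose data must never cross the wire unencrypted.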

Download VMware vSphere Encrypted vMotion Architecture, Performance and Best Practices.

Rating: 5/5


Mar 11

DRS Performance in VMware vSphere 6.5

Introduction

VMware vSphere® Distributed Resource Scheduler™ (DRS) is more than a decade old and is constantly innovating with every new version. In vSphere 6.5, DRS comes with many new features and performance improvements to ensure more efficient load balancing and VM placement, faster response times, and simplified cluster management.

In this paper, we cover some of the key features and performance improvements to highlight the more efficient, faster, and lighter DRS in vSphere 6.5.

New Features

Predictive DRS
Historically, vSphere DRS has been reactive—it reacts to any changes in VM workloads and migrates the VMs to distribute load across different hosts. In vSphere 6.5, with VMware vCenter Server® working together with VMware vRealize® Operations™ (vROps), DRS can act on predicted future changes in workloads. This helps DRS migrate VMs proactively and make room in the cluster to accommodate future workload demand.

For example, if your VMs' workload is going to spike at 9am every day, predictive DRS will be able to detect this pattern beforehand based on historical data from vROps, and can prepare the cluster resources by using either of the following techniques:

  • Migrating the VMs to different hosts to accommodate the future workload and avoid host over-commitment
  • Bringing a host back from standby mode using VMware vSphere® Distributed Power Management™ (DPM) to accommodate the future demand

How It Works

To enable predictive DRS, you need to link vCenter Server to a vROps instance (that supports predictive DRS), which monitors the resource usage pattern of VMs and generates predictions. Once vROps starts monitoring VM workloads, it generates predictions after a specified learning period. The generated predictions are then provided to vCenter Server for DRS to consume.

Once the VMs’ workload predictions are available, DRS evaluates the demand of a VM based on its current resource usage and predicted future resource usage.

    Demand of a VM = Max (current usage, predicted future usage)

Considering the maximum of current and future resource usage ensures that DRS does not clip any VM's current demand in favor of its future demand. For VMs which do not have predictions, DRS computes resource demand based only on the current resource usage.
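The formula above is straightforward to express in code; a minimal sketch:

```python
# Predictive DRS demand, per the paper's formula:
#   demand = max(current usage, predicted future usage)
# VMs without a prediction fall back to current usage only.
def vm_demand(current, predicted=None):
    if predicted is None:
        return current
    return max(current, predicted)

print(vm_demand(2000, 3500))  # 3500: the future spike drives demand
print(vm_demand(2000, 1200))  # 2000: current usage is never clipped
print(vm_demand(2000))        # 2000: no prediction available
```

The second case shows why the max matters: a low prediction can never push a VM's demand below what it is actually using right now.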

Look Ahead Interval

The predictions that DRS gets from vROps always cover a certain period of time starting from the current time. This period is known as the "look-ahead interval" for predictive DRS. By default it is 60 minutes starting from the current time, which means the predictions will always be for the next one hour. So if there is any sudden spike that is going to happen in the next hour, predictive DRS will detect it and prepare the cluster to handle it.
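A sketch of how a look-ahead window might be applied to prediction samples; the per-minute sample data is made up for illustration, and the 60-minute default matches the interval described above.

```python
# Given predicted usage samples as (minutes from now, MHz) pairs,
# take the peak within the look-ahead window. Sample data is
# illustrative; 60 minutes is the default interval described above.
def peak_in_window(predictions, look_ahead_minutes=60):
    window = [usage for minute, usage in predictions
              if 0 <= minute < look_ahead_minutes]
    return max(window) if window else None

samples = [(15, 1200), (45, 4000), (90, 6000)]
print(peak_in_window(samples))       # 4000: the 90-minute spike is outside the window
print(peak_in_window(samples, 120))  # 6000: a wider window sees it
```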

Network-Aware DRS

Traditionally, DRS has always considered the compute resource (CPU and memory) utilizations of hosts and VMs for balancing load across hosts and placing VMs during power-on. This generally works well because in many cases, CPU and memory are the most important resources needed for good application performance. However, since network availability is not considered in this approach, it sometimes results in placing or migrating a VM to a host which is already network saturated. This can impact the performance of the application if it happens to be network sensitive.

DRS is network-aware in vSphere 6.5, so it now considers the network utilization of hosts and the network usage requirements of VMs during initial placement and load balancing. This makes DRS load balancing and initial placement of VMs more effective.

How It Works

During initial placement and load balancing, DRS first comes up with a list of the best possible hosts to run a VM based on compute resources, and then uses some heuristics to decide the final host based on VM and host network utilizations. This makes sure the VM gets the network resources it needs along with the compute resources.

The goal of network-aware DRS in vSphere 6.5 is only to make sure the host has sufficient network resources available along with compute resources required by the VM. So, unlike regular DRS, which balances the CPU and memory load, network-aware DRS does not balance the network load in the cluster, which means it will not trigger a vMotion when there is network load imbalance.
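The two-stage selection described above can be sketched as follows. The ranking heuristics, saturation threshold, and field names are illustrative assumptions, not VMware's actual algorithm.

```python
# Illustrative two-stage placement: rank hosts on compute headroom
# first, then use network utilization among the top candidates.
# Thresholds and field names are made up, not VMware's heuristics.
def place_vm(hosts, compute_candidates=2, net_saturation=0.8):
    # Stage 1: shortlist hosts with the most free compute.
    ranked = sorted(hosts, key=lambda h: h["free_compute"], reverse=True)
    candidates = ranked[:compute_candidates]
    # Stage 2: avoid network-saturated hosts among the shortlist and
    # pick the one with the lowest network utilization.
    eligible = [h for h in candidates if h["net_util"] < net_saturation]
    pool = eligible or candidates   # fall back to compute-only choice
    return min(pool, key=lambda h: h["net_util"])["name"]

hosts = [
    {"name": "esx01", "free_compute": 40, "net_util": 0.90},
    {"name": "esx02", "free_compute": 35, "net_util": 0.20},
    {"name": "esx03", "free_compute": 10, "net_util": 0.05},
]
print(place_vm(hosts))  # esx02: near-best compute, low network load
```

Note how esx01 is skipped despite having the most free compute: it is already network saturated, which is exactly the placement mistake network-aware DRS is meant to avoid.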

Download

Download the full DRS Performance in VMware vSphere 6.5 study.

Rating: 5/5


Jan 30

vCenter Server Appliance 6.5 Migration Walkthrough

Emad Younis posted January 30, 2017

vCenter Server migrations have typically taken massive planning, a lot of effort, and time. The new Migration Tool included in the vCenter Server Appliance (VCSA) 6.5 is a game changer: no more scripts and long nights of moving hosts one cluster at a time. The Migration Tool does all the heavy lifting, copying the configuration and inventory of the source vCenter Server by default. The migration workflow covers upgrading from either a Windows vCenter Server 5.5 or 6.0 to VCSA 6.5. A new guided migration walkthrough is available on the VMware Feature Walkthrough site. This click-by-click guide covers an embedded migration from a Windows vCenter Server 6.0 to a VCSA 6.5.

Migration Assistant

The first step of the migration workflow requires running the Migration Assistant (MA). The Migration Assistant serves two purposes. The first is running pre-checks on the source Windows vCenter Server. The Migration Assistant displays warnings about installed extensions and provides a resolution for each. It will also show the source and destination deployment types. Keep in mind that changing the deployment type is not allowed during the migration workflow. More information on deployment type considerations prior to a migration can be found here. The MA also displays some information about the source Windows vCenter Server, including: FQDN, SSO user, SSL thumbprint, port, and MA log folder. At the bottom of the MA are the Migration Steps, which will be available until the source Windows vCenter Server is shut down. This is a helpful guide to the migration steps that need to be completed.

The second purpose of the MA is copying the source Windows vCenter Server data. By default, the configuration and inventory data of the Windows vCenter Server is migrated. The option to copy historical and performance data is also available. During the migration workflow, no changes are made to the source Windows vCenter Server. This allows for an easy rollback plan. Do not close the Migration Assistant at any point during the migration workflow; closing the MA will result in starting the entire migration process over. If everything is successful, there will be a prompt at the bottom of the Migration Assistant to start the migration.

Migration Tool

Step two of the migration workflow is starting the wizard-driven Migration Tool. This requires the vCenter Server Appliance 6.5 installer. Since the identity of the source Windows vCenter Server is preserved, the Migration Tool needs to run on a Windows server separate from the source. Like the VCSA 6.5 deployment, migration is also a two-stage process. The Migration Tool first deploys a new vCenter Server Appliance. The new VCSA will have a temporary IP address while the source Windows vCenter data is copied. The second stage configures the VCSA 6.5 and imports the source Windows vCenter Server data, including the identity of the source Windows vCenter Server. The vCenter Server identity includes the FQDN, IP address, UUID, certificates, MoRef IDs, etc. As far as other solutions that communicate with vCenter Server are concerned, nothing has changed, though some may require an upgrade; consult the VMware and any third-party interoperability matrices. Once the migration workflow is completed, log in to the vSphere Client and validate your environment.

Walkthroughs

The vCenter Server 6.0 Embedded Migration to Appliance walkthrough is available here. This guide shows how to migrate a Windows vCenter Server with the Platform Services Controller 6.0 components on a single virtual machine to a vCenter Server Appliance 6.5. Another feature walkthrough for external migration, including vSphere Update Manager (VUM), will be available soon. In the meantime, go through the embedded migration and provide any feedback in the comments section below. Also feel free to reach out to me on Twitter @emad_younis.



Dec 14

Configuration Maximum changes from vSphere 6.0 to vSphere 6.5

vSphere 6.5 is now available, and with every release VMware makes changes to the configuration maximums for vSphere. Since VMware does not highlight what has changed between releases in the official documentation, I compared the Configuration Maximums 6.5 document with the vSphere 6.0 Configuration Maximums. The changes between the versions are listed here.

Configuration vSphere 6.5 vSphere 6.0

Virtual Machines Maximums

RAM per VM 6128GB 4080GB
Virtual NVMe adapters per VM 4 N/A
Virtual NVMe targets per virtual SCSI adapter 15 N/A
Virtual NVMe targets per VM 60 N/A
Virtual RDMA Adapters per VM 1 N/A
Video memory per VM 2GB 512MB

ESXi Host Maximums

Logical CPUs per host 576 480
RAM per host 12TB 6TB *some exceptions
LUNs per server 512 256
Number of total paths on a server 2048 1024
FC LUNs per host 512 256
LUN ID 0 to 16383 0 to 1023
VMFS Volumes per host 512 256
FT virtual machines per cluster 128 98

vCenter Maximum

Hosts per vCenter Server 2000 1000
Powered-on VMs per vCenter Server 25000 10000
Registered VMs per vCenter Server 35000 15000
Number of host per datacenter 2000 500
Maximum mixed vSphere Client (HTML5) + vSphere Web Client simultaneous connections per VC 60 (30 Flex, 30 maximum HTML5) N/A
Maximum supported inventory for vSphere Client (HTML5) 10,000 VMs, 1,000 Hosts N/A
Host Profile Datastores 256 120
Host Profile Created 500 1200
Host Profile Attached 500 1000

Platform Services Controller Maximums

Maximum PSCs per vSphere Domain 10 8

vCenter Server Extensions Maximums

[VUM] VMware Tools upgrade per ESXi host 30 24
[VUM] Virtual machine hardware upgrade per host 30 24
[VUM] VMware Tools scan per VUM server 200 90
[VUM] VMware Tools upgrade per VUM server 200 75
[VUM] Virtual machine hardware scan per VUM server 200 90
[VUM] Virtual machine hardware upgrade per VUM server 200 75
[VUM] ESXi host scan per VUM server 232 75
[VUM] ESXi host patch remediation per VUM server 232 71
[VUM] ESXi host upgrade per VUM server 232 71

Virtual SAN Maximums

Virtual machines per cluster 6000 6400
Number of iSCSI LUNs per Cluster 1024 N/A
Number of iSCSI Targets per Cluster 128 N/A
Number of iSCSI LUNs per Target 256 N/A
Max iSCSI LUN size 62TB N/A
Number of iSCSI sessions per Node 1024 N/A
iSCSI IO queue depth per Node 4096 N/A
Number of outstanding writes per iSCSI LUN 128 N/A
Number of outstanding IOs per iSCSI LUN 256 N/A
Number of initiators that register a PR key for an iSCSI LUN 64 N/A

Storage Policy Maximums

Maximum number of VM storage policies 1024 Not Published
Maximum number of VASA providers 1024 Not Published
Maximum number of rule sets in VM storage policy 16 N/A
Maximum capabilities in VM storage policy rule set 64 N/A
Maximum vSphere tags in virtual machine storage policy 128 Not Published
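Tables like the ones above can also be kept as data and checked against a live inventory. A small sketch using a few of the vCenter maximums listed in this post; the inventory numbers are made-up examples.

```python
# A few vSphere 6.5 vCenter maximums from the tables in this post,
# kept as data so an inventory report can be validated against them.
VC65_MAX = {
    "hosts_per_vcenter": 2000,
    "powered_on_vms": 25000,
    "registered_vms": 35000,
}

def over_limits(inventory, limits=VC65_MAX):
    """Return {metric: (actual, maximum)} for every metric that
    exceeds its configuration maximum."""
    return {k: (v, limits[k]) for k, v in inventory.items()
            if k in limits and v > limits[k]}

# Made-up inventory counts for illustration:
inv = {"hosts_per_vcenter": 1800, "powered_on_vms": 26000}
print(over_limits(inv))  # {'powered_on_vms': (26000, 25000)}
```

Running a check like this before an upgrade (or a consolidation) flags anything that would land you beyond a supported maximum.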

Download

Download the full VMware vSphere 6.5 Configuration Maximums document.
Download the full VMware vSphere 6.0 Configuration Maximums document.

Rating: 5/5