
Understanding the Impacts of Mixed-Version vCenter Server Deployments

Adam Eckerle posted September 29, 2017.

There are a few questions that come up quite often regarding vCenter Server upgrades and mixed versions that we would like to address. In this blog post we will discuss and attempt to clarify the guidance in the vSphere Documentation for Upgrade or Migration Order and Mixed-Version Transitional Behavior for Multiple vCenter Server Instance Deployments. That documentation breaks down what happens during the vCenter Server upgrade process and describes the impact of having the components (vCenter Server and the Platform Services Controller, or PSC) running at different versions during the upgrade window. For example, once you have upgraded some vCenter Server instances, say to 6.5 Update 1, you won’t be able to manage those upgraded instances from any 5.5 instances. While most of the functionality limitations manifest themselves when upgrading from 5.5 to 6.x, there can also be some quirks in environments running a mix of 6.0 and 6.5. A couple of additional questions tend to arise from this documentation, so let’s see if we can address them.

The Upgrade Process

I’m not going to go through the entire process here, but it is important to understand the basics of how a vCenter Server upgrade works. Remember that there are two components to vCenter Server: the Platform Services Controller (PSC), which runs the vSphere Single Sign-On (SSO) Domain, and vCenter Server itself. For a vCenter Server upgrade, the vSphere Domain, and all PSCs within it, must be upgraded first. Once that is complete, the vCenter Servers can be upgraded. Obviously, if you have a standalone vCenter Server with an embedded PSC, this is a much simpler proposition. But for those requiring external PSCs because of other requirements such as Enhanced Linked Mode, just remember that the PSCs need to be upgraded first.

[Figure: Mixed-Version Upgrade Phases configuration]

The other important point to make here is that upgrading by site is not supported. Looking at the above example, you can see there are two sites, each with an external PSC and a vCenter Server. It is common for a customer to want to upgrade an entire site, test, and then move on to the next site. Unfortunately, this is not supported, and all PSCs within the vSphere Domain, across all sites, must be upgraded first.
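The ordering rule above (all PSCs in the domain first, across all sites, then the vCenter Servers) can be sketched as a small planning helper. This is purely illustrative; the function and component names are hypothetical and not part of any VMware tooling.

```python
def plan_upgrade_order(components):
    """Return a supported upgrade order for a vSphere Domain: every
    external PSC (across all sites) first, then every vCenter Server.
    Illustrative model only, not a VMware utility."""
    pscs = [name for name, role in components if role == "psc"]
    vcenters = [name for name, role in components if role == "vcenter"]
    # Site boundaries are irrelevant here: the domain layer (PSCs)
    # always upgrades before any vCenter Server, regardless of site.
    return pscs + vcenters

# Two sites, each with an external PSC and a vCenter Server:
order = plan_upgrade_order([
    ("vc-site-a", "vcenter"), ("psc-site-a", "psc"),
    ("vc-site-b", "vcenter"), ("psc-site-b", "psc"),
])
# → ["psc-site-a", "psc-site-b", "vc-site-a", "vc-site-b"]
```

Note that both PSCs come out ahead of both vCenter Servers, which is exactly why upgrading one complete site at a time is not a supported path.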

Mixed-Version Support

Now, on to the questions mentioned earlier. The first question is, “Can I run vCenter Servers and Platform Services Controllers (PSCs) of different versions in my vSphere Domain?” The answer here is yes, but only for the duration of an upgrade. VMware does not support running different versions of these components under normal operations within a vSphere Domain. The exact wording from the documentation is, “Mixed-version environments are not supported for production. Use these environments only during the period when an environment is in transition between vCenter Server versions.” So, do not plan on running different versions of vCenter Server and PSC in production on an ongoing basis.

The second question is then, “How long can I run in this mixed-version mode?” This question is a bit tougher to answer. There is no magic date or time bomb when things will just stop working. This is really more of a question of understanding the risks and knowing how problems may affect the environment should something go wrong while in this mixed-version state.

The Risks

An example of one such risk would be if you were upgrading from vSphere 5.5 to 6.5. Let’s say you had your vSphere Domain (i.e., the PSCs) and one vCenter Server already upgraded, leaving you with one or more vCenter Server 5.5 instances. Imagine that something happens that leaves one of those vCenter Server 5.5 instances completely wiped out. As long as you have a good, current backup, you could restore that vCenter Server 5.5 instance and be back in production. However, if the backup you need to restore from was taken prior to the start of the vSphere Domain upgrade, you would not be able to use it. The reason is that the vCenter Server instance you would be restoring expects a 5.5 vSphere Domain, and communication between that restored vCenter Server instance and the 6.5 PSC would not work. The alternative would be to roll back the entire vSphere Domain and any other vCenter Servers that were upgraded.
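The restore constraint described above boils down to one question: does the domain state captured in the backup match the domain (PSC) version that is actually running now? A minimal sketch, with a hypothetical function name:

```python
def backup_is_restorable(backup_domain_version, running_psc_version):
    """A vCenter Server backup is only usable if the vSphere Domain it
    was taken against matches the domain version currently running.
    Illustrative rule of thumb, not a VMware check."""
    return backup_domain_version == running_psc_version

# Backup taken after the domain (PSC) upgrade to 6.5: usable.
assert backup_is_restorable("6.5", "6.5")
# Backup taken before the domain upgrade still expects a 5.5 domain: not usable.
assert not backup_is_restorable("5.5", "6.5")
```

This is why taking fresh backups immediately after the PSC upgrade, before touching the vCenter Servers, matters so much during the transition window.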

Another risk would be being unable to restore that instance because the backups were bad (it does happen), or being unable to accept losing the data created since that backup was taken. In that case you would be forced to rebuild the vCenter Server instance and re-attach all of the hosts. This may not be desirable because the new vCenter Server instance would have a new UUID, and all of the hosts, VMs, and other objects would also have new MoRef IDs. Any backup tools or monitoring software would see these as all-new objects, and you would lose continuity of backups or monitoring. You would also have to rebuild the vCenter Server instance as 6.5, which may not be desirable either: you may have an application or other constraint that requires a specific version of vCenter Server, and rebuilding the instance as 6.5 could break that application.

[Figure: Mixed-Version Upgrade with a failure]

Finally, let’s consider the possibility of a PSC failure instead of losing a vCenter Server. What happens? Normally, you could easily repoint a vCenter Server instance to another external PSC within the same SSO Site. However, this is not possible if the vCenter Server is not running the same version as the PSC you are attempting to repoint to. For example, if you had a vCenter Server 5.5 or 6.0 instance pointing to a 6.5 PSC (because the PSC had already been upgraded) and that PSC failed, you would not be able to repoint that vCenter Server to another PSC; remember that all PSCs must be upgraded first, so every PSC should already be running 6.5. The only way to recover from this scenario is to restore or redeploy the failed PSC, which may take longer than repointing.
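The eligibility rule for repointing can be captured in a couple of lines (in vSphere 6.x the repoint itself is typically performed with the cmsso-util tool on the vCenter Server; the sketch below only models the constraint, and the function name is hypothetical):

```python
def can_repoint(vcenter_version, target_psc_version, same_sso_site=True):
    """Repointing a vCenter Server to another external PSC requires the
    target PSC to run the same version and, in this release, to be in
    the same SSO Site. Illustrative check only."""
    return vcenter_version == target_psc_version and same_sso_site

# Mid-upgrade: a 5.5 vCenter Server cannot be repointed to a 6.5 PSC,
# so losing its (already upgraded) PSC means restore or redeploy.
assert not can_repoint("5.5", "6.5")
assert can_repoint("6.5", "6.5")
```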

Recommendations

So, given the above scenarios, what do we tell a customer who asks, “My upgrade plan spans multiple sites over multiple months. How should I plan my upgrade?” Here are our recommendations:

    1. Minimize the upgrade window
    2. Follow the upgrade documentation
    3. Take full backups before, during, and after the upgrade
    4. Check the interop matrices and test the upgrade first

The first recommendation is to minimize the upgrade window as much as possible. We understand that there’s only so much you can do here, but it is important to reduce the amount of time you’ll be running different versions of vCenter Server (and PSC) in the same vSphere Domain. The second recommendation is, no matter how tempting it is to do otherwise, to upgrade the entire vSphere Domain (SSO instances and PSCs) first, as called out in the vSphere Documentation. It is not supported to upgrade everything in one site and then move on to the next; you must upgrade all SSO instances and PSCs in the vSphere Domain, across ALL sites and locations, first. Third, make sure you have good backups every step of the way. While snapshots can be a path to a quick rollback, they don’t always work when dealing with SSO, PSCs, and vCenter Server; taking a full backup ensures the ability to restore to a known clean state. Last, and certainly not least, do your interoperability testing and test the upgrade in a lab environment that represents your production environment as closely as possible.

Emad has a great 3-part series on upgrades (Part 1, Part 2, Part 3) so be sure to check it out prior to testing and beginning your upgrade. Also know and understand the risks and impacts of problems during the upgrade process. Finally, know how the upgrade process is going to affect all of the yet-to-be-upgraded parts of your environment and have good rollback and mitigation plans if any issues come up.

About the Author

Adam Eckerle manages the vSphere Technical Marketing team in the Cloud Platform Business Unit at VMware. This team is responsible for vSphere launch, enablement, and ongoing content generation for the VMware field, Partners, and Customers. In addition, Adam’s team is also focused on preparing Customers and Partners for vSphere upgrades through workshops, VMUGs, and other events.




vCenter Server Appliance 6.5 Migration Walkthrough

Emad Younis posted January 30, 2017

vCenter Server migrations have typically taken massive planning, a lot of effort, and time. The new Migration Tool included in the vCenter Server Appliance (VCSA) 6.5 is a game changer: no more scripts and long nights of moving hosts one cluster at a time. The Migration Tool does the heavy lifting, copying the configuration and inventory of the source vCenter Server by default. The migration workflow supports moving from either a Windows vCenter Server 5.5 or 6.0 to VCSA 6.5. A new guided migration walkthrough is available on the VMware Feature Walkthrough site. This click-by-click guide covers an embedded migration from a Windows vCenter Server 6.0 to a VCSA 6.5.

Migration Assistant

The first step of the migration workflow requires running the Migration Assistant (MA). The Migration Assistant serves two purposes. The first is running pre-checks on the source Windows vCenter Server. The Migration Assistant displays warnings about installed extensions and provides a resolution for each. It will also show the source and destination deployment types; keep in mind that changing the deployment type is not allowed during the migration workflow. More information on deployment type considerations prior to a migration can be found here. The MA also displays some information about the source Windows vCenter Server, including the FQDN, SSO user, SSL thumbprint, port, and MA log folder. At the bottom of the MA are the Migration Steps, which remain available until the source Windows vCenter Server is shut down. This is a helpful guide to the migration steps that need to be completed.

The second purpose of the MA is copying the source Windows vCenter Server data. By default, the configuration and inventory data of the Windows vCenter Server is migrated; the option to copy historical and performance data is also available. During the migration workflow, no changes are made to the source Windows vCenter Server, which allows for an easy rollback plan. Do not close the Migration Assistant at any point during the migration workflow; closing the MA will result in starting the entire migration process over. If everything is successful, there will be a prompt at the bottom of the Migration Assistant to start the migration.
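The two pre-check rules called out above (the deployment type may not change, and installed extensions produce warnings with resolutions) can be modeled in a few lines. This is a hypothetical sketch of the logic, not the Migration Assistant itself:

```python
def migration_precheck(source, destination):
    """Model of the Migration Assistant pre-checks described above.
    `source` and `destination` are plain dicts; names are illustrative."""
    issues = []
    # Changing deployment type (embedded <-> external) is not allowed.
    if source["deployment_type"] != destination["deployment_type"]:
        issues.append("changing deployment type is not allowed during migration")
    # Installed extensions generate warnings to review before migrating.
    for ext in source.get("extensions", []):
        issues.append("warning: extension '%s' detected; review its resolution" % ext)
    return issues

issues = migration_precheck(
    {"deployment_type": "embedded", "extensions": ["vSphere Update Manager"]},
    {"deployment_type": "external"},
)
```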

Migration Tool

Step two of the migration workflow is starting the wizard-driven Migration Tool, which requires the vCenter Server Appliance 6.5 installer. Since the identity of the source Windows vCenter Server is preserved, the Migration Tool needs to run on a separate Windows server from the source. Like the VCSA 6.5 deployment, migration is a two-stage process. The Migration Tool first deploys a new vCenter Server Appliance, which uses a temporary IP address while the source Windows vCenter Server data is copied. The second stage configures the VCSA 6.5 and imports the source Windows vCenter Server data, including the identity of the source Windows vCenter Server. The vCenter Server identity includes the FQDN, IP address, UUID, certificates, MoRef IDs, etc. From the perspective of other solutions that communicate with vCenter Server, nothing has changed, although some solutions may require an upgrade; consult the VMware and any third-party interoperability matrices. Once the migration workflow is completed, log in to the vSphere Client and validate your environment.
[Figure: Migration Tool]
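The two-stage flow described above can be sketched as follows. All names and values here are hypothetical placeholders; the point is simply that the new appliance ends up with the source's identity, so other solutions see no change:

```python
def migrate(source_identity):
    """Illustrative model of the two-stage migration."""
    appliance = {"version": "6.5", "ip": "192.0.2.50"}  # stage 1: deploy on a temporary IP
    appliance.update(source_identity)                   # stage 2: import data and assume identity
    return appliance

source = {"fqdn": "vc01.corp.local", "ip": "10.0.0.10",
          "uuid": "a1b2c3d4", "moref_ids": ["vm-101", "host-9"]}
new_vc = migrate(source)
# The appliance keeps the source FQDN, IP, UUID, and MoRef IDs.
assert new_vc["fqdn"] == "vc01.corp.local" and new_vc["ip"] == "10.0.0.10"
```

Preserving the UUID and MoRef IDs is what keeps backup and monitoring tools from treating everything as net-new objects after the migration.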

Walkthroughs

The vCenter Server 6.0 Embedded Migration to Appliance walkthrough is available here. This guide shows how to migrate a Windows vCenter Server with the Platform Services Controller 6.0 components on a single virtual machine to a vCenter Server Appliance 6.5. Another feature walkthrough covering external migration, including vSphere Update Manager (VUM), will be available soon. In the meantime, go through the embedded migration and provide any feedback in the comments section below. Also feel free to reach out to me on Twitter @emad_younis.

About the Author

Emad Younis is a Staff Technical Marketing Architect and VCIX 6.5-DCV working in the Cloud Platform Business Unit, part of the R&D organization at VMware. He currently focuses on the vCenter Server Appliance, vCenter Server Migrations, and VMware Cloud on AWS. His responsibilities include generating content, evangelism, collecting product feedback, and presenting at events. Emad can be found blogging on emadyounis.com or on Twitter via @emad_younis.



How to backup and restore the embedded vCenter Server 6.0 vPostgres database

Posted on October 18, 2016 by Ramesh B S

This video demonstrates how to backup and restore an embedded vCenter Server 6.0 vPostgres database. Backing up your database protects the data stored in your database. Of course, restoring a backup is an essential part of that function.

This follows up on our recent blog & video: How to backup and restore the embedded vCenter Server Appliance 6.0 vPostgres database.

Note: The procedure in this video is supported only for backing up and restoring the vPostgres database to the same vCenter Server. Image-based backup and restore is the only supported solution for performing a full restore to a secondary appliance.




What’s New in vSphere 6.5: vCenter Server

Posted on October 18, 2016 by Charu Chaubal

Today VMware announced vSphere 6.5, which is one of the most feature-rich releases of vSphere in quite some time. The vCenter Server Appliance is taking charge in this release with several new features, which we’ll cover in this blog article. For starters, the installer has gotten an overhaul with a new, modern look and feel. Users of both Linux and Mac will also be ecstatic, since the installer is now supported on those platforms along with Microsoft Windows. If that wasn’t enough, the vCenter Server Appliance now has exclusive features such as:

  • Migration
  • Improved Appliance Management
  • VMware Update Manager
  • Native High Availability
  • Built-in Backup / Restore

We’ll also cover general improvements to vCenter Server 6.5, including the vSphere Web Client and the new HTML5-based vSphere Client.

Migration

[Figure: vCenter Server Appliance Migration]


Getting to the vCenter Server Appliance is no longer an issue, as the installer has a built-in Migration Tool. This Migration Tool has several improvements over the recently released vSphere 6.0 Update 2m. Windows vCenter Server 5.5 and 6.0 are now both supported, so if you’re currently running a Windows vCenter Server 6.0, this is your chance to get to the vCenter Server Appliance. vSphere 6.5 also improves the Migration Tool with more granular selection of migrated data, as follows:

  • Configuration
  • Configuration, events, and tasks
  • Configuration, events, tasks, and performance metrics

VMware Update Manager (VUM) is now part of the vCenter Server Appliance. This will be huge for customers who have been waiting to migrate to the vCenter Server Appliance without managing a separate Windows server for VUM. If you’ve already migrated to the vCenter Server Appliance 6.0, the upgrade process will migrate your VUM baselines and updates to the vCenter Server Appliance 6.5. During the migration process, the vCenter Server configuration, inventory, and alarm data is migrated by default.
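The three data-selection options in the list above are cumulative: each one includes everything the previous one migrates. A small illustrative mapping (the option names here are not the installer's internal identifiers):

```python
# Each migration data option as a set of what gets copied. Illustrative.
DATA_OPTIONS = {
    "configuration": {"configuration", "inventory", "alarms"},
    "config_events_tasks": {"configuration", "inventory", "alarms",
                            "events", "tasks"},
    "config_events_tasks_perf": {"configuration", "inventory", "alarms",
                                 "events", "tasks", "performance_metrics"},
}

# Each option is a superset of the one before it.
assert DATA_OPTIONS["configuration"] <= DATA_OPTIONS["config_events_tasks"]
assert DATA_OPTIONS["config_events_tasks"] <= DATA_OPTIONS["config_events_tasks_perf"]
```

Choosing less historical data shortens the migration window, since less has to be copied off the source Windows machine.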

Improved Appliance Management

Another exclusive feature of the vCenter Server Appliance 6.5 is its improved appliance management capabilities. The vCenter Server Appliance Management Interface continues its evolution and exposes additional health and configuration information. This simple user interface now shows network and database statistics, disk space, and health, in addition to CPU and memory statistics, which reduces the reliance on a command-line interface for simple monitoring and operational tasks.

[Figure: vCenter Server Appliance Management]

vCenter Server High Availability

vCenter Server 6.5 has a new native high availability solution that is available exclusively for the vCenter Server Appliance. This solution consists of Active, Passive, and Witness nodes, which are cloned from the existing vCenter Server. Failover within the vCenter HA cluster can occur when an entire node is lost (a host failure, for example) or when certain key services fail. For the initial release of vCenter HA, an RTO of about 5 minutes is expected, but this may vary slightly depending on the load, size, and capabilities of the underlying hardware.

[Figure: vCenter Server High Availability]
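A simplified way to think about the three-node design: the Passive node can take over only while it still has quorum with the Witness. The sketch below is a deliberate simplification of the failover condition, not the product's actual logic:

```python
def vcha_failover_possible(alive):
    """`alive` is the set of surviving nodes out of
    {"active", "passive", "witness"}. Failover to the Passive node
    requires the Active node to be down and both the Passive and
    Witness nodes to still be up (a 2-of-3 quorum). Simplified model."""
    return "active" not in alive and {"passive", "witness"} <= alive

# The host running the Active node dies: Passive + Witness fail over.
assert vcha_failover_possible({"passive", "witness"})
# Passive alone has no quorum, so it does not promote itself.
assert not vcha_failover_possible({"passive"})
```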

Backup and Restore

New in vCenter Server 6.5 is built-in backup and restore for the vCenter Server Appliance. This new out-of-the-box functionality enables customers to back up vCenter Server and Platform Services Controller appliances directly from the VAMI or API, and it also backs up VUM and Auto Deploy running embedded in the appliance. The backup consists of a set of files that is streamed to a storage device of the customer’s choosing using the SCP, HTTP(S), or FTP(S) protocols. Backup fully supports vCenter Server Appliances with embedded and external Platform Services Controllers. The restore workflow is launched from the same ISO from which the vCenter Server Appliance (or PSC) was originally deployed or upgraded.
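Since the backup target is specified as a URL using one of the supported protocols, a minimal validation sketch looks like this (illustrative only; the hostname and path are made up, and this is not the VAMI or its API):

```python
from urllib.parse import urlparse

# Protocols supported for the file-based backup target, per the text above.
SUPPORTED_SCHEMES = {"scp", "http", "https", "ftp", "ftps"}

def validate_backup_target(url):
    """Reject backup target URLs that use an unsupported protocol."""
    parts = urlparse(url)
    if parts.scheme not in SUPPORTED_SCHEMES:
        raise ValueError("unsupported backup protocol: %r" % parts.scheme)
    return parts.hostname, parts.path

host, path = validate_backup_target("scp://backups.corp.local/vcsa/nightly")
assert host == "backups.corp.local"
```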

vSphere Web Client

From a user interface perspective, probably the most used UI is the vSphere Web Client. This interface continues to be based on the Adobe Flex platform and requires Adobe Flash. However, VMware has continued to identify areas of improvement that will help the user experience until the Web Client is retired. Through several outreach efforts over the past year we’ve identified some high-value areas where we think customers are looking most for improvements. This small list of high-impact improvements will help the overall user experience with the vSphere Web Client while development continues on the HTML5-based vSphere Client:

  • Inventory tree is the default view
  • Home screen reorganized
  • Renamed “Manage” tab to “Configure”
  • Removed “Related Objects” tab
  • Performance improvements (VM Rollup at 5000 instead of 50 VMs)
  • Live refresh for power states, tasks, and more!

[Figure: vSphere Web Client]

vSphere Client

With vSphere 6.5, I’m excited to say that we have a fully supported version of the HTML5-based vSphere Client that runs alongside the vSphere Web Client. The vSphere Client is built right into vCenter Server 6.5 (both Windows and Appliance) and is enabled by default. While the vSphere Client doesn’t yet have full feature parity, the team has prioritized many of the day-to-day tasks of administrators and continues to seek feedback on what’s missing that will enable customers to use it full time. The vSphere Web Client will continue to be accessible via “https://<vcenter-fqdn>/vsphere-client”, while the vSphere Client will be reachable via “https://<vcenter-fqdn>/ui”. VMware will also periodically update the vSphere Client outside of the normal vCenter Server release cycle. To make sure it is easy and simple for customers to stay up to date, the vSphere Client can be updated without any effect on the rest of vCenter Server.

Now let’s take a look at some of the benefits to the new vSphere Client:

  • Clean, consistent UI built on VMware’s new Clarity UI standards (to be adopted across our portfolio)
  • Built on HTML5 so it is truly a cross-browser and cross-platform application
  • No browser plugins to install/manage
  • Integrated into vCenter Server for 6.5 and fully supported
  • Fully supports Enhanced Linked Mode
  • Users of the Fling have been extremely positive about its performance

[Figure: vSphere Client]

Conclusion

While we’ve covered quite a few features here, there are many more. We will be following up with detailed blog articles on several of these new features, which will be available by the time vSphere 6.5 reaches General Availability.

We hope you are as excited about this release as we are! Please post questions in the comments or reach out to Emad (@Emad_Younis) or Adam (@eck79) via Twitter.

To learn more about vSphere 6.5, please see the following resources.




VMware vCenter Server 6.0 Performance and Best Practices

Introduction

VMware vCenter Server™ 6.0 substantially improves performance over previous vCenter Server versions. This paper demonstrates the improved performance in vCenter Server 6.0 compared to vCenter Server 5.5, and shows that vCenter Server with the embedded vPostgres database now performs as well as vCenter Server with an external database, even at vCenter Server’s scale limits. This paper also discusses factors that affect vCenter Server performance and provides best practices for vCenter Server performance.

What’s New in vCenter Server 6.0

vCenter Server 6.0 brings extensive improvements in performance and scalability over vCenter Server 5.5:

  • Operational throughput is over 100% higher, and certain operations are over 80% faster.
  • VMware vCenter Server™ Appliance™ now has the same scale limits as vCenter Server on Windows with an external database: 1,000 ESXi hosts, 10,000 powered-on virtual machines, and 15,000 registered virtual machines.
  • VMware vSphere® Web Client performance has improved, with certain pages over 90% faster.

In addition, vCenter Server 6.0 provides new deployment options:

  • Both vCenter Server on Windows and VMware vCenter Server Appliance provide an embedded vPostgres database as an alternative to an external database. (vPostgres replaces the SQL Server Express option that was available in previous vCenter versions.)
  • The embedded vPostgres database supports vCenter’s full scale limits when used with the vCenter Server Appliance.

Performance Comparison with vCenter Server 5.5

In order to demonstrate and quantify performance improvements in vCenter Server 6.0, this section compares 6.0 and 5.5 performance at several inventory and workload sizes. In addition, this section compares vCenter Server 6.0 on Windows to the vCenter Server Appliance at different inventory sizes, to highlight the larger scale limits in the Appliance in vCenter 6.0. Finally, this section illustrates the performance gained by provisioning vCenter with additional resources.

The workload for this comparison uses vSphere Web Services API clients to simulate a self-service cloud environment with a large amount of virtual machine “churn” (that is, frequently creating, deleting, and reconfiguring virtual machines). Each client repeatedly issues a series of inventory management and provisioning operations to vCenter Server. Table 1 lists the operations performed in this workload. The operations listed here were chosen from a sampling of representative customer data. Also, the inventories in this experiment used vCenter features including DRS, High Availability, and vSphere Distributed Switch. (See Appendix A for precise details on inventory configuration.)

[Table 1: Operations performed in the performance comparison workload]

Results

Figure 3 shows vCenter Server operation throughput (in operations per minute) for the heaviest workload at each inventory size. Performance has improved considerably at all sizes. For example, for the large inventory setup (Figure 3, right), operational throughput has increased from just over 600 operations per minute in vCenter Server 5.5 to over 1,200 operations per minute in vCenter Server 6.0 for Windows: an improvement of over 100%. The other inventory sizes show similar gains in operational throughput.


Figure 3. vCenter throughput at several inventory sizes, with heavy workload (higher is better). Throughput has increased at all inventory sizes in vCenter Server 6.0.

Figure 4 shows median latency across all operations in the heaviest workload for each inventory size. Just as with operational throughput in Figure 3, latency has improved at all inventory sizes. For example, for the large inventory setup (Figure 4, right), median operational latency has decreased from 19.4 seconds in vCenter Server 5.5 to 4.0 seconds in vCenter Server Appliance 6.0: a decrease of about 80%. The other inventory sizes also show large decreases in operational latency.


Figure 4. vCenter Server median latency at several inventory sizes, with heavy workload (lower is better). Latency has decreased at all inventory sizes in vCenter 6.0.
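The quoted improvements are easy to sanity-check from the numbers in the text (600 to over 1,200 operations per minute, and 19.4 seconds down to 4.0 seconds):

```python
# Arithmetic check of the improvements quoted above.
throughput_gain = (1200 - 600) / 600    # Figure 3, large inventory
latency_drop = (19.4 - 4.0) / 19.4      # Figure 4, large inventory

assert throughput_gain >= 1.0           # at least a 100% improvement
assert 0.75 < latency_drop < 0.80       # roughly an 80% decrease
```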

Download

Download the full VMware vCenter Server 6.0 Performance and Best Practices technical white paper.




VMware vCenter Server™ 6.0 Deployment Guide

Introduction

The VMware vCenter Server™ 6.0 release introduces new, simplified deployment models. The components that make up a vCenter Server installation have been grouped into two types: embedded and external. Embedded refers to a deployment in which all components (which may, but does not necessarily, include the database) are installed on the same virtual machine. External refers to a deployment in which vCenter Server is installed on one virtual machine and the Platform Services Controller (PSC) is installed on another. The Platform Services Controller is new to vCenter Server 6.0 and comprises VMware vCenter™ Single Sign-On™, licensing, and the VMware Certificate Authority (VMCA).

Embedded installations are recommended for standalone environments in which there is only one vCenter Server system and replication to another Platform Services Controller is not required. If there is a need to replicate with other Platform Services Controllers or there is more than one vCenter Single Sign-On enabled solution, deploying the Platform Services Controller(s) on separate virtual machine(s)—via external deployment—from vCenter Server is required.

This paper describes the services installed as part of each deployment model, recommended deployment models (reference architectures), installation and upgrade instructions for each reference architecture, post-deployment steps, and certificate management in VMware vSphere 6.0.

VMware vCenter Server 6.0 Services


Figure 1 – vCenter Server and Platform Services Controller Services

Requirements

General
A few requirements are common to both installing vCenter Server on Microsoft Windows and deploying VMware vCenter Server Appliance™. Ensure that all of these prerequisites are in place before proceeding with a new installation or an upgrade.

  • DNS – Ensure that resolution is working for all system names via fully qualified domain name (FQDN), short name (host name), and IP address (reverse lookup).
  • Time – Ensure that time is synchronized across the environment.
  • Passwords – vCenter Single Sign-On passwords must contain only ASCII characters; non-ASCII and extended (or high) ASCII characters are not supported.

Windows Installation

Installing vCenter Server 6.0 on a Windows server requires a Windows 2008 SP2 or later 64-bit operating system (OS). Two service account options are presented: use the local system account, or use a Windows domain account. With a Windows domain account, ensure that it is a member of the local computer’s administrators group and that it has been delegated the “Log on as a service” right and the “Act as part of the operating system” right. The Windows domain account option is not available when installing an external Platform Services Controller.

Windows installations can use either a supported external database or a local PostgreSQL database that is installed with vCenter Server and is limited to 20 hosts and 200 virtual machines. Supported external databases include Microsoft SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, Oracle Database 11g, and Oracle Database 12c. When upgrading to vCenter Server 6.0, if SQL Server Express was used in the previous installation, it will be replaced with PostgreSQL. External databases require a 64-bit DSN. DSN aliases are not supported.

When upgrading to vCenter Server 6.0, only versions 5.0 and later are supported. If the vCenter Server system being upgraded is older than version 5.0, it must first be upgraded to version 5.0 or later.

Table 2 outlines minimum hardware requirements per deployment environment type and size when using an external database. If VMware vSphere Update Manager™ is installed on the same server, add 125GB of disk space and 4GB of RAM.


Table 2. Minimum Hardware Requirements – Windows Installation

Download

Download the full VMware vCenter Server™ 6.0 Deployment Guide.




What’s New in vSphere & vCenter 6.0

New Features in vSphere 6.0

Following the Hands-on Labs module HOL-SDC-1410:

Scalability – Configuration Maximums


The Configuration Maximums have increased across the board for vSphere Hosts in 6.0. Each vSphere Host can now support:

    • 480 Physical CPUs per Host
    • Up to 12TB of Physical Memory
    • 1000 VMs per Host
    • 64 Hosts per Cluster

Scalability – Virtual Hardware v11

This release of vSphere gives us Virtual Hardware v11. Some of the highlights include:

    • 128 vCPUs
    • 4 TB RAM
    • Hot-add RAM now vNUMA aware
    • WDDM 1.1 GDI acceleration features
    • xHCI 1.0 controller compatible with OS X 10.8+ xHCI driver
    • A virtual machine can now have a maximum of 32 serial ports
    • Serial and parallel ports can now be removed

Local ESXi Account and Password Management Enhancements


In the latest release of vSphere 6.0, we expand support for account management on ESXi Hosts.

New ESXCLI Commands:

    • CLI interface for managing ESXi local user accounts and permissions
    • Coarse grained permission management
    • ESXCLI can be invoked against vCenter instead of directly accessing the ESXi host.
    • Previously, the account and permission management functionality for ESXi hosts was available only with direct host connections.

Password Complexity

    • Previously, customers had to edit the file /etc/pam.d/passwd by hand; now password complexity can be managed through the VIM API via OptionManager.updateValues().
    • Advanced options can also be accessed through vCenter Server, so there is no need to make a direct host connection.
    • A PowerCLI cmdlet allows setting host advanced configuration options.

Account Lockout:

    • Security.AccountLockFailures – “Maximum allowed failed login attempts before locking out a user’s account. Zero disables account locking.”
    • Default: 10 tries
    • Security.AccountUnlockTime – “Duration in seconds to lock out a user’s account after exceeding the maximum allowed failed login attempts.”
    • Default: 2 minutes
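The interplay of the two advanced options above can be modeled in a few lines. This is an illustrative sketch of the documented behavior (lock after N failures, unlock after the configured number of seconds), not ESXi's actual implementation:

```python
# Defaults from the two advanced options described above.
LOCK_FAILURES = 10   # Security.AccountLockFailures; 0 disables locking
UNLOCK_TIME = 120    # Security.AccountUnlockTime, in seconds (2 minutes)

def is_locked(failed_attempts, seconds_since_last_failure):
    """Return True while the account is locked out. Illustrative model."""
    if LOCK_FAILURES == 0:
        return False  # account locking disabled
    return (failed_attempts >= LOCK_FAILURES
            and seconds_since_last_failure < UNLOCK_TIME)

assert not is_locked(9, 0)    # still below the failure threshold
assert is_locked(10, 30)      # locked out
assert not is_locked(10, 120) # lockout window has expired
```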

vCenter Server 6.0 – Platform Services Controller:


The Platform Services Controller (PSC) includes common services that are used across the suite.

    • These include SSO, licensing, and the VMware Certificate Authority (VMCA).
    • The PSC is the first piece that is either installed or upgraded. When upgrading, an SSO instance becomes a PSC.
    • There are two models of deployment: embedded and centralized.

    o Embedded means the PSC and vCenter Server are installed on a single virtual machine. Embedded is recommended for sites with a single SSO solution, such as a single vCenter Server.
    o Centralized means the PSC and vCenter Server are installed on different virtual machines. Centralized is recommended for sites with two or more SSO solutions, such as multiple vCenter Servers, vRealize Automation, etc. When deploying in the centralized model, it is recommended to make the PSC highly available so that it is not a single point of failure; in addition to utilizing vSphere HA, a load balancer can be placed in front of two or more PSCs to create a highly available PSC architecture.

PSCs and vCenter Servers can be mixed and matched, meaning you can deploy appliance-based PSCs alongside Windows-based PSCs, serving both Windows and appliance-based vCenter Servers. Any combination uses the PSC's built-in replication.

What’s New in vSphere 6.0 – Networking and Security

Networking in vSphere 6.0 has received some significant improvements, which have led to the following new vMotion capabilities:

    • Cross vSwitch vMotion
    • Cross vCenter vMotion
    • Long Distance vMotion
    • vMotion across Layer 3 boundaries

More detail on each of these follows as well as details on the improved Network I/O Control (NIOC) version 3.

Cross vSwitch vMotion

cross vSwitch vMotion.png
Cross vSwitch vMotion allows you to seamlessly migrate a VM across different virtual switches while performing a vMotion.

    • You are no longer restricted by the networks created on your vSwitches when you vMotion a virtual machine.
    • Requires the source and destination portgroups to share the same L2. The IP address within the VM will not change.
    • vMotion will work across a mix of switches (standard and distributed). Previously, you could only vMotion from vSS to vSS or within a single vDS. This limitation has been removed.

The following Cross vSwitch vMotion migrations are possible:

    • vSS to vSS
    • vSS to vDS
    • vDS to vDS
    • vDS to vSS is not allowed

Another added feature is that vDS to vDS migration transfers the vDS metadata to the destination vDS (network statistics).
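The matrix above reduces to a single rule: a migration is supported unless it moves from a vDS back to a vSS. A hypothetical helper (my own function, not a vSphere API) makes the rule explicit:

```python
def cross_vswitch_allowed(source: str, destination: str) -> bool:
    """Return True if a Cross vSwitch vMotion between the given switch
    types is supported in vSphere 6.0.

    Supported: vSS->vSS, vSS->vDS, vDS->vDS.
    Not supported: vDS->vSS (the vDS metadata cannot move to a standard switch).
    """
    return not (source == "vDS" and destination == "vSS")
```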

Cross vCenter vMotion

cross vCenter vMotion.png
Expanding on the Cross vSwitch vMotion enhancement, we are also excited to announce support for Cross vCenter vMotion.
vMotion can now perform the following changes simultaneously.

    • Change compute (vMotion) – Performs the migration of virtual machines across compute hosts
    • Change storage (Storage vMotion) – Performs the migration of the virtual machine disks across datastores
    • Change network (Cross vSwitch vMotion) – Performs the migration of a VM across different virtual switches, and finally…
    • Change vCenter (Cross vCenter vMotion) – Moves the VM to a different vCenter Server instance.

All of these types of vMotion are seamless to the guest OS. As with Cross vSwitch vMotion, Cross vCenter vMotion requires L2 network connectivity since the IP of the VM will not change. This functionality builds upon Enhanced vMotion, so shared storage is not required. Targeted support covers local (single site), metro (multiple well-connected sites), and cross-continental sites.

Long Distance vMotion

long distance vMotion.png
Long Distance vMotion is an extension of Cross vCenter vMotion, targeted at environments where vCenter Servers are spread across large geographic distances and the latency between sites is 100 ms or less. Although spread across a long distance, all the standard vMotion guarantees are honored.
This does not require VVOLs to work; a VMFS/NFS system will also work.
Use Cases:

    Requirements:
    • The requirements for Long Distance vMotion are the same as for Cross vCenter vMotion, with two additions: the latency between the source and destination sites must be 100 ms or less, and there must be 250 Mbps of available bandwidth.
    • To stress the point: the VM network will need to be a stretched L2, because the IP of the guest OS will not change. If the destination portgroup is not in the same L2 domain as the source, you will lose network connectivity to the guest OS. This means that in some topologies, such as metro or cross-continental, you will need a stretched L2 technology in place. The stretched L2 technology is not specified; any technology that can present the L2 network to the vSphere hosts will work, because ESXi is unaware of how the physical network is configured. Some examples of technologies that would work are VXLAN, NSX L2 Gateway Services, or GIF/GRE tunnels.
    • There is no defined maximum distance that will be supported as long as the network meets these requirements. Your mileage may vary, but you are eventually constrained by the laws of physics.
    • The vMotion network can now be configured to operate over an L3 connection.
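As a quick illustration (the function name is my own), the two added network requirements can be checked as:

```python
def meets_long_distance_vmotion_reqs(rtt_ms: float, bandwidth_mbps: float) -> bool:
    """Check the two Long Distance vMotion network requirements in vSphere 6.0:
    latency between sites of 100 ms or less, and at least 250 Mbps of
    available bandwidth. Illustrative sketch only."""
    return rtt_ms <= 100 and bandwidth_mbps >= 250
```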

Network I/O Control v3

network IO control V3.png
Network I/O Control version 3 allows administrators or service providers to reserve or guarantee bandwidth to a vNIC in a virtual machine or, at a higher level, to the distributed port group.
This ensures that virtual machines or tenants in a multi-tenancy environment don't impact the SLA of other virtual machines or tenants sharing the same upstream links.
Use Cases:
    • Allows private or public cloud administrators to guarantee bandwidth to business units or tenants. –> This is done at the vDS port group level.
    • Allows vSphere administrators to guarantee bandwidth to mission critical virtual machines. –> This is done at the VM (vNIC) level.
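NIOC's admission control is more involved than this, but the core idea – reservations sharing an uplink must not exceed its capacity – can be sketched as follows (a simplified model of the concept, not the NIOC API):

```python
def can_reserve(existing_reservations_mbps, new_reservation_mbps, uplink_capacity_mbps):
    """Admit a new vNIC or port-group bandwidth reservation only if the
    total reserved bandwidth stays within the physical uplink's capacity.
    Simplified illustration of reservation-based admission control."""
    total = sum(existing_reservations_mbps) + new_reservation_mbps
    return total <= uplink_capacity_mbps
```

For example, on a 10 GbE uplink with 2 Gbps and 3 Gbps already reserved, a further 4 Gbps reservation fits, but a 6 Gbps reservation would be rejected.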

What’s New in vSphere 6.0 Storage & Availability

vSphere 6 storage availability.png
At a high level, these are the new Storage & Availability features of vSphere 6.0.
You will find more details on some of the features below.

VMware Virtual Volumes

vmware virtual volumes.png
VVOLS changes the way storage is architected and consumed. Using external arrays without VVOLS, typically the LUN is the unit of both capacity and policy. In other words, you create LUNs with fixed capacity and fixed data services. Then, VMs are assigned to LUNs based on their data service needs. This can result in problems when a LUN with a certain data service runs out of capacity, while other LUNs still have plenty of room to spare. The effect of this is that typically admins overprovision their storage arrays, just to be on the safe side.
With VVOLS, it is totally different. Each VM is assigned its own storage policy, and all VMs use storage from the same common pool. Storage architects need only provision for the total capacity of all VMs, without worrying about different buckets with different policies. Moreover, the policy of a VM can be changed, and this doesn’t require that it be moved to a different LUN.

VVOLS – VASA Provider

vvol vasa provider.png
The VASA Provider is the component that exposes the storage services which a VVOLS array can provide. It also understands VASA APIs for operations such as the creation of virtual volume files. It can be thought of as the “control plane” element of VVOLS. A VASA Provider can be implemented in the firmware of an array, or it can run in a separate VM on the cluster that is accessing the VVOLS storage (e.g., as part of the array’s management server virtual appliance).

VVOLS – Storage Container (SC)

vvol storage container.png

A storage container is a logical construct for grouping Virtual Volumes. It is set up by the storage admin, and the capacity of the container can be defined. As mentioned before, VVOLS allows you to separate capacity management from policy management. Containers provide the ability to isolate or partition storage according to whatever need or requirement you may have. If you don’t want to have any partitioning, you could simply have one storage container for the entire array. The maximum number of containers depends upon the particular array model.

VVOLS – Storage Policy-Based Management

vvol storage policy-based management.png
Instead of being based on static, per-LUN assignment, storage policies with VVOLS are managed through the Storage Policy-Based Management framework of vSphere. This framework uses the VASA APIs to query the storage array about what data services it offers, and then exposes them to vSphere as capabilities. These capabilities can then be grouped together into rules and rulesets, which are then assigned to VMs when they get deployed. When configuring the array, the storage admin can choose which capabilities to expose or not expose to vSphere.
To get more detailed information on VVOLS, consider taking HOL-SDC-1429 – Virtual Volumes (VVOLS) Setup and Enablement.

vSphere 6.0 Fault Tolerance

vsphere 6.0 FT.png
The benefits of Fault Tolerance are:

    • Protect mission critical, high performance applications regardless of OS
    • Continuous availability – Zero downtime, zero data loss for infrastructure failures
    • Fully automated response

The new version of Fault Tolerance greatly expands the use cases for FT to approximately 90% of workloads with these new features:

    • Enhanced virtual disk support – Now supports any disk format (thin, thick or EZT)
    • Now supports hot configuration of FT – no longer required to power off the VM to enable FT
    • Greatly increased FT host compatibility – If you can vMotion a VM between hosts you can use FT

The new technology used by FT is called Fast Checkpointing and is basically a heavily modified version of an xvMotion (cross-vCenter vMotion) that never ends and executes many more checkpoints (multiple/sec).
FT logging (traffic between the hosts where the primary and secondary VMs run) is very bandwidth intensive, so a dedicated 10 GbE NIC on each host is highly recommended, though not strictly required; at a minimum, an FT-protected VM will consume noticeably more bandwidth. If FT does not get the bandwidth it needs, the protected VM will run slower.

vSphere FT 6.0 New Capabilities

vSphere FT 6.0 new capability.png
DRS is supported for initial placement of VMs only.

Backing Up FT VMs

backing up FT vms.png
FT VMs can now be backed up using standard backup software, the same as all other VMs (FT VMs could always be backed up using agents). They are backed up using snapshots through VADP.
Snapshots are not user-configurable – users cannot take snapshots of FT VMs themselves. Snapshots are supported only as part of VADP.

Availability – vSphere Replication

availability - vSphere replication.png
The following features are new in vSphere Replication (VR) 6.0:

    • Compression can be enabled when configuring replication for a VM. It is disabled by default.
    • Updates are compressed at source (vSphere host) and stay compressed until written to storage. This does cost some CPU cycles on source host (compress) and target storage host (decompress).
    • Uses FastLZ compression libraries. Fast LZ provides a nice balance between performance, compression, and limited overhead (CPU).
    • Typical compression ratio is 1.7 to 1.
    • Best results are achieved when using vSphere 6.0 at both source and target along with the vSphere Replication (VR) 6.0 appliance(s). Other configurations are supported – for example, with a vSphere 6.0 source and a vSphere 5.5 target, the vSphere Replication Server (VRS) must decompress packets internally (costing VR appliance CPU cycles) before writing to storage.
    • With VR 6.0, VR traffic can be isolated from other vSphere host traffic.
    • At source, a NIC can be specified for VR traffic. NIOC can be used to control replication bandwidth utilization.
    • At target, VR appliances can have multiple vmnics with separate IP addresses to separate incoming replication traffic, management traffic, and NFC traffic to target host(s).
    • At target, NIC can be specified for incoming NFC traffic that will be written to storage.
    • The user must, of course, set up the appropriate network configuration (vSwitches, VLANs, etc.) to separate traffic into isolated, controllable flows.
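A back-of-the-envelope sketch (my own helper, not a VMware tool) of what the typical 1.7:1 ratio quoted above means for replication bandwidth:

```python
def compressed_rate_mbps(uncompressed_mbps: float, ratio: float = 1.7) -> float:
    """Estimate replication traffic on the wire after compression, given a
    compression ratio (1.7:1 is the typical figure quoted for VR 6.0)."""
    return uncompressed_mbps / ratio
```

For instance, a VM generating 170 Mbps of replication updates would send roughly 100 Mbps over the wire with compression enabled, at the cost of CPU cycles on the source host and the target side.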

VMware Tools in vSphere 6.0 includes a “freeze/thaw” mechanism for quiescing certain Linux distributions at the file system level for improved recovery reliability. See the vSphere documentation for specifics on supported Linux distributions.

Learn More

For information on upgrading to vSphere 6.0, visit the vSphere Upgrade Center

• vSphere is available standalone or as a part of vSphere with Operations Management or vCloud Suite. For more information, visit vSphere with Operations Management or vCloud Suite.


May 23

Location of vCenter Server log files (KB: 1021804)

Purpose

This article provides the default location of the vCenter Server logs.

Resolution

The vCenter Server logs are placed in a different directory on disk depending on vCenter Server version and the deployed platform:

  • vCenter Server 5.x and earlier versions on Windows XP, 2000, 2003: %ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\Logs\
  • vCenter Server 5.x and earlier versions on Windows Vista, 7, 2008: C:\ProgramData\VMware\VMware VirtualCenter\Logs\
  • vCenter Server 5.x Linux Virtual Appliance: /var/log/vmware/vpx/
  • vCenter Server 5.x Linux Virtual Appliance UI: /var/log/vmware/vami

    Note: If the service is running under a specific user, the logs may be located in the profile directory of that user instead of %ALLUSERSPROFILE%.
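For scripting purposes, the default locations above can be expressed as a small lookup (the platform keys are my own labels for this sketch, not VMware terms):

```python
def default_vpx_log_dir(platform: str) -> str:
    """Return the default vCenter Server log directory for a given platform,
    per the list above (vCenter Server 5.x and earlier)."""
    locations = {
        "windows_xp_2000_2003": r"%ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\Logs",
        "windows_vista_7_2008": r"C:\ProgramData\VMware\VMware VirtualCenter\Logs",
        "vcsa": "/var/log/vmware/vpx",
        "vcsa_ui": "/var/log/vmware/vami",
    }
    return locations[platform]
```

Note that on Windows the `%ALLUSERSPROFILE%` form still needs environment-variable expansion, and, as mentioned above, the logs may instead live under a specific service user's profile directory.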

vCenter Server logs are grouped by component and purpose:

  • vpxd.log: The main vCenter Server logs, consisting of all vSphere Client and WebServices connections, internal tasks and events, and communication with the vCenter Server Agent (vpxa) on managed ESX/ESXi hosts.
  • vpxd-profiler.log, profiler.log and scoreboard.log: Profiled metrics for operations performed in vCenter Server. Used by the VPX Operational Dashboard (VOD) accessible at https://VCHostnameOrIPAddress/vod/index.html.
  • vpxd-alert.log: Non-fatal information logged about the vpxd process.
  • cim-diag.log and vws.log: Common Information Model monitoring information, including communication between vCenter Server and managed hosts’ CIM interface.
  • drmdump\: Actions proposed and taken by VMware Distributed Resource Scheduler (DRS), grouped by the DRS-enabled cluster managed by vCenter Server. These logs are compressed.
  • ls.log: Health reports for the Licensing Services extension, connectivity logs to vCenter Server.
  • vimtool.log: Dump of strings used during the installation of vCenter Server, with hashed information for DNS, the username, and output from JDBC creation.
  • stats.log: Provides information about the historical performance data collection from the ESXi/ESX hosts.
  • sms.log: Health reports for the Storage Monitoring Service extension, connectivity logs to vCenter Server, the vCenter Server database and the xDB for vCenter Inventory Service.
  • eam.log: Health reports for the ESX Agent Monitor extension, connectivity logs to vCenter Server.
  • catalina.<date>.log and localhost.<date>.log: Connectivity information and status of the VMware Webmanagement Services.
  • jointool.log: Health status of the VMwareVCMSDS service and individual ADAM database objects, internal tasks and events, and replication logs between linked-mode vCenter Servers.
  • Additional log files:
    • manager.<date>.log
    • host-manager.<date>.log
Note: As each log grows, it is rotated over a series of numbered component-nnn.log files. On some platforms, the rotated logs are compressed.

vCenter Server logs can be viewed from:

  • The vSphere Client connected to vCenter Server 4.0 and higher – Click Home > Administration > System Logs.
  • The Virtual Infrastructure Client connected to VirtualCenter Server 2.5 – Click Administration > System Logs.
  • From the vSphere 5.1 and 5.5 Web Client – Click Home > Log Browser, then from the Log Browser, click Select object now, choose an ESXi host or vCenter Server object, and click OK.