This video shows how to use the VMware vSphere web client to configure resource pools within a DRS cluster and how to add virtual machines to the resource pools using vSOM.
In this video you will learn about the available load balancing algorithms in vSphere 6.0.
Senior Staff Engineer Peter Shepherd discusses privileges, roles, and permissions, and demonstrates how to create a virtual machine administrator role in the vSphere Web Client.
VMware vSphere® virtual machine encryption (VM encryption) is a feature introduced in vSphere 6.5 to enable the encryption of virtual machines. VM encryption secures VMDK data by encrypting I/Os from a virtual machine (which has the VM encryption feature enabled) before they are stored in the VMDK. In this paper, we quantify the impact of using VM encryption on a VM’s I/O performance as well as on some VM provisioning operations like VM clone, power-on, and snapshot creation. We show that while VM encryption can lead to bottlenecks in I/O throughput and latency for ultra-high-performance devices (like a high-end NVMe drive) that can support hundreds of thousands of IOPS, for most regular types of storage, like enterprise-class SSD or VMware vSAN™, the impact on I/O performance is minimal.
VM encryption supports the encryption of virtual machine files, virtual disk files, and core dump files. Some files associated with a virtual machine, such as log files, VM configuration files, and virtual disk descriptor files, are not encrypted, because they mostly contain non-sensitive data, and operations like disk management should be supported whether or not the underlying disk files are secured. VM encryption uses vSphere APIs for I/O filtering (VAIO), henceforth referred to as IOFilter.
IOFilter is an ESXi framework that allows the interception of VM I/Os in the virtual SCSI emulation (VSCSI) layer. At a high level, the VSCSI layer can be thought of as the layer in ESXi just below the VM and above the VMFS file system. The IOFilter framework enables developers, both at VMware and at third-party vendors, to write filters that implement additional services on VM I/Os, such as encryption, caching, and replication. The framework is implemented entirely in user space, which cleanly isolates VM I/Os from the core architecture of ESXi and eliminates any potential impact on the core functionality of the hypervisor; in case of a failure, only the VM in question is affected. Multiple filters can be enabled for a particular VM or VMDK. These filters are chained so that I/Os are processed by each filter serially, one after the other, and then either passed down to VMFS or completed within one of the filters. This is illustrated in Figure 1.
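The serial chaining of filters described above follows a classic chain-of-responsibility pattern. The sketch below illustrates that pattern only; the class and method names are hypothetical, and the real VAIO framework has a very different (C-based) interface:

```python
# Illustrative sketch of serial I/O filter chaining. Names are
# hypothetical; this is NOT the actual VAIO API.

class IOFilter:
    """Base filter: may transform an I/O or complete it itself."""
    def __init__(self, next_filter=None):
        self.next = next_filter

    def process(self, io):
        # Default: pass the I/O down the chain unchanged.
        return self.next.process(io) if self.next else io

class CachingFilter(IOFilter):
    def __init__(self, next_filter=None):
        super().__init__(next_filter)
        self.cache = {}

    def process(self, io):
        # Record the block, then pass the I/O down the chain.
        self.cache[io["lba"]] = io["data"]
        return super().process(io)

class EncryptionFilter(IOFilter):
    def process(self, io):
        # Stand-in "encryption": reverse the bytes before passing down.
        io = dict(io, data=io["data"][::-1])
        return super().process(io)

# Chain: VM I/O -> caching -> encryption -> "VMFS" (end of chain)
chain = CachingFilter(EncryptionFilter())
result = chain.process({"lba": 42, "data": b"hello"})
```

Each filter sees the I/O exactly once and either hands it to the next filter or finishes it, which is why a failure in one filter is contained to the VM whose I/O is being processed.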
VM Encryption Overview
The primary purpose of VM encryption is to secure the data in VMDKs, such that any unauthorized entity that accesses the VMDK data gets only meaningless data. The VM that legitimately owns the VMDK has the necessary key to decrypt the data whenever it is read, before it is fed to the guest operating system. Industry-standard encryption algorithms are used to secure this traffic with minimal overhead.
While VM encryption does not impose any new hardware requirements, using a processor that supports the AES-NI instruction set speeds up the encryption/decryption operation. To quantify the performance expectations on a traditional server without an AES-NI capable processor, the results in this paper were gathered on slightly older servers that do not support the AES-NI instruction set.
Figure 2 shows the various components involved as part of the VM encryption mechanism. It consists of an external key management server (KMS), the vCenter Server system, and an ESXi host or hosts. vCenter Server requests keys from an external KMS, which generates and stores the keys and passes them down to vCenter Server for distribution. An important aspect to note is that there is no “per-block hashing” for the virtual disk.
This means VM encryption protects data against snooping, not against corruption: there is no hash for detecting corruption and recovering from it. For additional security, the encryption takes into account not only the encryption key but also the block’s address, so two blocks of a VMDK with the same content encrypt to different data.
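The effect of mixing the block address into the encryption can be illustrated with a short, stdlib-only sketch. This is purely illustrative: it derives a per-block keystream from a hash for demonstration, whereas VM encryption uses a hardware-acceleratable AES cipher:

```python
import hashlib

def encrypt_block(key: bytes, block_addr: int, plaintext: bytes) -> bytes:
    # Derive a keystream from the key AND the block address, then XOR.
    # Illustrative only: the real feature uses AES, not a hash-based stream.
    keystream = hashlib.sha256(key + block_addr.to_bytes(8, "big")).digest()
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

key = b"example-key"          # placeholder; real keys come from the KMS
same_data = b"identical block!"

c0 = encrypt_block(key, 0, same_data)  # block at address 0
c1 = encrypt_block(key, 1, same_data)  # same content, address 1
# c0 != c1: identical plaintext blocks yield different ciphertext,
# so an attacker cannot spot repeated content across the VMDK.
```

Because the transform is an XOR with an address-derived keystream, applying `encrypt_block` again with the same key and address recovers the plaintext, mirroring how decryption reuses the key plus block address on the read path.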
To visualize the mechanism of encryption (and decryption), we need to look at how the various elements in the security policy are laid out topologically. The KMS is the central server in this security-enabled landscape. Figure 3 shows a simplified topology.
The KMS is a secure, centralized repository of cryptographic keys. More than one KMS can be configured with a vCenter Server; however, only KMSs that replicate keys among themselves (usually from the same vendor) should be added to the same KMS cluster, and any other KMS should be added under a different KMS cluster. One of the KMS clusters must be designated as the default in vCenter Server. Only Key Management Interoperability Protocol (KMIP) v1.1-compliant KMSs are supported, with vCenter Server acting as the KMIP client. Using KMIP enables vCenter Server to talk to any KMIP-compliant KMS vendor. Before transacting with the KMS, vCenter Server must establish a trusted connection with it, which must be done manually.
Download the full VMware vSphere Virtual Machine Encryption Performance in vSphere 6.5 guide.
These releases continue to accelerate digital transformation for organizations through the most critical IT use cases – Security, Automation, and Application Continuity – while expanding support for new application frameworks and architectures.
As more and more customers adopt NSX for vSphere, we continue to add features to make it easier for you to deploy, operate, and scale out your environment. NSX empowers customers on their cloud journey. It is driving value inside the data center today and expanding across data centers and to the cloud via our vCloud Air Network partnerships, and soon to VMware Cloud on AWS and native public cloud workloads via VMware Cross-Cloud Services.
Let’s take a look at some of the new features in NSX for vSphere 6.3:
Some of the new capabilities delivered in NSX for vSphere 6.3 are the Application Rule Manager (available in NSX Advanced and Enterprise editions) and Endpoint Monitoring (available in NSX Enterprise Edition).
Application Rule Manager simplifies the way you create security groups and firewall rules for applications based on their real-time network traffic flows. Endpoint Monitoring enables you to profile applications inside the guest including visibility into specific application processes and their associated network connections. Used together, you have end-to-end visibility of your applications and simplified firewall rule creation to help operationalize micro-segmentation even faster and more effectively than ever before.
Keep an eye out on the Security section of the NSX blog over the next few weeks for technical deep-dives into exactly how these Application Rule Manager and Endpoint Monitoring features work.
Our product certifications team was busy in 2016 and intends to deliver additional certifications throughout 2017. They have been working hard on guiding our development efforts and ensuring a number of key security and compliance enhancements made their way into the NSX for vSphere 6.3 release. In 2016, Coalfire, an independent cyber risk management advisor and assessor, certified that VMware NSX for vSphere meets regulatory compliance requirements such as PCI DSS. NSX was also the first software-defined networking solution to have the Defense Information Systems Agency (DISA) Risk Management Executive publish a Security Technical Implementation Guide (STIG), signifying that the solution meets the security hardening guidance required for installment on Department of Defense (DoD) networks. Watch the blog Security section in the coming months for updates on certifications related to ICSA Labs, FIPS 140-2 and Common Criteria EAL-2 certification.
When I meet with customers, they continue to tell me that NSX has the most transformative impact on their organizations once they begin automating their manual networking and security processes. It’s not easy, and it requires organizational, people, and process changes, but the value NSX brings to the organization is huge. To help support this, we continue to enhance the automation capabilities in NSX for vSphere 6.3. We have enhanced the integration of NSX load balancers within vRealize Automation and added support for third-party IP Address Management (IPAM) systems for on-demand routed networks. We have also enhanced the integration between NSX for vSphere and vCloud Director, enabling new multi-tenant capabilities for our vCloud Air Network partners and adding support for emerging NFV use cases.
Figure. Screenshot of Load Balancing integration into vRealize Automation blueprints.
Multi-tenancy is often thought about as something only service providers care about, but we’re seeing increased demand from non-service providers looking to operate in more of a service provider model in the way they deliver services to their organization. The University of New Mexico is a great example of this, where they are collapsing their disaggregated IT from dozens of departments back to a centralized IT model, reducing provisioning time for new workloads and services from 3 weeks down to 20 minutes!
As NSX continues to mature and adoption becomes mainstream, we are seeing customers deploy NSX for a range of different use cases. AeroData Inc., for example, is leveraging the network overlay capabilities in NSX to create a highly-available, Active-Active data center architecture. In NSX for vSphere 6.3, we have further enhanced the security tagging capabilities in multi-vCenter deployments, simplifying security policy management at scale across multiple data centers. (Read more about multi-site with cross-vCenter NSX.)
Emerging use-cases: Containers and Remote Office Branch Office (ROBO)
With NSX for vSphere 6.3, we are helping to further improve the developer experience with containers via integration with the recently announced vSphere Integrated Containers (VIC). As VIC is built on vSphere 6.5, you can leverage NSX for vSphere 6.3 to connect and secure VIC infrastructure, enabling you to deliver a secure container environment on demand for developers.
Another addition in the NSX for vSphere 6.3 release is a new NSX for ROBO edition SKU. With it, NSX provides a comprehensive network and security policy solution for environments across remote and branch offices, reducing the operational costs of branch connectivity and maintenance. In upcoming blog posts, we will share more details about the NSX for ROBO features, use cases, and customer success stories, as we have been seeing keen interest from our customers in this space.
Expanded support for new platforms with NSX-T: KVM, OpenStack
Let’s now look at VMware’s other NSX platform – NSX-T 1.1 – and some of the new capabilities being delivered in this latest release.
VMware NSX-T is focused on emerging application frameworks and architectures that have heterogeneous endpoints and technology stacks. In addition to vSphere hypervisors, these environments may also include other hypervisors, containers, bare metal, and public clouds. NSX-T allows IT and development teams to choose the technologies best suited for their particular applications. NSX-T is also designed for management, operations, and consumption by development organizations – in addition to IT.
NSX-T 1.1 offers expanded support for multiple KVM distributions, including Canonical Ubuntu and Red Hat Enterprise Linux. NSX-T starts at the source of the application, within the hypervisor kernel, delivering optimal security granularity and line-rate performance. NSX-T delivers distributed firewalling, logical switching, and distributed routing.
NSX-T 1.1 also delivers support for private IaaS clouds based on OpenStack. With this release, NSX-T supports the latest versions of OpenStack, i.e., Newton and Mitaka. In addition to using the OpenStack APIs, development teams can also use Puppet, Chef, and Terraform to describe and automate the networking and security for their application workloads within an OpenStack environment.
Support for new app frameworks: Photon and Container Networking Interface (CNI)
NSX-T is integrated with the VMware Photon Platform. This capability allows IT to offer virtual networking and security as services to developers building and running containerized, cloud-native applications. NSX will auto-create and scale networks and routers when a new namespace/project/organization is created, and define and enforce micro-segmentation security policies for containers and pods. (Read more about Photon Platform and NSX-T.)
Currently in beta, the NSX-T Container Networking Interface (CNI) plugin will allow developers to configure network connectivity for their application containers, helping deliver developer-ready infrastructure.
Pricing and Packaging
Though not a new NSX feature, we are also excited to announce changes to our VMware NSX pricing and packaging.
Starting today, customers who purchase VMware NSX have the option of downloading and installing either platform, and should your needs change, you can switch between the two without having to re-purchase NSX.
As mentioned earlier, with NSX for vSphere 6.3, we have introduced a new NSX for ROBO (Remote Office Branch Office) packaging option. For those of you familiar with the vSphere for ROBO and vSAN for ROBO offerings, NSX for ROBO is packaged in the same way.
In last week’s Q4 VMware earnings call, Pat Gelsinger mentioned that NSX is an essential element to VMware Cloud Foundation, Cross-Cloud Services and VMware Cloud on AWS. With both NSX for vSphere and NSX-T, NSX intends to be everywhere in the containerized, multi-cloud future. NSX becomes the bridge that enables customers to unify networking and security across their private and public clouds.
What You Can Do Now
- Get started with a Beginner or Advanced NSX Hands-On-Lab (HOL)
- VMware product page, customer stories, and technical resources
- VMware NSX YouTube Channel, including 40+ Light Board videos!
- Contact your VMware sales representative for an overview and demonstration of NSX for vSphere or NSX-T
Matt De Vincentis
VMware vCenter Server™ 6.0 substantially improves performance over previous vCenter Server versions. This paper demonstrates the improved performance in vCenter Server 6.0 compared to vCenter Server 5.5, and shows that vCenter Server with the embedded vPostgres database now performs as well as vCenter Server with an external database, even at vCenter Server’s scale limits. This paper also discusses factors that affect vCenter Server performance and provides best practices for vCenter Server performance.
What’s New in vCenter Server 6.0
vCenter Server 6.0 brings extensive improvements in performance and scalability over vCenter Server 5.5:
- Operational throughput is over 100% higher, and certain operations are over 80% faster.
- VMware vCenter Server™ Appliance™ now has the same scale limits as vCenter Server on Windows with an external database: 1,000 ESXi hosts, 10,000 powered-on virtual machines, and 15,000 registered virtual machines.
- VMware vSphere® Web Client performance has improved, with certain pages over 90% faster.
In addition, vCenter Server 6.0 provides new deployment options:
- Both vCenter Server on Windows and VMware vCenter Server Appliance provide an embedded vPostgres database as an alternative to an external database. (vPostgres replaces the SQL Server Express option that was available in previous vCenter versions.)
- The embedded vPostgres database supports vCenter’s full scale limits when used with the vCenter Server Appliance.
Performance Comparison with vCenter Server 5.5
In order to demonstrate and quantify performance improvements in vCenter Server 6.0, this section compares 6.0 and 5.5 performance at several inventory and workload sizes. In addition, this section compares vCenter Server 6.0 on Windows to the vCenter Server Appliance at different inventory sizes, to highlight the larger scale limits in the Appliance in vCenter 6.0. Finally, this section illustrates the performance gained by provisioning vCenter with additional resources.
The workload for this comparison uses vSphere Web Services API clients to simulate a self-service cloud environment with a large amount of virtual machine “churn” (that is, frequently creating, deleting, and reconfiguring virtual machines). Each client repeatedly issues a series of inventory management and provisioning operations to vCenter Server. Table 1 lists the operations performed in this workload. The operations listed here were chosen from a sampling of representative customer data. Also, the inventories in this experiment used vCenter features including DRS, High Availability, and vSphere Distributed Switch. (See Appendix A for precise details on inventory configuration.)
Figure 3 shows vCenter Server operation throughput (in operations per minute) for the heaviest workload for each inventory size. Performance has improved considerably at all sizes. For example, for the large inventory setup (Figure 3, right), operational throughput has increased from just over 600 operations per minute in vCenter Server 5.5 to over 1,200 operations per minute in vCenter Server 6.0 for Windows: an improvement of over 100%.
The other inventory sizes show similar gains in operational throughput.
Figure 4 shows median latency across all operations in the heaviest workload for each inventory size. Just as with operational throughput in Figure 3, latency has improved at all inventory sizes. For example, for the large inventory setup (Figure 4, right), median operational latency has decreased from 19.4 seconds in vCenter Server 5.5 to 4.0 seconds in vCenter Server Appliance 6.0: a decrease of about 80%. The other inventory sizes also show large decreases in operational latency.
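The improvement figures quoted above can be checked with simple percentage arithmetic (the 600 and 1,200 operations-per-minute values are the rounded numbers from the text):

```python
def pct_increase(old: float, new: float) -> float:
    """Percentage gain of new over old."""
    return (new - old) / old * 100

def pct_decrease(old: float, new: float) -> float:
    """Percentage drop from old to new."""
    return (old - new) / old * 100

# Throughput: just over 600 -> over 1,200 ops/min
throughput_gain = pct_increase(600, 1200)   # 100.0 -> "over 100%"

# Median latency: 19.4 s (5.5) -> 4.0 s (Appliance 6.0)
latency_drop = pct_decrease(19.4, 4.0)      # ~79.4 -> "about 80%"
```

Working from the exact published values rather than the rounded ones would shift these percentages slightly, but the headline claims hold either way.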
Download a full VMware vCenter Server 6.0 Performance and Best Practices Technical White Paper
The VMware vCenter Server™ 6.0 release introduces new, simplified deployment models. The components that make up a vCenter Server installation have been grouped into two types: embedded and external. Embedded refers to a deployment in which all components (which may, but need not, include the database) are installed on the same virtual machine. External refers to a deployment in which vCenter Server is installed on one virtual machine and the Platform Services Controller (PSC) is installed on another. The Platform Services Controller is new to vCenter Server 6.0 and comprises VMware vCenter™ Single Sign-On™, licensing, and the VMware Certificate Authority (VMCA).
Embedded installations are recommended for standalone environments in which there is only one vCenter Server system and replication to another Platform Services Controller is not required. If there is a need to replicate with other Platform Services Controllers or there is more than one vCenter Single Sign-On enabled solution, deploying the Platform Services Controller(s) on separate virtual machine(s)—via external deployment—from vCenter Server is required.
This paper describes the services installed as part of each deployment model, recommended deployment models (reference architectures), installation and upgrade instructions for each reference architecture, post-deployment steps, and certificate management in VMware vSphere 6.0.
VMware vCenter Server 6.0 Services
A few requirements are common to both installing vCenter Server on Microsoft Windows and deploying VMware vCenter Server Appliance™. Ensure that all of these prerequisites are in place before proceeding with a new installation or an upgrade.
- DNS – Ensure that resolution is working for all system names via fully qualified domain name (FQDN), short name (host name), and IP address (reverse lookup).
- Time – Ensure that time is synchronized across the environment.
- Passwords – vCenter Single Sign-On passwords must contain only ASCII characters; non-ASCII and extended (or high) ASCII characters are not supported.
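The DNS prerequisite above (forward and reverse resolution) can be spot-checked with a short stdlib script. This is an informal sketch, not a VMware tool; `"localhost"` is used only so the example resolves anywhere, and you would substitute your own FQDNs and short names:

```python
import socket

def check_dns(name: str) -> dict:
    """Spot-check forward and reverse resolution for one host name.

    Raises socket.gaierror / socket.herror if either lookup fails,
    which is exactly the condition to fix before installing.
    """
    ip = socket.gethostbyname(name)            # forward lookup
    reverse, _, _ = socket.gethostbyaddr(ip)   # reverse (PTR) lookup
    return {"name": name, "ip": ip, "reverse": reverse}

# Placeholder host; in a real environment you would check names like
# "vcenter01.corp.example.com" and the short name "vcenter01".
result = check_dns("localhost")
```

Running this for every system name before starting the installer catches the most common cause of failed vCenter Server deployments.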
Installing vCenter Server 6.0 on a Windows Server requires a Windows 2008 SP2 or higher 64-bit operating system (OS). Two options are presented: Use the local system account or use a Windows domain account. With a Windows domain account, ensure that it is a member of the local computer’s administrator group and that it has been delegated the “Log on as a service” right and the “Act as part of the operating system” right. This option is not available when installing an external Platform Services Controller.
Windows installations can use either a supported external database or a local PostgreSQL database that is installed with vCenter Server and is limited to 20 hosts and 200 virtual machines. Supported external databases include Microsoft SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, Oracle Database 11g, and Oracle Database 12c. When upgrading to vCenter Server 6.0, if SQL Server Express was used in the previous installation, it will be replaced with PostgreSQL. External databases require a 64-bit DSN. DSN aliases are not supported.
When upgrading vCenter Server to vCenter Server 6.0, only versions 5.0 and later are supported. If the vCenter Server system being upgraded is older than version 5.0, it must first be upgraded to version 5.0 or later.
Table 2 outlines minimum hardware requirements per deployment environment type and size when using an external database. If VMware vSphere Update Manager™ is installed on the same server, add 125GB of disk space and 4GB of RAM.
Download a full VMware vCenter Server™ 6.0 Deployment Guide
VMware just recently released Update 2 for vSphere 6.0. Update 2 is full of new features and bug fixes for both ESXi and vCenter Server. For a complete list of features and bug fixes, make sure to review the release notes for ESXi and vCenter Server. A few features stood out to me in this update. The Embedded Host Client is now integrated into ESXi and fully supported as of Update 2. VSAN 6.2 is feature rich, with everything but the kitchen sink in this release. Two-factor authentication support for the vSphere Web Client is now available in the PSC UI. Here’s a breakdown of what’s new in vSphere 6.0 Update 2.
VMware Embedded Host Client (EHC)
The Embedded Host Client (EHC) started out as a Fling and is now a supported product in vSphere 6.0 Update 2. The EHC is installed as part of ESXi 6.0 U2 and provides the ability to manage any ESXi host using a web browser. After a host is installed with or upgraded to 6.0 U2, open a web browser and enter https://<FQDN or IP of host>/ui. More information on the Embedded Host Client can be found by reviewing the release notes.
Virtual SAN 6.2 (VSAN)
Note: VSAN is a separate product and is licensed separately
If you thought this update couldn’t get any bigger, think again. Virtual SAN 6.2 is here and jam-packed with new features. This release of VSAN now supports compression and deduplication: when enabled on a disk group, redundant copies of data are reduced to a single copy. There are also new services related to performance, space savings, and the health of the cluster. The Health service monitors the VSAN cluster for issues and provides diagnostics. The Performance service collects and analyzes performance statistics, from the cluster level down to the disk level. Want space savings reports? Those are included too: space reporting displays used and free space with a detailed breakdown. These are just a few of the new features in Virtual SAN 6.2. For more information, check out the Virtual Blocks blog.
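The core idea behind deduplication (redundant copies reduced to one) is content addressing: store each unique block once, keyed by a hash of its contents, and hand out references. The toy sketch below shows the concept only; VSAN's actual on-disk implementation is very different:

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: identical blocks stored once.
    Conceptual illustration only, not VSAN's implementation."""

    def __init__(self):
        self.blocks = {}    # digest -> data (one physical copy each)
        self.refcount = {}  # digest -> number of logical references

    def write(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:
            self.blocks[digest] = data  # first copy: store physically
        self.refcount[digest] = self.refcount.get(digest, 0) + 1
        return digest                   # logical pointer to the block

store = DedupStore()
# Three logical writes of the same block...
refs = [store.write(b"same 4K block contents") for _ in range(3)]
# ...result in a single physical copy with a refcount of 3.
```

The space savings a real system reports are essentially the ratio of logical references to physical copies, which is why duplicate-heavy workloads (VDI clones, for example) benefit most.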
- What’s New – VMware Virtual SAN 6.2
- Virtual SAN 6.2 Certification & Compatibility Guide Updates
- Virtually Speaking Podcast Episode 3 – VSAN 6.2
- 3 Reasons Why Storage Field Day 9 Was the Best One Yet!
vSphere APIs for I/O filtering (VAIO) Enhancement
vSphere 6.0 Update 2 also includes updates to vSphere APIs for I/O filtering (VAIO). If you are not familiar with VAIO, I highly recommend reading the following blog post by Ken Werneburg.
- VASA provider in a pure IPv6 environment
- VMIOF 1.0 and 1.1
High Ethernet Link Speed
ESXi hosts can now support 25G and 50G Ethernet link speeds.
vCenter Server – Two-factor authentication for vSphere Web client
vCenter Single Sign-On allows authentication to the vSphere Web Client via username and password. vSphere 6.0 Update 2 introduces two-factor authentication supporting RSA SecurID and smart cards. RSA SecurID is configured using the SSO-Config utility and requires RSA Authentication Manager in your environment. Once set up, log in to the vSphere Web Client with your username and RSA passcode. Mike Foley has an excellent two-part blog post walking through RSA SecurID setup.
- Two Factor Authentication for vSphere – RSA SecurID – Part 1
- Two Factor Authentication for vSphere – RSA SecurID – Part 2
Smart card authentication, as mentioned above, is also supported. Many large enterprises and government agencies use smart cards to meet security regulations. Smart cards such as the Common Access Card (CAC) are used at machines with a smart card reader. Smart card authentication can be configured from the Platform Services Controller UI or using the SSO-Config utility. Stay tuned, as Mike Foley will be discussing smart card authentication in a future post.
In addition to two factor authentication, the vSphere Web Client now supports the ability to add a login banner. The Login Banner can be configured from the Platform Services Controller UI by adding a title and message.
An added layer of consent ensures the user cannot log in without acknowledging the Login Banner.
vCenter Server Appliance update status might be stuck at 70 percent
vSphere 6.0 Update 1b had a bug when using the virtual appliance management interface (VAMI) to update: the UI would hang at 70 percent even though the update had completed. The only way to verify the status of the upgrade was by checking the update log, /var/log/vmware/applmgmt/software-packages.log. This bug has been fixed in vSphere 6.0 Update 2; the VAMI now displays 100 percent when the update is complete.
Support to change vSphere ESX Agent Manager Logging Level
vSphere Web Client support for Windows 10 operating system
vCenter Server now supports the following external databases
- Microsoft SQL Server 2012 Service Pack 3
- Microsoft SQL Server 2014 Service Pack 1
vCenter Server now supports multiple embedded to multiple PSC migrations in a single SSO domain
vSphere 6.0 Update 1 introduced the ability to reconfigure and repoint using CMSSO-UTIL. This is handy when going from a vCenter with an embedded PSC to an external PSC deployment in the same SSO domain. However, vSphere 6.0 Update 1 did not allow repointing between two external PSCs; attempting to do so resulted in an error.
vSphere 6.0 U2 now allows having multiple external PSCs with the use of the repoint command. The diagram below represents two embedded deployments replicating to each other. This deployment model is considered deprecated, meaning the topology is supported in vSphere 6.0 but not in future releases. To get out of this deprecated topology, two external Platform Services Controllers have been deployed. Now we can use the reconfigure command in CMSSO-Util to remove the embedded PSC and repoint vCenter Server to the external PSC.
In this video, we’ll look at how to use VMware vSphere HA to protect virtual machines. VMware vSphere HA protects virtual machines from three types of failures: it will protect you if the ESXi host the VM is running on fails, if the guest OS inside the VM fails, or if an application inside the VM fails.
This video is the tenth in a new series of free Webinars that we are releasing in which our Technical Support staff members present on various topics across a wide range of VMware’s product portfolio.
The title for this presentation is “What is new in vSphere 6” and it goes through what is new and what has changed since the previous vSphere 5.5 release.
To see the details of upcoming webinars in this series, see the Support Insider Blog post at http://blogs.vmware.com/kb/2015/02/ne…
NOTE: This video is roughly 35 minutes in length, so it would be worth blocking out some time to watch it!