Apr 24

Best Practices for using VMware Converter

This video provides an overview of the best practices for converting a machine with VMware Converter. This video is based on VMware knowledge base article 1004588. This video also provides tips to consider when converting your machine. The video can help you avoid some of these errors:
Unknown error returned by VMware Converter Agent
Out of disk space
Failed to establish Vim connection
Import host not found
P2VError UFAD_SYSTEM_ERROR(Internal Error)
Pcopy_CloneTree failed with err=80
The file exists (80)
Failed to connect
Giving up trying to connect
Failed to take snapshot of the source volume
stcbasic.sys not installed or snapshot creation failed. err=2
Can’t create undo folder
sysimage.fault.FileCreateError
sysimage.fault.ReconfigFault
sysimage.fault.PlatformError
Number of virtual devices exceeds maximum for a given controller
TooManyDevices
QueryDosDevice: ret=270 size=1024 err=0
Error opening disk device: Incorrect function (1)
Vsnap does not have admin rights
Specified key identifier already exists
vim.fault.NoDiskSpace
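
Several of the errors above ("Out of disk space", "Vsnap does not have admin rights", "Failed to connect") trace back to preconditions you can verify before starting a conversion. Below is a minimal, hypothetical Python preflight sketch for a Windows source machine; the Converter server hostname, port, and checks are illustrative assumptions, not taken from the video.

```python
import ctypes
import shutil
import socket

def has_admin_rights() -> bool:
    # Relates to errors such as "Vsnap does not have admin rights".
    return bool(ctypes.windll.shell32.IsUserAnAdmin())

def free_space_gb(path: str = "C:\\") -> float:
    # Relates to "Out of disk space" and vim.fault.NoDiskSpace.
    return shutil.disk_usage(path).free / 1024 ** 3

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    # Relates to "Failed to connect" / "Giving up trying to connect".
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("Admin rights:", has_admin_rights())
    print("Free space on C: %.1f GB" % free_space_gb())
    # Hostname and port are placeholders; check the ports your Converter setup uses.
    print("Server reachable:", can_reach("converter.example.com", 443))
```
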
Check out Amazon’s selection of books on VMware: http://amzn.to/2pZInmt

Rating: 5/5


Jun 14

VMware vCenter Server 6.0 Performance and Best Practices

Introduction

VMware vCenter Server™ 6.0 substantially improves performance over previous vCenter Server versions. This paper demonstrates the improved performance in vCenter Server 6.0 compared to vCenter Server 5.5, and shows that vCenter Server with the embedded vPostgres database now performs as well as vCenter Server with an external database, even at vCenter Server’s scale limits. This paper also discusses factors that affect vCenter Server performance and provides best practices for vCenter Server performance.

What’s New in vCenter Server 6.0

vCenter Server 6.0 brings extensive improvements in performance and scalability over vCenter Server 5.5:

  • Operational throughput is over 100% higher, and certain operations are over 80% faster.
  • VMware vCenter Server™ Appliance now has the same scale limits as vCenter Server on Windows with an external database: 1,000 ESXi hosts, 10,000 powered-on virtual machines, and 15,000 registered virtual machines.
  • VMware vSphere® Web Client performance has improved, with certain pages over 90% faster.

In addition, vCenter Server 6.0 provides new deployment options:

  • Both vCenter Server on Windows and VMware vCenter Server Appliance provide an embedded vPostgres database as an alternative to an external database. (vPostgres replaces the SQL Server Express option that was available in previous vCenter versions.)
  • The embedded vPostgres database supports vCenter’s full scale limits when used with the vCenter Server Appliance.

Performance Comparison with vCenter Server 5.5

In order to demonstrate and quantify performance improvements in vCenter Server 6.0, this section compares 6.0 and 5.5 performance at several inventory and workload sizes. In addition, this section compares vCenter Server 6.0 on Windows to the vCenter Server Appliance at different inventory sizes, to highlight the larger scale limits in the Appliance in vCenter 6.0. Finally, this section illustrates the performance gained by provisioning vCenter with additional resources.

The workload for this comparison uses vSphere Web Services API clients to simulate a self-service cloud environment with a large amount of virtual machine “churn” (that is, frequently creating, deleting, and reconfiguring virtual machines). Each client repeatedly issues a series of inventory management and provisioning operations to vCenter Server. Table 1 lists the operations performed in this workload. The operations listed here were chosen from a sampling of representative customer data. Also, the inventories in this experiment used vCenter features including DRS, High Availability, and vSphere Distributed Switch. (See Appendix A for precise details on inventory configuration.)

Table 1. Operations performed in performance comparison workload
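
As an illustration, the benchmark clients drive vCenter through the vSphere Web Services API. A minimal sketch of such a client, using pyVmomi (VMware's official Python SDK for the vSphere API), is shown below; the hostname and credentials are placeholders, and the workload in the paper issues provisioning operations in addition to inventory reads like this one.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only: skip certificate verification; validate certificates in production.
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    # A typical inventory-management call: walk the inventory and list
    # powered-on virtual machines.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
            print(vm.name)
    view.Destroy()
finally:
    Disconnect(si)
```
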

Results

Figure 3 shows vCenter Server operation throughput (in operations per minute) for the heaviest workload for each inventory size. Performance has improved considerably at all sizes. For example, for the large inventory setup (Figure 3, right), operational throughput has increased from just over 600 operations per minute in vCenter Server 5.5 to over 1,200 operations per minute in vCenter Server 6.0 for Windows: an improvement of over 100%. The other inventory sizes show similar gains in operational throughput.

Figure 3. vCenter throughput at several inventory sizes, with heavy workload (higher is better). Throughput has increased at all inventory sizes in vCenter Server 6.0.

Figure 4 shows median latency across all operations in the heaviest workload for each inventory size. Just as with operational throughput in Figure 3, latency has improved at all inventory sizes. For example, for the large inventory setup (Figure 4, right), median operational latency has decreased from 19.4 seconds in vCenter Server 5.5 to 4.0 seconds in vCenter Server Appliance 6.0: a decrease of about 80%. The other inventory sizes also show large decreases in operational latency.

Figure 4. vCenter Server median latency at several inventory sizes, with heavy workload (lower is better). Latency has decreased at all inventory sizes in vCenter 6.0.

Download

Download the full VMware vCenter Server 6.0 Performance and Best Practices Technical White Paper

Rating: 5/5


Jun 11

Oracle Databases on VMware Best Practices Guide

Introduction

This Oracle Databases on VMware Best Practices Guide provides best practice guidelines for deploying Oracle databases on VMware vSphere®. The recommendations in this guide are not specific to any particular set of hardware, or size and scope of any particular Oracle database implementation. The examples and considerations provide guidance, but do not represent strict design requirements.

The successful deployment of Oracle on vSphere 5.x/6.0 is not significantly different from deploying Oracle on physical servers. DBAs can fully leverage their current skill set while also delivering the benefits associated with virtualization.

In addition to this guide, VMware has created separate best practice documents for storage, networking, and performance.

This document also includes information from two white papers, Performance Best Practices for VMware vSphere 5.5 and Performance Best Practices for VMware vSphere 6.0.

VMware Support for Oracle Databases on vSphere

Oracle has a support statement for VMware products (MyOracleSupport 249212.1). While there has been much public discussion about Oracle’s perceived position on support for VMware virtualization, experience shows that Oracle Support upholds its commitment to customers, including those using VMware virtualization in conjunction with Oracle products.

VMware is also an Oracle customer: VMware IT's own E-Business Suite and Siebel implementations are virtualized. VMware routinely submits support requests for Oracle software running on VMware virtual infrastructure and receives assistance with them. MyOracleSupport (MetaLink) Document ID 249212.1 provides the specifics of Oracle's support commitment to VMware. Gartner, IDC, and others also have documents available to their subscribers that specifically address this policy.

VMware Oracle Support Process

VMware support will accept tickets for any Oracle-related issue reported by a customer and will help drive the issue to resolution. To augment Oracle’s support document, VMware also has a total ownership policy for customers with Oracle issues as described in the letter at VMware® Oracle Support Affirmation.

By being accountable, VMware Support will drive the issue to resolution regardless of which vendor (VMware, Oracle or other) is responsible for the resolution. In most cases, reported issues can be resolved through configuration changes, bug fixes, or feature enhancements by one of the involved vendors. VMware is committed to its customers' success and supports their choice to run Oracle software in modern, virtualized environments. For further information, see https://www.vmware.com/support/policies/oracle-support

Figure 1 – VMware vSphere Oracle Support Process

Download

Download the full Oracle Databases on VMware Best Practices Guide

Rating: 5/5


May 15

VMware vSphere® Distributed Switch Best Practices

Introduction

This paper provides best practice guidelines for deploying the VMware vSphere® distributed switch (VDS) in a vSphere environment. The advanced capabilities of VDS provide network administrators with more control of and visibility into their virtual network infrastructure. This document covers the different considerations that vSphere and network administrators must take into account when designing the network with VDS. It also discusses some standard best practices for configuring VDS features.

The paper describes two example deployments, one using rack servers and the other using blade servers. For each of these deployments, different VDS design approaches are explained. The deployments and design approaches described in this document are meant to provide guidance as to what physical and virtual switch parameters, options and features should be considered during the design of a virtual network infrastructure. It is important to note that customers are not limited to the design options described in this paper. The flexibility of the vSphere platform allows for multiple variations in the design options that can fulfill an individual customer’s unique network infrastructure needs.

This document is intended for vSphere and network administrators interested in understanding and deploying VDS in a virtual datacenter environment. With the release of vSphere 5, there are new features as well as enhancements to the existing features in VDS. To learn more about these new features and enhancements, refer to the What’s New in Networking Technical White Paper.

Readers are also encouraged to refer to basic virtual and physical networking concepts before reading through this document.

For physical networking concepts, readers should refer to any physical network switch vendor’s documentation.

Design Considerations

The following three main aspects influence the design of a virtual network infrastructure:
1) Customer’s infrastructure design goals
2) Customer’s infrastructure component configurations
3) Virtual infrastructure traffic requirements

Let’s take a look at each of these aspects in a little more detail.

Infrastructure Design Goals

Customers want their network infrastructure to be available 24/7, to be secure from any attacks, to perform efficiently throughout day-to-day operations, and to be easy to maintain. In the case of a virtualized environment, these requirements become increasingly demanding as growing numbers of business-critical applications run in a consolidated setting. These requirements on the infrastructure translate into design decisions that should incorporate the following best practices for a virtual network infrastructure:

  • Avoid any single point of failure in the network
  • Isolate each traffic type for increased resiliency and security (see the sketch after this list)
  • Make use of traffic management and optimization capabilities
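
As a sketch of the isolation bullet above: on a VDS, each traffic type typically gets its own distributed port group on a dedicated VLAN. The example below uses pyVmomi (VMware's Python SDK for the vSphere API); dvs is assumed to be an existing vim.dvs.VmwareDistributedVirtualSwitch object already looked up from the inventory, and the port group name and VLAN ID are illustrative.

```python
from pyVmomi import vim

# Define a dedicated port group for one traffic type (here, vMotion).
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_spec.name = "dvPG-vMotion"   # illustrative name; one port group per traffic type
pg_spec.type = "earlyBinding"   # static port binding, the common default
pg_spec.numPorts = 16

# Pin the port group to its own VLAN to isolate the traffic.
port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_config.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
    vlanId=200, inherited=False)
pg_spec.defaultPortConfig = port_config

task = dvs.AddDVPortgroup_Task([pg_spec])  # returns a vim.Task to monitor
```
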

Infrastructure Component Configurations

In every customer environment, the utilized compute and network infrastructures differ in terms of configuration, capacity and feature capabilities. These different infrastructure component configurations influence the virtual network infrastructure design decisions. The following are some of the configurations and features that administrators must look out for:

  • Server configuration: rack or blade servers
  • Network adapter configuration: 1GbE or 10GbE network adapters; number of available adapters; offload function on these adapters, if any
  • Physical network switch infrastructure capabilities: switch clustering

It is impossible to cover all the different virtual network infrastructure designs arising from the various combinations of server types, network adapters and network switch capabilities. Instead, this paper describes the following four commonly used deployments, based on standard rack server and blade server configurations:

  • Rack server with eight 1GbE network adapters
  • Rack server with two 10GbE network adapters
  • Blade server with two 10GbE network adapters
  • Blade server with hardware-assisted multiple logical Ethernet network adapters

It is assumed that the network switch infrastructure has standard layer 2 switch features (high availability, redundant paths, fast convergence, port security) available to provide reliable, secure and scalable connectivity to the server infrastructure.

Virtual Infrastructure Traffic

The vSphere virtual network infrastructure carries different traffic types. To manage the virtual infrastructure traffic effectively, vSphere and network administrators must understand the different traffic types and their characteristics. The following are the key traffic types that flow in the vSphere infrastructure, along with their traffic characteristics:

Management traffic: This traffic flows through a vmknic and carries VMware ESXi host-to-VMware vCenter configuration and management communication, as well as ESXi host-to-ESXi host high availability (HA)-related communication. This traffic has low network utilization but has very high availability and security requirements.

VMware vSphere vMotion traffic: With advances in vMotion technology, a single vMotion instance can consume almost the full bandwidth of a 10GbE link. A maximum of eight simultaneous vMotion instances can be performed on a 10GbE uplink; four simultaneous instances are allowed on a 1GbE uplink. vMotion traffic has very high network utilization and can be bursty at times. Customers must make sure that vMotion traffic doesn’t impact other traffic types, because it might consume all available I/O resources. Another property of vMotion traffic is that it is not sensitive to throttling, which makes it a very good candidate for traffic management.

Fault-tolerant traffic: When VMware Fault Tolerance (FT) logging is enabled for a virtual machine, all the logging traffic is sent to the secondary fault-tolerant virtual machine over a designated vmknic port. This process can require a considerable amount of bandwidth at low latency because it replicates the I/O traffic and memory-state information to the secondary virtual machine.

iSCSI/NFS traffic: IP storage traffic is carried over vmknic ports, and it varies according to disk I/O requests. With end-to-end jumbo frame configuration, more data is transferred with each Ethernet frame, decreasing the number of frames on the network. Larger frames reduce the overhead on servers/targets and improve IP storage performance. On the other hand, congested and lower-speed networks can cause latency issues that disrupt access to IP storage. It is recommended that users provide a high-speed path for IP storage and avoid any congestion in the network infrastructure. (A jumbo frame configuration sketch follows these traffic-type descriptions.)

Virtual machine traffic: Depending on the workloads running on the guest virtual machines, the traffic patterns will vary from low to high network utilization. Some of the applications running in virtual machines might be latency sensitive, as is the case with VoIP workloads.
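
The end-to-end jumbo frame recommendation above amounts to a single property change on the VDS side. Below is a minimal pyVmomi sketch; it assumes dvs already references a vim.dvs.VmwareDistributedVirtualSwitch object from the inventory, and that the physical switches and vmknics are configured for MTU 9000 end to end.

```python
from pyVmomi import vim

# Reconfigure the VDS for jumbo frames. The configVersion guard prevents
# clobbering a concurrent edit of the switch configuration.
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.maxMtu = 9000  # must match the physical network end to end
task = dvs.ReconfigureDvs_Task(spec)  # returns a vim.Task to monitor
```
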

Table 1 summarizes the characteristics of each traffic type.

Table 1. Traffic Types and Characteristics.

To understand the different traffic flows in the physical network infrastructure, network administrators use network traffic management tools. These tools help monitor the physical infrastructure traffic but do not provide visibility into virtual infrastructure traffic. With the release of vSphere 5, VDS supports the NetFlow feature, which enables exporting the internal (virtual machine-to-virtual machine) virtual infrastructure flow information to standard network management tools. Administrators now have the required visibility into virtual infrastructure traffic. This helps administrators monitor the virtual network infrastructure traffic through a familiar set of network management tools. Customers should make use of the network data collected from these tools during capacity planning or network design exercises.
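
As a companion to this, the NetFlow (IPFIX) exporter on a VDS can be enabled programmatically. The sketch below uses pyVmomi and assumes dvs is an existing vim.dvs.VmwareDistributedVirtualSwitch object; the collector address and port are placeholders, and some fields (such as collectorPort) require newer vSphere API versions.

```python
from pyVmomi import vim

# Point the VDS IPFIX exporter at a NetFlow collector so that internal
# (VM-to-VM) flows become visible to standard management tools.
ipfix = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig()
ipfix.collectorIpAddress = "192.0.2.10"  # placeholder collector address
ipfix.collectorPort = 2055               # common NetFlow/IPFIX port
ipfix.activeFlowTimeout = 60             # seconds before exporting an active flow
ipfix.idleFlowTimeout = 15               # seconds before expiring an idle flow
ipfix.samplingRate = 0                   # 0 = analyze every packet (no sampling)
ipfix.internalFlowsOnly = False          # export both internal and external flows

spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.ipfixConfig = ipfix
task = dvs.ReconfigureDvs_Task(spec)     # returns a vim.Task to monitor
```
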

Example Deployment Components

After looking at the different design considerations, this section provides a list of components that are used in an example deployment. This example deployment helps illustrate some standard VDS design approaches. The following are some common components in the virtual infrastructure. The list doesn’t include the storage components that are required to build the virtual infrastructure. It is assumed that customers will deploy IP storage in this example deployment.

Hosts

Four ESXi hosts provide compute, memory and network resources according to the configuration of the hardware. Customers can have different numbers of hosts in their environment, based on their needs. One VDS can span up to 350 hosts. This capability to support large numbers of hosts provides the required scalability to build a private or public cloud environment using VDS.

Clusters

A cluster is a collection of ESXi hosts and associated virtual machines with shared resources. Customers can have as many clusters in their deployment as are required. With one VDS spanning up to 350 hosts, customers have the flexibility of deploying multiple clusters with a different number of hosts in each cluster. For simple illustration purposes, two clusters with two hosts each are considered in this example deployment. One cluster can have a maximum of 32 hosts.
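
For illustration, the two example clusters could be created programmatically with pyVmomi. In this sketch, datacenter is assumed to be an existing vim.Datacenter object, and the cluster names and DRS setting are illustrative, not prescribed by the paper.

```python
from pyVmomi import vim

# Create two clusters under the datacenter's host folder.
for name in ("Cluster-01", "Cluster-02"):
    spec = vim.cluster.ConfigSpecEx()
    spec.drsConfig = vim.cluster.DrsConfigInfo(enabled=True)  # optional: enable DRS
    datacenter.hostFolder.CreateClusterEx(name=name, spec=spec)
```
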

VMware vCenter Server

VMware vCenter Server centrally manages a vSphere environment. Customers can manage VDS through this centralized management tool, which can be deployed on a virtual machine or a physical host. The vCenter Server system is not shown in the diagrams, but customers should assume that it is present in this example deployment. It is used only to provision and manage VDS configuration. Once provisioned, hosts and virtual machine networks operate independently of vCenter Server; all components required for network switching reside on ESXi hosts. Even if the vCenter Server system fails, the hosts and virtual machines will still be able to communicate.

Network Infrastructure

Physical network switches in the access and aggregation layers provide connectivity between ESXi hosts and to the external world. These network infrastructure components support standard layer 2 protocols, providing secure and reliable connectivity.

Along with the preceding four components of the physical infrastructure in this example deployment, some of the virtual infrastructure traffic types are also considered during the design. The following section describes the different traffic types in the example deployment.

Virtual Infrastructure Traffic Types

In this example deployment, there are standard infrastructure traffic types, including iSCSI, vMotion, FT, management and virtual machine. Customers might have other traffic types in their environment, based on their choice of storage infrastructure (FC, NFS, FCoE). Figure 1 shows the different traffic types along with associated port groups on an ESXi host. It also shows the mapping of the network adapters to the different port groups.

Figure 1. Different Traffic Types Running on a Host.

Download

Download the full VMware vSphere® Distributed Switch Best Practices Technical White Paper

Rating: 5/5


Dec 24

VMware vSphere 5.5 SAN Storage Best Practices

In this video we will demonstrate the configuration of block-level storage (SAN) devices for VMware vSphere. During the demonstration we will configure Stora…

Rating: 5/5