Apr 17

vSAN 6.7 What’s New Technical

vSAN 6.7 introduces a number of key features that help deliver an HCI solution for customers who want to evolve without risk, lower their TCO, and accommodate the demands of IT environments today, tomorrow, and beyond. To help customers evolve their data centers with HCI, the improvements in vSAN 6.7 focus on three key areas: an Intuitive Operations Experience, a Consistent Application Experience, and an Enhanced Support Experience.

Rating: 5/5


Apr 17

vSAN 6.7 Technical Overview

This video introduces vSAN, VMware's software-defined, enterprise-class storage solution. vSAN powers industry-leading hyper-converged infrastructure solutions with a vSphere-native, high-performance architecture.

NOTE: This video is roughly 30 minutes in length so it would be worth blocking out some time to watch it!

Rating: 5/5


Feb 14

vSphere 5.5 and vSAN 5.5 End of General Support Reminder

Himanshu Singh posted February 14, 2018

We would like to remind you that the End of General Support (EOGS) for vSphere 5.5 and vSAN 5.5 is September 19, 2018.

To maintain your full level of Support and Subscription Services, VMware recommends upgrading to vSphere 6.5 or 6.7. Note that upgrading to vSphere 6.5 or 6.7 gives you not only the latest vSphere capabilities but also the latest vSAN release and its capabilities.

We also recommend that vCloud Suite 5 and vSphere with Operations Management (vSOM) customers running vSphere 5.5 upgrade to vSphere 6.5 or 6.7. For more information on the benefits of upgrading and how to upgrade, visit the VMware vSphere Upgrade Center.

For detailed technical guidance, visit vSphere Central and the vSphere 6.5 Topology and Upgrade Planning Tool. VMware has extended general support for vSphere 6.5 to a full five years from date of release, which will end on November 15, 2021. This same date applies to vSphere 6.7 end of general support as well.

If you require assistance upgrading to a newer version of vSphere, VMware’s vSphere Upgrade Service is available. This service delivers a comprehensive guide to upgrading your virtual infrastructure including recommendations for planning and testing the upgrade, the actual upgrade itself, validation guidance, and rollback procedures. For more information, contact your VMware account team, VMware Partner, or visit VMware Professional Services.

If you are unable to upgrade from vSphere 5.5 before EOGS and are active on Support and Subscription Services, you may purchase Extended Support in one-year increments for up to two years beyond the EOGS date. Visit VMware Extended Support for more information.

Technical Guidance for vSphere 5.5 is available until September 19, 2020 primarily through the self-help portal. During the Technical Guidance phase, VMware will not offer new hardware support, server/client/guest OS updates, new security patches or bug fixes unless otherwise noted. For more information, visit VMware Lifecycle Support Phases.

Listed below are a number of additional actions which need to be taken, depending on your individual scenario:

vSphere with Operations Management (vSOM)

This bundle of vSphere and vRealize Operations allows you to upgrade the versions of individual components independently of each other. If you are using vSphere 5.5 as part of vSOM, you will need to upgrade your vSphere with Operations Management 5.5 license key to be able to upgrade the vSphere component. You can reference the VMware Lifecycle Product Matrix to check the EOGS date for the version of vRealize Operations you are using and the VMware Product Interoperability Matrices for product version compatibility.

vCloud Suite 5

This bundle of vSphere and VMware’s management products will also require an upgrade of your license key to vCloud Suite 7 or later. Upgrading to vCloud Suite 2017 is encouraged to leverage the vRealize Suite 2017 multi-vendor hybrid cloud management platform. You can reference the VMware Lifecycle Product Matrix to check the EOGS date for each version of the products in the bundle and the VMware Product Interoperability Matrices for the product version compatibility.

vSAN 5.5

This product is embedded in the vSphere 5.5 kernel, so upgrading vSphere will also upgrade vSAN to a newer release. You will need to upgrade your vSAN 5.5 license key to a newer release license key. Please confirm hardware compatibility by referencing the vSAN Compatibility Guide and, if necessary, make hardware upgrades to maintain compatibility.

If you are using vSphere 5.5 or vCloud Suite 5, please contact your VMware account team or a VMware Partner with any questions and to begin an upgrade plan.

Thank you,

The VMware Team

About the Author

Himanshu Singh is Group Manager of Product Marketing for VMware's Cloud Platform business. His extensive past experience in the technology industry includes driving cloud management solutions at VMware, growing the Azure public cloud business at Microsoft, as well as delivering and managing private clouds for large enterprise customers at IBM. Himanshu has been a frequent speaker at VMworld, Dell Technologies World, vForum, VMUG, Microsoft TechEd, and other industry conferences. He holds a B.Eng. (Hons.) degree from Nanyang Technological University, Singapore, and an MBA from Tuck School of Business at Dartmouth College. Follow him on Twitter at @himanshuks.

Rating: 5/5


Apr 28

vSAN Deep Dive under 80 minutes

This session covers vSAN topics from the basics to advanced material. Watch this video if you want to learn the fundamentals and a few of the more advanced areas of vSAN.
NOTE: This video is roughly 60 minutes in length so it would be worth blocking out some time to watch it!

Rating: 5/5


Apr 13

vSAN 6.6 Technical Overview Q1 FY18

Announcing the faster, more cost-effective, and more secure VMware vSAN 6.6! Learn what's new, the benefits, and the incentives at the Launch Resource Center.

Rating: 5/5


Mar 15

VSAN 6.2 Architectural Overview

Join me and my special guest Steve Tuomey to discuss the VSAN 6.2 architecture, including hybrid and all-flash utilization.

Rating: 5/5


Mar 14

Working with Virtual SAN Storage Policies

This video shows you how to create or modify a Virtual SAN storage policy, how to assign a policy to VMs and other objects, and how to check policy compliance.
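
For those who prefer the command line, the same tasks can be scripted with PowerCLI's SPBM cmdlets. Below is a minimal sketch; the policy and VM names are placeholders, so verify cmdlet parameters with Get-Help in your PowerCLI version.

    # Retrieve an existing storage policy (name is a placeholder)
    $policy = Get-SpbmStoragePolicy -Name "Virtual SAN Default Storage Policy"

    # Assign the policy to a VM's virtual disks
    Get-VM -Name "app01" | Get-HardDisk |
        Get-SpbmEntityConfiguration |
        Set-SpbmEntityConfiguration -StoragePolicy $policy

    # Check policy compliance across all VMs
    Get-VM | Get-SpbmEntityConfiguration |
        Select-Object Entity, StoragePolicy, ComplianceStatus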

Rating: 5/5


Oct 18

What’s New with VMware Virtual SAN 6.5

Introducing Virtual SAN 6.5

VMware Virtual SAN 6.5 is the latest release of the market-leading, enterprise-class storage solution for hyper-converged infrastructure (HCI). Virtual SAN 6.5 builds on the existing features introduced in 6.2 by enhancing automation, further reducing total cost of ownership (TCO), and setting the stage for next-generation cloud native applications.

Virtual SAN continues to see rapid adoption with more than 5000 customers utilizing the solution for a number of use cases including mission-critical production applications and databases, test and development, management infrastructures, disaster recovery sites, virtual desktop deployments, and remote office implementations. Virtual SAN is used by 400+ Fortune-1000 organizations across every industry vertical in more than 100 countries worldwide.

Let’s take a look at the new features included with Virtual SAN 6.5…

Accelerate Responsiveness

The Virtual SAN API and vSphere PowerCLI have been updated in this release. It is now possible to automate the configuration and management of cluster settings, disk groups, fault domains, and stretched clusters. Activities such as maintenance mode and cluster shutdown can also be scripted. This video demonstrates some of the capabilities of the Virtual SAN API and PowerCLI: Creating a Cluster and Configuring Virtual SAN. PowerCLI can also be used to monitor the health of a Virtual SAN cluster, and health issue remediation and re-sync activities can be automated with this latest release.
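
As a taste of what this enables, here is a minimal PowerCLI sketch of scripting a cluster's vSAN configuration. The server, cluster, and parameter choices are placeholders, and the health-check cmdlet arrived with the vSAN cmdlet updates in this timeframe; confirm availability in your PowerCLI version.

    # Connect to vCenter (server name is a placeholder)
    Connect-VIServer -Server vcenter.lab.local

    $cluster = Get-Cluster -Name "vSAN-Cluster"

    # Enable vSAN on the cluster with automatic disk claiming
    $cluster | Set-Cluster -VsanEnabled:$true -VsanDiskClaimMode Automatic -Confirm:$false

    # Review the disk groups vSAN built on each host
    Get-VsanDiskGroup -Cluster $cluster | Format-Table -AutoSize

    # Run the cluster health check and report issues
    Test-VsanClusterHealth -Cluster $cluster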

20-50% Additional TCO Savings

Now that flash devices have become the preferred choice for storage, it makes sense to adjust the Virtual SAN licensing model to account for this change in the industry. All Virtual SAN 6.5 licenses include support for both hybrid and all-flash configurations. Please note, however, that deduplication, compression, and erasure coding still require Virtual SAN Advanced or Enterprise licenses. Adding support for the use of all-flash configurations with all licensing editions provides organizations more deployment options and the ability to take advantage of increased performance while minimizing licensing costs.

Virtual SAN 6.5 supports the use of network crossover cables in 2-node configurations. This is especially beneficial in use cases such as remote office and branch office (ROBO) deployments, where it can be cost prohibitive to procure, deploy, and manage 10GbE networking equipment at each location. This configuration also reduces complexity and improves reliability.

While we are on the subject of ROBO deployments, it is also important to mention a related Virtual SAN licensing change. The existing ROBO license previously did not support the use of all-flash Virtual SAN cluster configurations and the corresponding space efficiency features. A new license, Virtual SAN for ROBO Advanced, has been added with the release of Virtual SAN 6.5. This new license includes support for using deduplication, compression, and erasure coding. Using these features lowers the cost-per-usable-GB of flash storage, which further reduces TCO. Organizations get the best of both worlds: the extreme performance of flash at a cost that is on par with or lower than similar hybrid solutions.

Increased Flexibility

Virtual SAN 6.5 extends workload support to physical servers and clustered applications with the introduction of an iSCSI target service. Virtual SAN continues its track record of being radically simple by making it easy to access Virtual SAN storage using the iSCSI protocol with just a few vSphere Web Client mouse clicks. iSCSI targets on Virtual SAN are managed the same as other objects with Storage Policy Based Management (SPBM). Virtual SAN functionality such as deduplication, compression, mirroring, and erasure coding can be utilized with the iSCSI target service. CHAP and Mutual CHAP authentication are supported.

Enable vSAN iSCSI target service

Utilizing Virtual SAN for physical server workloads and clustered applications can reduce or eliminate the dependency on legacy storage solutions while providing the benefits of Virtual SAN such as simplicity, centralized management and monitoring, and high availability.
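
The service can also be driven from PowerCLI. The sketch below is a rough outline only: the vSAN iSCSI cmdlet and parameter names shown are assumptions based on the vSAN cmdlets VMware added to PowerCLI around this release, and the cluster, target, and policy names are placeholders, so verify everything with Get-Help before relying on it.

    $cluster = Get-Cluster -Name "vSAN-Cluster"
    $policy  = Get-SpbmStoragePolicy -Name "Virtual SAN Default Storage Policy"

    # Enable the iSCSI target service on the cluster (parameter name assumed; verify)
    Set-VsanClusterConfiguration -Configuration $cluster -IscsiTargetServiceEnabled:$true

    # Create a target, then a 100 GB LUN governed by an SPBM policy
    $target = New-VsanIscsiTarget -Cluster $cluster -Name "target01" -StoragePolicy $policy
    New-VsanIscsiLun -Target $target -Name "lun0" -CapacityGB 100 -StoragePolicy $policy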

Scale To Tomorrow

New application architectures and development methods have emerged that are designed to run in today's mobile-cloud era. For example, “DevOps” is a term that describes how these next-generation applications are developed and operated. “Container” technologies such as Docker and Kubernetes are a couple of the many solutions that have emerged for deploying and orchestrating these applications. Cloud native applications naturally require persistent storage just the same as traditional applications, and Virtual SAN is an excellent choice for next-generation cloud native applications. Here are a few examples of the efforts that are underway:

vSphere Integrated Containers Engine is a container runtime for vSphere, allowing developers familiar with Docker to develop in containers and deploy them alongside traditional virtual machine workloads on vSphere clusters. vSphere Integrated Containers Engine enables these workloads to be managed through the vSphere GUI in a way familiar to vSphere admins. Availability and performance features in vSphere and Virtual SAN can be utilized by vSphere Integrated Containers Engine just the same as traditional virtual machine environments.

Docker Volume Driver for vSphere enables users to create and manage Docker container data volumes on vSphere storage technologies such as VMFS, NFS, and Virtual SAN. This driver makes it very simple to use containers with vSphere storage and provides the following key benefits (a usage sketch follows the list):

– DevOps-friendly API for provisioning and policy configuration.
– Seamless movement of containers between vSphere hosts without moving data.
– Single platform to manage: run virtual machines and containers side-by-side.
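
In practice, creating and consuming a volume looks roughly like the commands below, run from a Docker host on a vSphere VM. The driver name and options varied across early releases of the project, and the volume and image names here are made up, so check the project's README for the exact syntax.

    # Create a 10 GB volume on vSphere storage via the volume driver
    docker volume create --driver=vsphere --name=db-data -o size=10gb

    # Start a container that mounts the volume
    docker run -d --name=app -v db-data:/var/lib/data my-image

    # List volumes known to the Docker host
    docker volume ls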

Next-Gen Hardware Support

vSphere 6.5 and Virtual SAN 6.5 also introduce support for 512e drives, which will enable larger capacities to meet the constantly growing space requirements of today’s and tomorrow’s applications. New hardware innovations such as NVMe provide dramatic performance gains for Virtual SAN with up to 150k IOPS per host. This level of performance combined with the ability to scale up to 64 hosts in a single cluster sets the stage for running any app, any scale on Virtual SAN.

Visit Virtual SAN on vmware.com and VMware StorageHub for more details on this exciting new release of Virtual SAN.

To learn more about vSphere 6.5, please see the following resources.

Follow @jhuntervmware on Twitter.

Rating: 5/5


Sep 30

VMware Virtual SAN Performance Testing – Part I

Wade Holmes posted September 12, 2014

As people begin to assess, design, build, and deploy VMware Virtual SAN based solutions for the first time, there is great curiosity about what performance to expect and what results can be achieved with specific Virtual SAN configurations. Most customers run some type of benchmark in a proof-of-concept environment to gauge the performance of VMware Virtual SAN in their environment. In working with customers and partners, we have seen a variety of methods used to benchmark and analyze Virtual SAN performance. To ease this process, we are developing guidance on how best to test Virtual SAN performance. This guidance will be presented in a four-part series as follows:

  • Virtual SAN Performance Testing Part I – Utilizing I/O Analyzer with Iometer
  • Virtual SAN Performance Testing Part II – Utilizing I/O Analyzer with Application Trace Files
  • Virtual SAN Performance Testing Part III – Utilizing Custom Application Trace files
  • Virtual SAN Performance Testing Part IV – Analyzing Performance Results

Virtual SAN Performance Design Principles

Before we delve into performance testing methodology and configuration recommendations, let's first discuss some Virtual SAN performance concepts. Virtual SAN was purpose-built to be an ideal platform for server consolidation of virtualized workloads. A key design principle of Virtual SAN is to optimize for aggregate, consistent performance in a dynamic environment over the localized, individual performance simulated by many artificial benchmark tests. This adheres to the principle of vSphere and Virtual SAN enabling hyper-converged environments. One way in which Virtual SAN does this is by minimizing the I/O blender effect in virtualized environments.

The I/O blender effect is caused by multiple virtual machines simultaneously sending I/O to a storage subsystem, causing sequential I/O to become highly randomized. This can increase latency in your storage solution. Virtual SAN mitigates this through a hybrid design, using a flash acceleration layer that acts as a read cache and write buffer, combined with spinning disks for capacity. The majority of reads and all writes will be served by flash in a properly sized Virtual SAN solution, allowing for excellent performance in environments with highly random I/O. When data does need to be destaged from the flash acceleration layer to spinning disk, the destaging operation will predominantly consist of sequential I/O, efficiently taking advantage of the full I/O capability of the underlying spinning disks.

A second design principle used in optimizing Virtual SAN for aggregate, consistent performance is not depending on data locality to guarantee performance. This concept is reviewed in depth in the Understanding Data Locality in VMware Virtual SAN whitepaper. This is key because vSphere balances compute resources in an automated fashion through vMotion operations initiated by the Distributed Resource Scheduler (DRS).

Selecting Your Proof-of-Concept Testbed

When you plan and design your Virtual SAN solution, start by choosing a solution building block that maps most closely to your expected performance requirements. Whether you build your own Virtual SAN solution (with guidance from the Virtual SAN Hardware Quick Reference Guide as a starting point), select a vendor-specific Ready Node option, or use a Virtual SAN based EVO:RAIL solution, the target platform depends on what fits your environment best. When building your own solution, you must adhere to guidance on the VMware Compatibility Guide (VCG) for Virtual SAN. This is the hardware compatibility list that acts as the source of truth defining the supported hardware with which you can build a Virtual SAN solution.

Performance Testing Methodology

The methodology described in this series can be utilized for any storage subsystem supported by vSphere, whether it be VMware Virtual SAN, another scale-out storage solution, or a traditional array. As we get into analyzing performance results, there will be specific Virtual SAN tools recommended (such as Virtual SAN Observer) used in combination with vSphere based tools such as esxtop and vscsistats.

For optimal performance during performance tests, we recommend 10 GbE uplinks that operate at line rate. For more information about network recommendations, see the recently published Virtual SAN Network Design Guide.

The next step is sizing your Virtual SAN solution adequately. To assist with sizing, we have developed the Virtual SAN sizing tool. For optimal performance, we recommend that the active working set of your virtual machines fit into the flash acceleration layer of Virtual SAN. But you may ask: how do you measure the “active working set” size of a VM? From our experience, a good conservative estimate is 10% of your used capacity, not taking into account the capacity consumed by the failures-to-tolerate redundancy policy.
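
As a quick worked example of that guideline (the numbers are illustrative only), a small PowerShell calculation:

    # Estimate the working set from used capacity, per the ~10% rule of thumb
    $usedCapacityGB = 4000                       # used capacity before FTT overhead
    $workingSetGB   = $usedCapacityGB * 0.10     # conservative working-set estimate
    "Size the flash acceleration layer for at least $workingSetGB GB of active data"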

Performance Testing Tools

There are a number of tools that can be utilized to test the performance of storage subsystems. To efficiently test a scale-out storage system on vSphere, we recommend VMware I/O Analyzer as a standard tool. I/O Analyzer is supplied as an easy-to-deploy virtual appliance and automates storage performance testing and analysis through a unified web interface that can be used to configure and deploy storage tests and view graphical results for those tests. I/O Analyzer can be utilized to run either Iometer-based workloads or application trace replays. In this post we will focus on using I/O Analyzer with Iometer, but will delve into trace replay usage in parts II and III of this blog post series.

If you are evaluating another tool to test Virtual SAN, we recommend choosing one that can issue multiple outstanding I/Os (OIO). For this reason, tools such as dd and SIO are not recommended for performance testing of Virtual SAN, as they can only be configured to run tests that sequentially issue a single OIO.

SPBM Policy Configuration for Performance Testing

Virtual SAN supports the configuration of per-object policies that impact performance and availability. The policies applicable to performance testing include the following (a scripted example follows the list):

  • Flash Read Cache Reservation – In general, we do not recommend utilizing this policy during performance testing of server workloads. Any reservation will reserve a portion of the 70% read cache allocation of flash for an object, whether it is needed or not. We recommend letting the Virtual SAN algorithms handle read cache reservation automatically. This policy is utilized by default for Horizon 6 with View when utilizing linked clones, but only for the Horizon replica object.
  • Stripe Width – Specifically for performance testing, this policy may be used if you are utilizing a single virtual machine/VMDK to test performance. It allows that VMDK to be split into multiple components and those components to be spread across your cluster. The recommendation is to set the stripe width equal to the number of nodes in the cluster, up to the maximum of 12. If you are performing scale-out performance tests with multiple virtual machines or multiple VMDKs, this policy is not recommended.
  • Object Space Reservation – This policy is recommended to encourage even distribution of components throughout a Virtual SAN cluster by forcing the Virtual SAN algorithms to take into account the full size of the object when making placement decisions. Using this policy is similar to a lazy-zeroed thick disk format and has no impact on performance beyond its influence on component placement and distribution.
  • Failures To Tolerate – We recommend keeping a failures-to-tolerate setting that adheres to the availability needs of your environment. The default is FTT=1.
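
A minimal PowerCLI sketch of building such a test policy with the SPBM cmdlets follows. The capability names (VSAN.stripeWidth, VSAN.hostFailuresToTolerate) come from the vSAN provider, while the policy and VM names are placeholders; adjust the stripe width to your node count as described above.

    # Build rules for stripe width 4 and FTT=1
    $ruleSet = New-SpbmRuleSet -AllOfRules @(
        (New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.stripeWidth") -Value 4),
        (New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 1)
    )
    $policy = New-SpbmStoragePolicy -Name "PerfTest-SW4-FTT1" -AnyOfRuleSets $ruleSet

    # Apply the policy to the test VM's disks before the benchmark run
    Get-VM -Name "ioa-worker01" | Get-HardDisk |
        Get-SpbmEntityConfiguration |
        Set-SpbmEntityConfiguration -StoragePolicy $policy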

Iometer Testing Parameters

In the “How to Super Charge Your Virtual SAN Cluster” blog post, we mentioned two differing Iometer configurations. Below are the configurations used and the rationale behind them. If testing your Virtual SAN solution, we recommend these two test workloads as a baseline to level-set the performance of your Virtual SAN solution and sanity-check the configuration and environment. Once these are complete, you may choose to perform further testing using differing Iometer configurations, application trace files, or specific application workloads, based on the application requirements of your environment and the level of testing you would like to perform.

  • 70/30 R/W (80% random) –> a common industry-standard I/O profile.
    Recommended test duration: 2 hours; disregard results from the first hour to allow warm-up and achieve steady-state performance.
  • 100 R/W (100% random) –> used to achieve maximum performance and max IOPS, sanity-check SSD performance, and stress and validate the network.
    Recommended test duration: 1 hour; disregard results from the first 30 minutes to allow warm-up and achieve steady-state performance.
  • OIO per host – We recommend configuring workers so that the aggregate does not exceed 256 OIO per host in scale-out testing. The optimal number of OIO per worker will differ depending on the number of simultaneous workers per host you choose to utilize (see the calculation sketch after this list).
  • Block size – We recommend configuring the block size to mimic the predominant application profile of your environment. This is typically 4K in most virtualized environments.
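
To make the OIO guideline concrete, here is a small PowerShell calculation; the worker and disk counts are illustrative only.

    # Stay under the ~256 aggregate outstanding I/Os per host
    $workersPerHost   = 4
    $disksPerWorker   = 4
    $maxOioPerHost    = 256
    $oioPerWorkerDisk = [math]::Floor($maxOioPerHost / ($workersPerHost * $disksPerWorker))
    "Configure each Iometer worker disk with $oioPerWorkerDisk outstanding I/Os"   # 16 here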

I/O Analyzer for Scale-Out Testing

I/O Analyzer allows for easy scale-out testing of a Virtual SAN cluster. One “controller” I/O Analyzer VM can schedule tests on up to 512 I/O Analyzer VMs (called workers) across up to 32 hosts. We recommend reading the I/O Analyzer Installation and User's Guide before deploying the appliance. A few items specific to Virtual SAN testing to note are:

  • For scale-out testing of large Virtual SAN clusters, configure the I/O Analyzer controller virtual machine with 2 vCPUs and 4 cores, and increase the memory to 32GB. We recommend placing the controller VM in a separate host or management cluster, apart from the Virtual SAN cluster that will house the I/O Analyzer worker VMs. Ideally this management cluster will also co-locate an out-of-band vCenter Server Appliance to run RVC and Virtual SAN Observer for performance analysis (an example invocation follows this list).
  • The I/O Analyzer User's Guide says to deploy the appliance template and choose Thick Provision Eager Zeroed, but this does not apply to Virtual SAN. Virtual SAN does not support eager-zeroed thick provisioning, as all objects are thin provisioned. Even when using the Object Space Reservation policy, the actual object is similar to a lazy-zeroed format, so there is no pre-allocation or warming of the bits.
  • Enable auto-login on the I/O Analyzer worker VM – When an I/O Analyzer controller or worker reboots, it cannot be used until a user logs in. To avoid this, configure auto-login on the initial worker before cloning out additional worker I/O Analyzer VMs.
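
For the performance-analysis side, Virtual SAN Observer is started from RVC on that out-of-band vCenter Server Appliance. A sketch of the documented invocation (the cluster name is environment-specific):

    # From an RVC session against the out-of-band vCenter:
    vsan.observer ~/computers/<cluster-name> --run-webserver --force
    # Then browse to https://<vcsa-hostname>:8010 for the live dashboards.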

Rating: 5/5