Sep 01

NSX Design Guide for Small Data Centers

Executive Summary

VMware NSX is a network virtualization technology that decouples networking services from the underlying physical infrastructure. VMware NSX enables a software-based approach to networking that provides the same operational model as virtual machines (VMs). Virtual networks can easily be created, modified, backed up, and deleted within minutes.

By providing physical networking constructs in software, VMware NSX delivers benefits similar to those server virtualization brought with VMs. Businesses can see the impact in increased efficiency, effective resource utilization, productivity, flexibility, agility, and cost savings.

Document Structure

This document presents an introduction to NSX, its business use cases, and an overview of NSX design in large and medium data centers. The beginning of the document serves as a refresher for those who are already familiar with NSX design and deployment.
The document then presents NSX for the small data center, its relevance, and the main building blocks of designing NSX in small data centers.

The document covers popular NSX deployment models in small data centers and gives details on protecting and designing around the individual NSX components, such as the NSX Edge Services Gateway (ESG) and the DLR Control VM.
Toward the end, the document discusses growth options for taking NSX further into medium- and large-scale deployments.

Introduction

NSX has emerged as the leading software platform to virtualize networks and networking services. Many customers have deployed NSX to run their production and non-production workloads and gain the benefits that come with virtual networks and software-defined networking. NSX has been deployed in data centers of all sizes, from small to medium to large, to enable a wide range of use cases.

There are situations where large enterprises have deployed NSX in small data center islands within an overall large environment. There are also situations where small and medium businesses (SMBs) deploy NSX with a small number of hosts to take advantage of network virtualization. Regardless of the size of the enterprise, a small data center deployment is a viable and relevant option for all types of customers, enterprises, and businesses.

The NSX Reference Design Guide discusses design aspects of deploying NSX in data centers of all sizes. This document uses the NSX Reference Design Guide as a baseline and provides additional and supporting guidance to successfully run NSX in SMB data centers. It is assumed that readers have gone through the concepts and design options discussed in the NSX Reference Design Guide.

In addition, readers are highly encouraged to review the Software Defined Data Center (SDDC) VMware Validated Design (VVD) guide, which provides the most comprehensive and extensively tested blueprint to build and operate an SDDC.

NSX Customer Use Cases

NSX has been widely accepted and deployed in production by many customers. Figure 1 lists some of the most important use cases that customers are deploying NSX for.


Figure 1. – NSX Use Cases

Security

NSX can be used to create a secure infrastructure with a zero-trust security model. Every virtualized workload can be protected with a fully stateful firewall engine at a very granular level. Security can be based on constructs such as MAC addresses, IP addresses, ports, vCenter objects, security tags, Active Directory groups, and more. Intelligent dynamic security grouping drives a self-adaptive security posture within the infrastructure.

Automation

VMware NSX provides a full RESTful API for consuming networking, security, and services, which can be used to drive automation within the infrastructure. In small data centers, automation tools such as the REST API and PowerNSX are useful for programmatically configuring network and security services, or for pulling information from VMware NSX deployments for simple operational tasks.
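As a brief illustration (the manager hostname and credentials below are hypothetical), a few lines of Python against the NSX-v REST API can pull the logical switch inventory; PowerNSX offers equivalent cmdlets for PowerShell users:

```python
import base64
import urllib.request

def logical_switch_url(manager: str) -> str:
    # NSX-v REST endpoint that lists logical switches ("virtual wires")
    return f"https://{manager}/api/2.0/vdn/virtualwires"

def list_logical_switches(manager: str, user: str, password: str) -> str:
    """Return the raw XML inventory of logical switches from NSX Manager."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        logical_switch_url(manager),
        headers={"Authorization": f"Basic {token}"},  # NSX Manager uses HTTP basic auth
    )
    # In a lab with self-signed certificates, pass an ssl.SSLContext that
    # disables verification to urlopen; never do that in production.
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()
```

The same endpoint, queried with any REST client, returns an XML document describing each logical switch, which scripts can parse for simple inventory and reporting tasks.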

Application Continuity

NSX provides a way to easily extend networking and security across up to eight vCenter Server instances, either within or across data centers. NSX can extend or stretch L2 and L3 networks across data centers in a distributed fashion. NSX also ensures that security policies are consistent across those stretched networks, providing a seamless, distributed, and available network and security overlay. All of this is done with software-based technologies, without requiring expensive hardware.

NSX for vSphere Components

vSphere is the foundation for an NSX for vSphere (referred to as NSX throughout this document) deployment. It is important to have a good understanding of which vSphere and NSX components are involved in the design. For a successful NSX deployment, it is imperative to have a sound vSphere deployment in place with proper vSphere clustering, compute, network, and storage. For detailed discussions of these topics, refer to the NSX Reference Design Guide.

Figure 2 shows the layers of the NSX for vSphere architecture based on the role performed by each NSX component. At a high level, the NSX solution architecture can be seen as divided into management, control, and data planes. In the traditional networking model, the control and data planes are combined.
NSX and other software-defined networking architectures follow an approach where the data plane is separated from the control plane. This approach decouples networking from hardware dependencies and allows all networking services to be virtualized following the same operational model that compute and storage virtualization have provided for years.


Figure 2. – NSX Layers for vSphere Architecture

NSX in Small Data Center Use-Cases

One must understand that a small data center (DC) does not mean NSX is only relevant for small customers. Many large enterprises start by deploying NSX with a small footprint or a small number of ESXi resources and then expand. This can happen for a number of reasons, for example budget, staffing, or simply a small initial scale. The advantage is that even when NSX is deployed with a small footprint, it can easily grow into a medium or large deployment.
On a broad scale, small data center use cases can be divided by the business functions and applications being deployed.

Functional Level Use Cases

Organizations deploy NSX with a small footprint in specific functional areas or groups. For instance:

  • Disaster recovery and/or avoidance
  • Pre-Prod vs Test environments
  • Compliance / DMZ
  • Business units with their own operational model
  • Etc.

Application Level Use Cases

Many customers deploy NSX in small DCs to tackle one or more application-level use cases. For instance:

  • VDI
  • Load Balancer
  • Agentless Antivirus (AV)
  • Etc.

NSX Advantage for Small Data Centers

Organizations adopt NSX not only for the technical strengths and advantages gained by deploying networking services in software; they also gain simplicity, ease of use, and operational flexibility. Some of these advantages are highlighted here.

Simplicity and Modularity

Small customers like NSX's simplicity and modularity: they have the peace of mind to grow and add features as they increase capacity or their user base. They do not need to purchase all their networking hardware upfront with many unknowns down the road. NSX gives these customers software-based networking services that they can spin up at any time without incurring additional hardware cost.

Procurement

Customers also value that all networking and security services are bundled in the same product and platform, so they do not need to deal with multiple vendors for purchasing, support agreements, or license procurement. With NSX, customers get everything under one roof.

Ease of Operations

The majority of customers are already familiar with the operational model vSphere has provided for years. NSX integrates seamlessly into that model, enhancing it and sitting naturally on top of it. The learning curve to adopt the new technology is therefore minimal.

Download NSX Design Guide for Small Data Centers.



Jun 13

Announcing the What’s New in vSphere 6.7 Whitepaper

By Adam Eckerle

With the recent announcement and general availability of vSphere 6.7, we’ve seen an immense amount of interest. With each new version of vSphere, we see customers start testing new releases earlier and earlier in the release cycle. vSphere 6.7 brings a number of important new features that vSphere administrators, as well as architects and business leaders, are excited about.

vSphere 6.7 focuses on simplifying management at scale, securing both infrastructure and workloads, being the universal platform for applications, and providing a seamless hybrid cloud experience. Features such as Enhanced Linked Mode with embedded Platform Services Controllers bring simplicity back to vCenter Server architecture. Support for TPM 2.0 and Virtualization Based Security provides organizations with a secure platform for both infrastructure and workloads. The addition of support for RDMA over Converged Ethernet v2 (RoCE v2), huge pages, suspend/resume for vGPU workloads, persistent memory, and native 4Kn disks shows that the hypervisor is not a commodity: vSphere 6.7 enables more functionality and better performance for more applications.

For those wanting a deep dive into the new features and functionality, I’m happy to announce the availability of the What’s New in vSphere 6.7 whitepaper. This paper is a consolidated resource that discusses and illustrates the key new features of vSphere 6.7 and their value to vSphere customers. The What’s New with vSphere 6.7 whitepaper can be found on the vSphere product page in the Resources section or can be downloaded directly here. After reading through this paper you should have a very good grasp on the key new features and how they will help your infrastructure and business.

Finally, we have a new collection of vSphere 6.7 resources on vSphere Central to make setting up and using these new features even easier. There are also some walkthroughs on upgrading. You can see all of the currently available resources on the vSphere 6.7 Technical Assets page.

Download What’s New in vSphere 6.7 Whitepaper.

About the Author

Adam Eckerle manages the vSphere Technical Marketing team in the Cloud Platform Business Unit at VMware. This team is responsible for vSphere launch, enablement, and ongoing content generation for the VMware field, Partners, and Customers. In addition, Adam’s team is also focused on preparing Customers and Partners for vSphere upgrades through workshops, VMUGs, and other events.



Apr 17

vSAN 6.7 Technical Overview

This video introduces vSAN, VMware’s software-defined, enterprise-class storage solution. vSAN powers industry-leading hyper-converged infrastructure solutions with a vSphere-native, high-performance architecture.

NOTE: This video is roughly 30 minutes in length so it would be worth blocking out some time to watch it!



Apr 17

What’s new with vSphere 6.7 Core Storage

By Jason Massae

vSphere 6.7 was announced today, and it includes several new features and enhancements that further advance storage functionality. Centralized, shared storage remains the most common storage architecture used with VMware installations despite the incredible adoption rate of HCI and vSAN. As such, VMware remains committed to the continued development of core storage and Virtual Volumes, and the vSphere 6.7 release truly shows it. Version 6.7 marks a major vSphere release, with many new capabilities to enhance the customer experience. From space reclamation to supporting Microsoft WSFC on VVols, this release is definitely feature rich! Below are summaries of what is included in vSphere 6.7; you can find more detail on each feature in the VMware storage and availability technical document repository, StorageHub.

Configurable Automatic UNMAP

Automatic UNMAP was introduced in vSphere 6.5 with a selectable priority of none or low. Storage vendors and customers have requested higher, configurable rates rather than the fixed 25 MBps. vSphere 6.7 adds a new method, “fixed,” which allows you to configure an automatic UNMAP rate between 100 MBps and 2000 MBps, configurable in both the UI and the CLI.
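As a small sketch (an illustration, not an actual vSphere API), the new fixed method amounts to choosing a reclaim rate inside a fixed window; a pre-flight check like the following mirrors the range validation the UI and CLI perform:

```python
# Bounds of the "fixed" automatic UNMAP method in vSphere 6.7 (MBps)
UNMAP_FIXED_MIN_MBPS = 100
UNMAP_FIXED_MAX_MBPS = 2000

def validate_unmap_rate(rate_mbps: int) -> int:
    """Reject reclaim rates outside the configurable fixed range."""
    if not UNMAP_FIXED_MIN_MBPS <= rate_mbps <= UNMAP_FIXED_MAX_MBPS:
        raise ValueError(
            f"fixed UNMAP rate must be between {UNMAP_FIXED_MIN_MBPS} and "
            f"{UNMAP_FIXED_MAX_MBPS} MBps, got {rate_mbps}"
        )
    return rate_mbps
```

For example, a requested rate of 500 MBps passes, while the old fixed 25 MBps would be rejected because it falls below the new method's floor.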


UNMAP for SESparse

SESparse is a sparse virtual disk format used for snapshots in vSphere and is the default on VMFS-6. This release adds automatic space reclamation for VMs with SESparse snapshots on VMFS-6. It works only while the VM is powered on and affects only the top-most snapshot.

Support for 4K native HDD

Customers may now deploy ESXi on servers with 4Kn HDDs used for local storage (SSD and NVMe drives are currently not supported). A software read-modify-write layer within the storage stack emulates 512B sector drives, and ESXi continues to expose 512B sector VMDKs to the guest OS. Servers with UEFI firmware can boot from 4Kn drives.

XCOPY enhancement

XCOPY is used to offload storage-intensive operations such as copying, cloning, and zeroing to the storage array instead of the ESXi host. With the release of vSphere 6.7, XCOPY will now work with specific vendor VAAI primitives and any vendor supporting the SCSI T10 standard. Additionally, XCOPY segments and transfer sizes are now configurable.


VVols enhancements

As VMware continues to develop Virtual Volumes, this release adds support for IPv6 and SCSI-3 persistent reservations. End-to-end IPv6 support enables organizations, including governments, to implement VVols using IPv6. SCSI-3 persistent reservations allow disks/volumes to be shared between virtual machines across nodes/hosts, a configuration often used for Microsoft WSFC clusters, and this enhancement allows for the removal of RDMs!

Increased maximum number of LUNs/Paths (1K/4K LUN/Path)

The maximum number of LUNs per host is now 1024 instead of 512, and the maximum number of paths per host is 4096 instead of 2048. Customers may now deploy virtual machines with up to 256 disks using PVSCSI adapters; each PVSCSI adapter can support up to 64 devices, which can be virtual disks or RDMs. A major change in 6.7 is the increased number of LUNs supported for Microsoft WSFC clusters: the number increased from 15 to 64 disks per adapter (PVSCSI only), which raises the number of LUNs available to a VM running Microsoft WSFC from 45 to 192.
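The arithmetic behind these maximums is straightforward; the helper below (an illustration, not a VMware API) reproduces the figures in the paragraph, assuming the 45 and 192 LUN counts come from three PVSCSI adapters carrying WSFC shared disks:

```python
def max_disks(adapters: int, devices_per_adapter: int) -> int:
    # Total devices a VM can address across its PVSCSI adapters
    return adapters * devices_per_adapter

# vSphere 6.7: up to 4 PVSCSI adapters x 64 devices each = 256 disks per VM
total_67 = max_disks(4, 64)

# WSFC shared disks across 3 adapters: 3 x 15 = 45 before, 3 x 64 = 192 now
wsfc_before = max_disks(3, 15)
wsfc_after = max_disks(3, 64)
```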

VMFS-3 EOL

Starting with vSphere 6.7, VMFS-3 is no longer supported. Any volume/datastore still using VMFS-3 will automatically be upgraded to VMFS-5 during the installation of or upgrade to vSphere 6.7. Any new volume/datastore created going forward will use VMFS-6 as the default.


Support for PMem/NVDIMMs

Persistent memory, or PMem, is a type of non-volatile DIMM (NVDIMM) that has the speed of DRAM but retains its contents through power cycles. It is a new layer that sits between NAND flash and DRAM, providing faster performance than flash while being non-volatile, unlike DRAM.


Intel VMD (Volume Management Device)

With vSphere 6.7, there is now native support for Intel VMD technology to enable the management of NVMe drives. This technology was introduced as an installable option in vSphere 6.5. Intel VMD currently enables hot-swap management as well as NVMe drive LED control, allowing control similar to that used for SAS and SATA drives.


RDMA (Remote Direct Memory Access) over Converged Ethernet (RoCE)

This release introduces RDMA support for ESXi hosts using RoCE v2. RDMA provides low-latency, higher-throughput interconnects with CPU offloads between the endpoints. If a host has RoCE-capable network adapters, this feature is automatically enabled.


Para-virtualized RDMA (PV-RDMA)

In this release, ESXi introduces PV-RDMA for Linux guest operating systems with RoCE v2 support. PV-RDMA enables customers to run RDMA-capable applications in virtualized environments. PV-RDMA-enabled VMs can also be live migrated.

iSER (iSCSI Extension for RDMA)

Customers may now deploy ESXi with external storage systems supporting iSER targets. iSER takes advantage of faster interconnects and CPU offload using RDMA over Converged Ethernet (RoCE). The iSER initiator function allows the ESXi storage stack to connect with iSER-capable target storage systems.

SW-FCoE (Software Fibre Channel over Ethernet)

In this release, ESXi introduces a software-based FCoE (SW-FCoE) initiator that can create FCoE connections over Ethernet controllers. The VMware FCoE initiator works on a lossless Ethernet fabric using Priority-based Flow Control (PFC). It can work in Fabric and VN2VN modes. Check the VMware Compatibility Guide (VCG) for supported NICs.


It is plain to see why vSphere 6.7 is such a major release, with so many new storage-related improvements and features. These are just the highlights; more detail can be found by heading over to StorageHub and reviewing the vSphere 6.7 Core Storage section.

Download vSphere 6.7 Core Storage.

About the Author

Jason is the Core Storage Technical Marketing Architect for the Storage and Availability Business Unit at VMware. Before joining VMware, he came from one of the largest flash and memory manufacturers in the world, where he architected and led global teams in virtualization strategies for IT. Working with the storage business unit, he also helped test and validate SSDs for VMware and vSAN. His primary focus is now core storage for vSphere and vSAN.



May 15

vSphere 6.5 Upgrade Considerations Part-1

Emad Younis posted May 15, 2017.
The release of vSphere 6.5 in November 2016 introduced many new features and enhancements. These include the vCenter Server Appliance (VCSA) becoming the default deployment and vCenter Server native high availability, which protects vCenter Server from application failure. Built-in file-based backup and restore allows customers to back up their vCenter Server from the VAMI or via API; a restore can be done simply by mounting the original ISO used to deploy the VCSA and selecting the restore option. These features and more are exclusive to the vCenter Server Appliance. The new HTML5 vSphere Client also makes its first official product debut in vSphere 6.5.

Did someone say security? We now have better visibility into vSphere changes with actionable logging. VM Encryption allows the encryption of a virtual machine, including its disks and snapshots. Secure Boot for ESXi ensures that only digitally signed code runs on the hypervisor, and Secure Boot for VMs is as simple as checking a box. We’ve only begun to scratch the surface of the new vSphere 6.5 features.


Product Education

As you start preparing for your vSphere 6.5 upgrade, a checklist will be the run book used to ensure its success. The upgrade process can be divided into three phases:

Phase 1: Pre-upgrade – all the upfront work that should be done before starting an upgrade.
Phase 2: Upgrade – mapping the steps of each component that will be upgraded.
Phase 3: Post-upgrade – validation to ensure everything went according to plan.

The first part of any successful upgrade is determining the benefits of the new features and the value add they will provide to your business. Next is getting familiar with these new features and how they will be implemented in your environment. The following list will get you started learning each of the new vSphere 6.5 features and their benefits.

Another way to get familiar with the new features and the upgrade process is the hands-on approach in a lab environment. If you have a lab at your disposal, build it as close to your production environment as possible to simulate both the upgrade process and new feature implementation. If a lab environment is not available, there are options like VMware Workstation or Fusion if you have the resources to run them. Last, but not least, there are the Hands-on Labs, which do not require any resources and provide a guided approach. No matter which option you select, the key is getting familiar and comfortable with the upgrade process.

Health Assessment

Doing a health assessment of your current environment is critical. Nothing is worse than being in the middle of an upgrade and having to spend hours troubleshooting an issue only to find it was related to a misconfiguration of something as simple as DNS or NTP. Another advantage of a health assessment is discovering wasted resources, for example virtual machines that are no longer needed but have yet to be decommissioned. The health assessment should cover all components (compute, storage, network, 3rd party) that interact with your vSphere environment. Please consult with your compute, storage, and network vendors for health assessment best practices and tools. Environmental issues are high on the list of upgrade show stoppers; the good news is that they can be prevented.

There are also VMware and community tools that can help by providing reports on your current environment. Most of these tools come with a 60-day evaluation period, which is enough time to get the information needed. When using community tools please keep in mind they are not officially supported by VMware. Finally, there is also the VMware vSphere health check done by a certified member of VMware’s professional services team. Check with your VMware representative for more information.

Conducting the health assessment could lead to discovering an issue that requires the help of support and opening a ticket. Do not proceed with the upgrade until all open support tickets have been resolved. There are instances where an issue can be fixed by applying a patch or an update, but make sure that any environmental problems have completely been resolved prior to proceeding. This not only includes VMware support tickets, but also compute, storage, network, and 3rd party that interact with your vSphere environment.

Important Documents

Now that we’ve learned about the features and completed a health assessment of our current vSphere environment, it’s time to start mapping out the upgrade process. The first step is looking at important documents like the vSphere 6.5 documentation, product release notes, knowledge base articles, and guides. Each of these documents has pieces of information vital to ensuring a successful upgrade. Product release notes, for example, provide not only what’s new but also information about upgrades and any known issues. Reading the vSphere 6.5 upgrade guide will give you an understanding of the upgrade process. The VMware Compatibility Guide and Product Interoperability Matrices will ensure components and upgrade paths are supported. Here is a breakdown of the important vSphere 6.5 documentation that should be reviewed prior to upgrading.

vSphere 6.5 Documents

Product Release Notes

Knowledge Base Articles

Guides

Documentation

Upgrades need to be done with a holistic view, from the hardware layer all the way to the application layer. With this philosophy in mind, a successful upgrade requires advance prep work to avoid potential roadblocks. Health assessments shouldn’t be done only when preparing for an upgrade, but routinely; think of it as a doctor’s visit for your environment and getting a clean bill of health. vSphere 6.5 has now been out for six months, and four patches are available providing bug fixes and product updates. The HTML5 vSphere Client gained features in vSphere 6.5.0 patch b, and vSAN Easy Install arrived in 6.5.0 patch d. This agile release of patches means customers no longer need to wait for the first update to consider upgrading to vSphere 6.5. The next few blog posts in this series will cover mapping out the upgrade process whiteboard style, architecture considerations for the vSphere Single Sign-On domain, migration, and upgrade paths.

At this point it is worth noting that the vSphere upgrade process can seem complex if not overwhelming, especially for our customers who use other tools that depend on vSphere and vCenter Server. We hear you. VMware is certainly working to make this better. I hope to be able to write about those improvements in the future. Until then you have upgrade homework to do!

About the Author

Emad Younis is a Staff Technical Marketing Architect and VCIX 6.5-DCV working in the Cloud Platform Business Unit, part of the R&D organization at VMware. He currently focuses on the vCenter Server Appliance, vCenter Server Migrations, and VMware Cloud on AWS. His responsibilities include generating content, evangelism, collecting product feedback, and presenting at events. Emad can be found blogging on emadyounis.com or on Twitter via @emad_younis.



Mar 11

DRS Performance in VMware vSphere 6.5

Introduction

VMware vSphere® Distributed Resource Scheduler™ (DRS) is more than a decade old and is constantly innovating with every new version. In vSphere 6.5, DRS comes with many new features and performance improvements to ensure more efficient load balancing and VM placement, faster response times, and simplified cluster management.
In this paper, we cover some of the key features and performance improvements to highlight the more efficient, faster, and lighter DRS in vSphere 6.5.

New Features

Predictive DRS
Historically, vSphere DRS has been reactive: it reacts to changes in VM workloads and migrates VMs to distribute load across hosts. In vSphere 6.5, with VMware vCenter Server® working together with VMware vRealize® Operations™ (vROps), DRS can act on predicted future changes in workloads. This helps DRS migrate VMs proactively and make room in the cluster to accommodate future workload demand.
For example, if your VMs’ workload spikes at 9 a.m. every day, predictive DRS can detect this pattern beforehand based on historical data from vROps and prepare the cluster resources using either of the following techniques:

  • Migrating the VMs to different hosts to accommodate the future workload and avoid host over-commitment
  • Bringing a host out of standby mode using VMware vSphere® Distributed Power Management™ (DPM) to accommodate the future demand

How It Works

To enable predictive DRS, you need to link vCenter Server to a vROps instance (that supports predictive DRS), which monitors the resource usage pattern of VMs and generates predictions. Once vROps starts monitoring VM workloads, it generates predictions after a specified learning period. The generated predictions are then provided to vCenter Server for DRS to consume.

Once the VMs’ workload predictions are available, DRS evaluates the demand of a VM based on its current resource usage and predicted future resource usage.

    Demand of a VM = Max (current usage, predicted future usage)

Considering the maximum of current and future resource usage ensures that DRS does not clip any VM’s current demand in favor of its future demand. For VMs that do not have predictions, DRS computes resource demand based only on the current resource usage.
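The demand rule above can be written directly in code; this is a sketch of the formula, not DRS’s actual implementation:

```python
from typing import Optional

def vm_demand(current_usage: float, predicted_usage: Optional[float]) -> float:
    """Effective DRS demand for a VM: the max of current and predicted usage.
    VMs without a prediction fall back to current usage alone."""
    if predicted_usage is None:
        return current_usage
    return max(current_usage, predicted_usage)
```

Taking the maximum means a VM currently using 4 GHz with a 1 GHz prediction is still sized at 4 GHz, so current demand is never clipped in favor of the forecast.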

Look Ahead Interval

The predictions that DRS gets from vROps always cover a certain period of time starting from the current time. This period is known as the “look-ahead interval” for predictive DRS. By default it is 60 minutes, which means the predictions always cover the next hour. So if a sudden spike is going to happen within the next hour, predictive DRS will detect it and prepare the cluster to handle it.

Network-Aware DRS

Traditionally, DRS has always considered the compute resource (CPU and memory) utilizations of hosts and VMs for balancing load across hosts and placing VMs during power-on. This generally works well because in many cases, CPU and memory are the most important resources needed for good application performance.
However, since network utilization is not considered in this approach, it sometimes results in placing or migrating a VM to a host that is already network saturated. This may impact application performance if the application is network sensitive.
In vSphere 6.5, DRS is network-aware: it now considers host network utilization and the network usage requirements of VMs during initial placement and load balancing. This makes DRS load balancing and initial placement of VMs more effective.

How It Works

During initial placement and load balancing, DRS first comes up with a list of the best possible hosts to run a VM based on compute resources, and then uses heuristics to decide the final host based on VM and host network utilization. This makes sure the VM gets the network resources it needs along with the compute resources.

The goal of network-aware DRS in vSphere 6.5 is only to make sure the host has sufficient network resources available along with compute resources required by the VM. So, unlike regular DRS, which balances the CPU and memory load, network-aware DRS does not balance the network load in the cluster, which means it will not trigger a vMotion when there is network load imbalance.
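A minimal sketch of that two-step heuristic (the 80% saturation threshold and the data shape are assumptions for illustration, not DRS internals): rank hosts on compute first, then pick the first candidate that is not network saturated.

```python
from typing import List, Tuple

def pick_host(ranked_hosts: List[Tuple[str, float]],
              saturation_pct: float = 80.0) -> str:
    """ranked_hosts: (host name, network utilization %) pairs, best compute
    fit first. Prefer the best compute candidate whose network utilization is
    below the saturation threshold; if every candidate is saturated, fall back
    to the compute-best host (network load is not balanced, only avoided)."""
    for name, net_util in ranked_hosts:
        if net_util < saturation_pct:
            return name
    return ranked_hosts[0][0]
```

For example, `pick_host([("esx-01", 92.0), ("esx-02", 35.0)])` skips the network-saturated esx-01 and places the VM on esx-02, even though esx-01 was the better compute fit.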

Download

Download the full DRS Performance in VMware vSphere 6.5 study guide.



Apr 10

VMware VSAN 6.2 for ESXi 6.0 with Horizon View Technical Whitepaper

Executive summary

VMware Virtual SAN is a software-defined storage solution from VMware that allows you to create a clustered datastore from the storage (SSDs and HDDs, or all-flash using SSDs and PCIe SSDs) present in the ESXi hosts. The Virtual SAN solution simplifies storage management through object-based storage and fully supports vSphere enterprise features such as HA, DRS, and vMotion. A Virtual SAN storage cluster must be made up of at least three ESXi servers. VMware Virtual SAN is built into the ESXi 6.0 hypervisor and can be used with ESXi hosts configured with PERC RAID controllers.

To use Virtual SAN in a hybrid configuration, which is the context of this document, you need at least one SSD and one HDD in each server participating in the Virtual SAN cluster; it is important to note that the SSD does not contribute to storage capacity. The SSDs are used for read caching and write buffering, whereas the HDDs provide persistent storage. Virtual SAN is highly available, as it is based on a distributed, object-based RAIN (redundant array of independent nodes) architecture. Virtual SAN is fully integrated with vSphere. It aims to simplify storage placement decisions for vSphere administrators, and its goal is to provide both high availability and scale-out storage functionality.
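Those two sizing rules (a minimum of three hosts, and only HDDs counting toward raw capacity in hybrid mode) can be sketched as follows; the data shape is a hypothetical illustration, not a VMware API:

```python
from typing import Dict, List

def hybrid_raw_capacity_gb(hosts: List[Dict[str, List[int]]]) -> int:
    """Raw capacity of a hybrid Virtual SAN cluster. Each host dict lists its
    drive sizes in GB, e.g. {"ssd_gb": [400], "hdd_gb": [1200, 1200]}.
    SSDs serve as read cache / write buffer and add no capacity."""
    if len(hosts) < 3:
        raise ValueError("a Virtual SAN cluster requires at least three ESXi hosts")
    return sum(sum(host["hdd_gb"]) for host in hosts)
```

For a three-host cluster where each host carries one 400 GB SSD and two 1200 GB HDDs, only the six HDDs contribute, giving 7200 GB of raw capacity before any failures-to-tolerate overhead.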

Download

Download the full VMware VSAN 6.2 for ESXi 6.0 with Horizon View Technical Whitepaper.
