Apr 17

What’s new with vSphere 6.7 Core Storage

By Jason Massae

Announced today, vSphere 6.7 includes several new features and enhancements that further advance storage functionality. Centralized, shared storage remains the most common storage architecture used with VMware installations despite the incredible adoption rate of HCI and vSAN. As such, VMware remains committed to the continued development of core storage and Virtual Volumes, and the release of vSphere 6.7 truly shows it. Version 6.7 marks a major vSphere release, with many new capabilities to enhance the customer experience. From space reclamation to supporting Microsoft WSFC on VVols, this release is definitely feature rich! Below are summaries of what is included in vSphere 6.7; you can find more detail on each feature on the VMware storage and availability technical document repository: StorageHub.

Configurable Automatic UNMAP

Automatic UNMAP was released with vSphere 6.5 with a selectable priority of none or low. Storage vendors and customers have requested higher, configurable rates rather than the fixed 25 MBps. With vSphere 6.7 we have added a new method, "fixed," which allows you to configure an automatic UNMAP rate between 100 MBps and 2000 MBps, configurable in both the UI and CLI.
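
For illustration, a minimal CLI sketch of checking and changing the reclaim rate on a datastore is below. The datastore name is a placeholder, and option names should be verified against your build's esxcli help:

    # Show the current space-reclamation settings for a datastore
    esxcli storage vmfs reclaim config get --volume-label=Datastore01

    # Switch to the new "fixed" method with a 500 MBps reclaim rate
    esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-method=fixed --reclaim-bandwidth=500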

UNMAP for SESparse

SESparse is a sparse virtual disk format used for snapshots in vSphere and the default on VMFS-6. In this release, we are providing automatic space reclamation for VMs with SESparse snapshots on VMFS-6. This only works when the VM is powered on and only affects the top-most snapshot.

Support for 4K native HDD

Customers may now deploy ESXi on servers with 4Kn HDDs used for local storage (SSD and NVMe drives are currently not supported). We are providing a software read-modify-write layer within the storage stack that allows the emulation of 512B sector drives. ESXi continues to expose 512B sector VMDKs to the guest OS. Servers with UEFI BIOS can boot from 4Kn drives.

XCOPY enhancement

XCOPY is used to offload storage-intensive operations such as copying, cloning, and zeroing to the storage array instead of the ESXi host. With the release of vSphere 6.7, XCOPY will now work with specific vendor VAAI primitives as well as with any vendor supporting the SCSI T10 standard. Additionally, XCOPY segments and transfer sizes are now configurable.
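
As a sketch of the kind of tuning involved, the long-standing advanced option below controls the XCOPY transfer size; the 16 MB value is purely an example and should follow your array vendor's guidance:

    # Show the current XCOPY transfer size (value in KB)
    esxcli system settings advanced list -o /DataMover/MaxHWTransferSize

    # Example only: raise the transfer size to 16 MB for arrays that support larger segments
    esxcli system settings advanced set -o /DataMover/MaxHWTransferSize -i 16384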

VVols enhancements

As VMware continues the development of Virtual Volumes, this release adds support for IPv6 and SCSI-3 persistent reservations. End-to-end IPv6 support enables organizations, including government agencies, to implement VVols using IPv6. SCSI-3 persistent reservations are a substantial feature, allowing disks/volumes to be shared between virtual machines across nodes/hosts. Because such shared disks are commonly used for Microsoft WSFC clusters, this enhancement allows for the removal of RDMs!

Increased maximum number of LUNs/Paths (1K/4K LUN/Path)

The maximum number of LUNs per host is now 1024 instead of 512, and the maximum number of paths per host is 4096 instead of 2048. Customers may now deploy virtual machines with up to 256 disks using PVSCSI adapters, as each of a VM's four PVSCSI adapters can support up to 64 devices (4 x 64 = 256). Devices can be virtual disks or RDMs. A major change in 6.7 is the increased number of LUNs supported for Microsoft WSFC clusters: the number increased from 15 to 64 disks per adapter, PVSCSI only. This raises the number of LUNs available to a VM running Microsoft WSFC from 45 to 192 (three adapters of shared disks at 64 devices each).

VMFS-3 EOL

Starting with vSphere 6.7, VMFS-3 will no longer be supported. Any volume/datastore still using VMFS-3 will automatically be upgraded to VMFS-5 during the installation or upgrade to vSphere 6.7. Any new volume/datastore created going forward will use VMFS-6 as the default.
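
For hosts still carrying VMFS-3 volumes, one way to upgrade in place ahead of the move to 6.7 is sketched below (the volume label is a placeholder; note that an in-place upgrade yields VMFS-5, not VMFS-6, so creating a fresh VMFS-6 datastore and using Storage vMotion is often preferred):

    # Upgrade a VMFS-3 volume to VMFS-5 in place
    esxcli storage vmfs upgrade --volume-label=OldDatastore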

Support for PMem/NVDIMMs

Persistent Memory, or PMem, is a type of non-volatile DRAM (NVDIMM) that has the speed of DRAM but retains its contents through power cycles. It is a new layer that sits between NAND flash and DRAM, providing faster performance while remaining non-volatile, unlike DRAM.

Intel VMD (Volume Management Device)

With vSphere 6.7, there is now native support for Intel VMD technology to enable the management of NVMe drives. This technology was introduced as an installable option in vSphere 6.5. Intel VMD currently enables hot-swap management as well as NVMe drive LED control, allowing control similar to that used for SAS and SATA drives.

RDMA (Remote Direct Memory Access) over Converged Ethernet (RoCE)

This release introduces RDMA support for ESXi hosts using RoCE v2. RDMA provides low-latency, higher-throughput interconnects with CPU offloads between the endpoints. If a host has RoCE-capable network adapter(s), this feature is automatically enabled.
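
To see whether a host has RoCE-capable adapters, the RDMA namespace in esxcli can be queried; a quick sketch (the device name is a placeholder):

    # List RDMA-capable devices detected by the host
    esxcli rdma device list

    # Inspect statistics for a specific RDMA device
    esxcli rdma device stats get -d vmrdma0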

Para-virtualized RDMA (PV-RDMA)

In this release, ESXi introduces PV-RDMA for Linux guest operating systems with RoCE v2 support. PV-RDMA enables customers to run RDMA-capable applications in virtualized environments. PV-RDMA-enabled VMs can also be live migrated.
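
PVRDMA NICs are normally added through the vSphere Client, but the underlying VM configuration reduces to an adapter type; an illustrative .vmx fragment is below (the device index and port group name are assumptions, and VM hardware version 13 or later plus a distributed switch are required):

    # Illustrative .vmx entries for a PVRDMA NIC
    ethernet1.virtualDev = "vrdma"
    ethernet1.networkName = "RDMA-PortGroup"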

iSER (iSCSI Extension for RDMA)

Customers may now deploy ESXi with external storage systems supporting iSER targets. iSER takes advantage of faster interconnects and CPU offload using RDMA over Converged Ethernet (RoCE). We are providing the iSER initiator function, which allows the ESXi storage stack to connect with iSER-capable target storage systems.
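
Enabling the initiator is a one-time host-side step; a minimal sketch follows (bind the resulting adapter to an RDMA-capable vmknic per your storage vendor's guidance):

    # Enable the software iSER initiator; this creates a new storage adapter
    esxcli rdma iser add

    # Confirm the iSER adapter now appears alongside the other HBAs
    esxcli storage core adapter list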

SW-FCoE (Software Fibre Channel over Ethernet)

In this release, ESXi introduces a software-based FCoE (SW-FCoE) initiator that can create FCoE connections over Ethernet controllers. The VMware FCoE initiator works on a lossless Ethernet fabric using Priority-based Flow Control (PFC), and it can operate in Fabric and VN2VN modes. Please check the VMware Compatibility Guide (VCG) for supported NICs.
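
A minimal sketch of activating the software initiator on a supported NIC is below (the vmnic name is a placeholder):

    # List NICs eligible for software FCoE
    esxcli fcoe nic list

    # Activate FCoE on a specific NIC and discover fabric targets
    esxcli fcoe nic discover -n vmnic4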

It is plain to see why vSphere 6.7 is such a major release, with so many new storage-related improvements and features. These are just the highlights; more detail on each can be found by heading over to StorageHub and reviewing the vSphere 6.7 Core Storage section.

Download vSphere 6.7 Core Storage.

About the Author

Jason is the Core Storage Technical Marketing Architect for the Storage and Availability Business Unit at VMware. Before joining VMware, he came from one of the largest flash and memory manufacturers in the world, where he architected and led global teams in virtualization strategies for IT. Working with the storage business unit there, he also helped test and validate SSDs for VMware and vSAN. Now his primary focus is core storage for vSphere and vSAN.


Jun 11

NSX-v 6.2.x – Security Hardening Guide

Created by RobertoMari on Oct 12, 2014 5:22 PM. Last modified by vwade on Jun 10, 2016 2:40 PM

VMware NSX Hardening Guide Authors: Pravin Goyal, Greg Christopher, Michael Haines, Roberto Mari, Kausum Kumar, Wade Holmes

This is the Version 1.6 of the VMware® NSX for vSphere Hardening Guide.

This guide provides prescriptive guidance for customers on how to deploy and operate VMware® NSX in a secure manner.

Acknowledgements to the following contributors for reviewing and providing feedback to various sections of the document: Kausum Kumar, Roberto Mari, Scott Lowe, Ben Lin, Bob Motanagh, Dmitri Kalintsev, Greg Frascadore, Hadar Freehling, Kiran Kumar Thota, Pierre Ernst, Rob Randell, Roie Ben Haim, Yves Fauser

The guide is provided in an easy-to-consume spreadsheet format with rich metadata (similar to the existing VMware vSphere Hardening Guides) to allow for guideline classification and risk assessment.

Feedback and comments for the authors and the NSX solution team can be posted as comments on this community post (note: users must log in to VMware Communities before posting a comment).

Download

Download the full NSX-v Security Hardening Guide


Jun 10

VMware vCenter Server™ 6.0 Deployment Guide

Introduction

The VMware vCenter Server™ 6.0 release introduces new, simplified deployment models. The components that make up a vCenter Server installation have been grouped into two types: embedded and external. Embedded refers to a deployment in which all components—this can but does not necessarily include the database—are installed on the same virtual machine. External refers to a deployment in which vCenter Server is installed on one virtual machine and the Platform Services Controller (PSC) is installed on another. The Platform Services Controller is new to vCenter Server 6.0 and comprises VMware vCenter™ Single Sign-On™, licensing, and the VMware Certificate Authority (VMCA).

Embedded installations are recommended for standalone environments in which there is only one vCenter Server system and replication to another Platform Services Controller is not required. If there is a need to replicate with other Platform Services Controllers or there is more than one vCenter Single Sign-On enabled solution, deploying the Platform Services Controller(s) on separate virtual machine(s)—via external deployment—from vCenter Server is required.

This paper defines the services installed as part of each deployment model, recommended deployment models (reference architectures), installation and upgrade instructions for each reference architecture, post-deployment steps, and certificate management in VMware vSphere 6.0.

VMware vCenter Server 6.0 Services

Figure 1 – vCenter Server and Platform Services Controller Services

Requirements

General
A few requirements are common to both installing vCenter Server on Microsoft Windows and deploying VMware vCenter Server Appliance™. Ensure that all of these prerequisites are in place before proceeding with a new installation or an upgrade.

  • DNS – Ensure that resolution is working for all system names via fully qualified domain name (FQDN), short name (host name), and IP address (reverse lookup); a quick spot-check is sketched after this list.
  • Time – Ensure that time is synchronized across the environment.
  • Passwords – vCenter Single Sign-On passwords must contain only ASCII characters; non-ASCII and extended (or high) ASCII characters are not supported.
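
A quick way to spot-check these prerequisites, with placeholder names and addresses, might look like the following (run nslookup from any machine that uses the same DNS servers; w32tm applies to Windows-based deployments):

    # Verify forward and reverse name resolution for every system
    nslookup vcenter01.example.com
    nslookup 10.0.0.50

    # On Windows, confirm the time service is synchronized
    w32tm /query /status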

Windows Installation

Installing vCenter Server 6.0 on a Windows server requires a Windows 2008 SP2 or higher 64-bit operating system (OS). Two options are presented: use the local system account or use a Windows domain account. With a Windows domain account, ensure that it is a member of the local computer's Administrators group and that it has been delegated the "Log on as a service" right and the "Act as part of the operating system" right. This option is not available when installing an external Platform Services Controller.

Windows installations can use either a supported external database or a local PostgreSQL database that is installed with vCenter Server and is limited to 20 hosts and 200 virtual machines. Supported external databases include Microsoft SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, Oracle Database 11g, and Oracle Database 12c. When upgrading to vCenter Server 6.0, if SQL Server Express was used in the previous installation, it will be replaced with PostgreSQL. External databases require a 64-bit DSN. DSN aliases are not supported.

When upgrading to vCenter Server 6.0, only versions 5.0 and later are supported. If the vCenter Server system being upgraded is older than version 5.0, it must first be upgraded to version 5.0 or later.

Table 2 outlines minimum hardware requirements per deployment environment type and size when using an external database. If VMware vSphere Update Manager™ is installed on the same server, add 125GB of disk space and 4GB of RAM.

Table 2. Minimum Hardware Requirements – Windows Installation

Download

Download the full VMware vCenter Server™ 6.0 Deployment Guide


Jun 07

NSX F5 Design Guide

Created by ddesmidt on May 7, 2015 1:19 PM. Last modified by ddesmidt on May 7, 2015 1:29 PM.
Version 2

Intended Audience

The intended audience for this document includes virtualization and network architects seeking to deploy VMware® NSX™ for vSphere® in combination with F5® BIG-IP® Local Traffic Manager™ devices.
Note: A solid understanding of both NSX-v and F5 BIG-IP LTM, based on hands-on experience, is a prerequisite to successfully understanding this design guide.

NSX deployments can today be coupled with F5 BIG-IP appliances or the Virtual Edition.
Such a deployment gives NSX customers a flexible, powerful, and agile infrastructure with the richness of F5 ADC services.
Note: F5 deployment and configuration are done from F5.

Overview

The Software-Defined Data Center is defined by server virtualization, storage virtualization, and network virtualization. Server virtualization has already proved the value of SDDC architectures in reducing the cost and complexity of compute infrastructure. VMware NSX network virtualization provides the third critical pillar of the SDDC, extending the same benefits to the data center network to accelerate network service provisioning, simplify network operations, and improve network economics.

VMware NSX-v is the leading network virtualization solution in the market today and is being deployed across all vertical markets and market segments. NSX reproduces L2-L7 networking and security services, including L2 switching, L3 routing, firewalling, load balancing, and IPsec/VPN secure access, completely in software, and allows programmatic provisioning and management of these services. More information about these functions is available in the NSX Design Guide.

F5 BIG-IP is the leading application delivery controller in the market today. The BIG-IP product family provides Software-Defined Application Services™ (SDAS) designed to improve the performance, reliability and security of mission-critical applications. BIG-IP is available in a variety of form factors, ranging from ASIC-based physical appliances to vSphere-based virtual appliances. NSX deployments can be coupled with F5 BIG-IP appliances or Virtual Edition form factors.

Furthermore, F5 offers a centralized management and orchestration platform called BIG-IQ.
By deploying BIG-IP and NSX together, organizations are able to achieve service provisioning automation and agility enabled by the SDDC combined with the richness of the F5 application delivery services they have come to expect.
This design guide provides recommended practices and topologies to optimize interoperability between the NSX platform and F5 BIG-IP physical and virtual appliances. This interoperability design guide is intended for customers who would like to adopt the SDDC while ensuring compatibility and minimal disruption to their existing BIG-IP environment. The recommended practice guide will provide step-by-step guidance to implement the topologies outlined in this document.

NSX/F5 Topology Options

“BIG-IP Form Factor” / “NSX overlay or not” / “BIG-IP placement” Relationships

There are roughly 20 possible topologies for connecting BIG-IP to an NSX environment, but this design guide will focus on the three that best represent the form factor, connection method, and logical topology combinations. In addition, the design guide will highlight the pros and cons of each of the three topologies.

The following figure describes the relationship of:

  • BIG-IP form factor:
    o BIG-IP Virtual Edition (“VE”)
    o BIG-IP physical appliance
  • With NSX overlay/without NSX overlay:
    o VXLAN
    o non-VXLAN (VLAN tagged or untagged)
  • BIG-IP placement:
    o BIG-IP parallel to NSX Edge
    o BIG-IP parallel to DLR
    o BIG-IP One-Arm connected to server network(s)
    o BIG-IP on top of NSX Edge
    o BIG-IP on top of NSX DLR

Figure 1 – “BIG-IP Form Factor” / “NSX overlay or not” / “BIG-IP placement” Relationships

Download NSX F5 Design Guide v1.6


May 10

Virtual SAN Hardware Quick Reference Guide

Overview

The purpose of this document is to provide sample server configurations as directional guidelines for use with VMware® Virtual SAN™. Use these guidelines as your first step toward determining the configuration for Virtual SAN.

How to use this document

1. Determine your workload profile requirement for your use case.
2. Refer to Ready Node profiles to determine the approximate configuration that meets your needs.
3. Use the VSAN Hardware Compatibility Guide to pick a Ready Node aligned with the selected profile from the OEM server vendor of your choice.

Additional Resources

For more detail on Virtual SAN design guidance, see:
1. Virtual SAN Ready Node Configurator
2. Virtual SAN Hardware Guidance
3. VMware® Virtual SAN™ 6.0 Design and Sizing Guide.
4. Virtual SAN Sizing Calculator
5. VSAN Assessment Tool

Download

Download the full Virtual SAN Hardware Quick Reference Guide technical white paper.


Apr 23

NSX-v Operations Guide

Purpose

This guide shows how to perform day-to-day management of an NSX for vSphere (“NSX-v”) deployment. This information can be used to help plan and carry out operational monitoring and management of your NSX-v implementation.
To monitor physical network operations, administrators have traditionally collected various types of data from the devices that provide network connectivity and services. Broadly the data can be categorized as:

    ■ Statistics and events
    ■ Flow level data
    ■ Packet level data

Monitoring and troubleshooting tools use the above types of data and help administrators manage and operate networks. Collectively, these types of information are referred to as “network and performance monitoring and diagnostics” (NPMD) data. The diagram below summarizes the types of NPMD data and the tools that consume this information.

NPMD data diagram

The tools used for monitoring physical networks can be used to monitor virtual networks as well. Using standard protocols, the NSX platform provides network monitoring data similar to that provided by physical devices, giving administrators a clear view of virtual network conditions.
In this document, we’ll describe how an administrator can monitor and retrieve network statistics, network flow information, packet information, and NSX system events.

Audience

This document is intended for those involved in the configuration, maintenance, and administration of VMware NSX-v. The intended audience includes the following business roles:

    – Architects and planners responsible for driving architecture-level decisions.
    – Security decision makers responsible for business continuity planning.
    – Consultants, partners, and IT personnel who need the knowledge to deploy the solution.

This guide is written with the assumption that the administrator using these procedures is familiar with VMware vSphere and NSX-v, and we assume the reader has a strong networking background. For detailed explanations of NSX-v concepts and terminology, please refer to the NSX for vSphere documentation website.

Scope

This guide covers NSX-v and its integration with core VMware technologies such as vSphere and the vSphere Distributed Switch (vDS). It does not attempt to cover architectural design decisions or installation. Also, while there are third-party integrations and extensive APIs available to programmatically manage NSX, this document does not focus on APIs or third-party integration, including other VMware products. We do mention specific APIs when they offer a recommended or efficient method for configuring NSX, and when there is no direct UI function available to perform the desired action.

Download

Download the full NSX-v Operations Guide, rev 1.5

Rating: 5/5