PowerCLI for ESXi Configurations

#Do not attempt on production servers

>>Host Networking vSS to vDS

$hostlist = gc .\Hosts.txt
Add-VDSwitchVMHost -VDSwitch XXXX -VMHost $hostlist

#Move a physical uplink from the vSS to the vDS
$getpnic = Get-VMHostNetworkAdapter -VMHost $hostlist -Physical -Name vmnic0
$getvss = Get-VirtualSwitch -Standard -VMHost $hostlist
$getvds = Get-VDSwitch -Name XXXX
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $getvds -VMHostPhysicalNic $getpnic

#Set MTU 9000 on the vDS, the vSS and the VMkernel adapter
Set-VDSwitch $getvds -Mtu 9000
Get-VMHost $hostlist | Get-VirtualSwitch -Name vSwitch0 | Set-VirtualSwitch -Mtu 9000
$MTUPG = Get-VMHostNetworkAdapter -Name vmk1 -VMHost $hostlist
Set-VMHostNetworkAdapter -VirtualNic $MTUPG -Mtu 9000

#Migrate the management VMkernel adapter to the vDS port group
$dvportgroup = Get-VDPortgroup -Name $mgmt_portgroup -VDSwitch $getvds
Set-VMHostNetworkAdapter -PortGroup $dvportgroup -VirtualNic $MTUPG
$pnics = Get-VMHostNetworkAdapter -VMHost $hostlist -Physical


$myserver= Connect-VIServer -Server xxxxxx
Add-VMHost -Server $myserver -Name xxxx -Location xxxx -User xxx -Password xxxx -Force
$vmhost= Get-VMHost -name xxx
Set-VMHost -VMHost $vmhost -State Maintenance
Set-VMHost -VMHost $vmhost -LicenseKey xxxxx-xxxxx-xxxxx
Add-VMHostNtpServer -NtpServer "xxxxx" -VMHost $vmhost
$myvirtualswitch = New-VirtualSwitch -VMHost $vmhost -Name xxxx
$myvirtualportgroup1 = New-VirtualPortGroup -VirtualSwitch $myvirtualswitch -Name xxxx
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $myvirtualswitch -PortGroup $myvirtualportgroup1 -VMotionEnabled $true
New-Datastore -Nfs -VMHost $vmhost -Name xxxx -Path /xxx/ -NfsHost xxxx
Set-VMHost -VMHost $vmhost -State Connected

Import-Module VMware.VimAutomation.Vds
Connect-VIServer XXXX -User XXXX -Password XXXX
$vswitch = Get-VDSwitch -Name vSwitch
Set-VDSwitch $vswitch -Mtu 9000

$hosts=gc C:\scripts\Mine\Set-MTU-Switch\Hosts.txt

$gethosts = Get-VMHost $hosts
$vmkernel = Get-VMHostNetworkAdapter -Name vmk1 -VMHost $gethosts
Set-VMHostNetworkAdapter -VirtualNic $vmkernel -Mtu 9000

Get-VirtualSwitch -VMHost $ESXhost -Name vSwitch | Set-VirtualSwitch -Mtu 1500

Script to Configure the LAG:

#Create LacpGroupConfig object - the following lines set the properties required by the LAG
$lacpgroup = New-Object VMware.Vim.VMwareDvsLacpGroupConfig
$lacpgroup.Key = "1234"
$lacpgroup.Name = "lag1"
$lacpgroup.Mode = "active"
$lacpgroup.UplinkNum = 2
$lacpgroup.LoadbalanceAlgorithm = "srcDestIpTcpUdpPortVlan"
$lacpgroup.Vlan = New-Object VMware.Vim.VMwareDvsLagVlanConfig
$lacpgroup.Ipfix = New-Object VMware.Vim.VMwareDvsLagIpfixConfig
$lacpgroup.UplinkName = @("lag1-1", "lag1-2")
#Create LacpGroupSpec and add the lacpgroup into the config
$lacpconfig = New-Object VMware.Vim.VMwareDvsLacpGroupSpec
$lacpconfig.LacpGroupConfig = $lacpgroup
$lacpconfig.Operation = "add"
#Apply the new LAG on the target switch through the vSphere API
$target = Get-VDSwitch "DVSSWitch01" | Get-View
$target.UpdateDVSLacpGroupConfig(@($lacpconfig))

vSAN 6.6.1

What is vSAN:

While vSphere ESXi pools the server resources together, the vSAN component of vSphere abstracts and pools the hypervisor storage.

What for? 

It offers better performance and high availability, lowers costs, and much more.

Configurations of vSAN :

There are two ways of balancing the caching and capacity requirements in a vSAN cluster:

  1. Hybrid disk group: at least 1 SSD for caching and 1 or more magnetic hard disks for capacity.
  2. All-flash disk group: 1 SSD for caching and 1 or more flash (SSD) devices for capacity.
  • 3 ESX hosts are sufficient to build a vSAN cluster that can tolerate one ESX failure.
  • Local storage/disks are not mandatory for every node; an ESX host without local storage can still join the cluster.

Using the Web Client, a vSAN cluster can be created by enabling vSAN on it, after which you can start configuring the disks.


Network requirements :

Hybrid disk group: 1 Gb network

All-flash disk group configuration: 10 Gb network

Every vSAN ESX host must have a VMkernel adapter dedicated and configured to carry vSAN traffic, which flows mostly between the vSAN hosts.

vSAN works with both vSS and vDS switches.

Ex: when an ESX host runs a VM whose disk resides on another ESX host, the IOPS travel through the vSAN VMkernel port group.
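As a sketch, a dedicated vSAN VMkernel adapter can be created with PowerCLI; the host name, switch, port group and IP below are hypothetical and should be adapted to your environment:

```powershell
# Hypothetical names: adjust host, switch, port group and IP to your environment.
$vmhost = Get-VMHost -Name "esx01.lab.local"
$vsw    = Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch0" -Standard
New-VirtualPortGroup -VirtualSwitch $vsw -Name "vSAN-PG"

# Create a VMkernel adapter tagged for vSAN traffic.
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsw -PortGroup "vSAN-PG" `
    -IP 192.168.50.11 -SubnetMask 255.255.255.0 -VsanTrafficEnabled $true
```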

Deduplication and compression features :

While data is being processed it first lands on the cache tier; as data blocks age or become infrequently used, they are slowly destaged to the capacity tier. This keeps the cache tier free enough to serve new IOPS.


Claim disks:

The cache and capacity disks can be selected from the total available disks.


After completing the vSAN cluster configuration, you can see under the tasks bar all the disks being configured into the vSAN cluster.


vSAN datastore :

A vSAN datastore appears immediately after creation of the vSAN cluster. You can find its capacity under the 'Configure' tab; it is the aggregate of each ESXi host's capacity in the cluster after excluding the vSAN overhead and deduplication metadata.

vSAN capacity = aggregate ESX capacity - (vSAN overhead [~1% of physical capacity] + deduplication metadata [variable])
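To make the formula concrete, here is a small worked example with purely illustrative numbers:

```powershell
# Illustrative numbers only: 4 hosts, each contributing 10 TB of capacity disks.
$numHosts    = 4
$perHostTB   = 10
$rawTB       = $numHosts * $perHostTB        # 40 TB aggregate ESX capacity
$overheadTB  = $rawTB * 0.01                 # ~1% vSAN overhead = 0.4 TB
$dedupMetaTB = 0.6                           # deduplication metadata (variable)
$usableTB    = $rawTB - ($overheadTB + $dedupMetaTB)
"Usable vSAN datastore: $usableTB TB"        # 39 TB in this example
```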

Storage Provider :

The Storage Management Service (SMS) in vCenter automatically creates a storage provider on each host to ensure communication between the storage layer and vCenter.

Though this is automatic, ensure that one host has its storage provider enabled and in active status, while the storage providers on the other hosts are in standby mode.

In case the active storage provider fails, the standby one takes over.

Add more nodes to the vSAN cluster: just put the host in maintenance mode and drag and drop it into the vSAN cluster.

vSAN configuration assist: this is the place to check vSAN health status. Click 'Retest' to refresh the health results, then start fixing the warnings and errors.

You can configure HA/DRS as well from the Configuration assistant.

vSAN storage:

vSAN provides RAID 1, RAID 5 and RAID 6 configurations to set VM protection (fault tolerance) levels. RAID 5 requires a minimum of 4 ESX nodes, while RAID 6 requires 6 nodes.

vSAN lets you choose protection levels per VM based on priority. For a given protection level, RAID 5 and RAID 6 require less additional capacity than RAID 1.

Ex: VM without a protection level = uses the normal VMDK size

VM with protection level 1 on RAID 5 = 1.33 × actual VMDK size; protection level 2 on RAID 6 = 1.5 × actual VMDK size

VM with protection level 1 on RAID 1 = 2 × actual VMDK size

How do you decide RAID 1 or RAID 5/6 on VM’s? 

It’s based on 2 factors. Performance and capacity.

If you need performance, RAID 1 is ideal; if capacity matters more to you than performance, RAID 5 or RAID 6 is the better option.
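A quick comparison of the raw capacity each configuration consumes for the same VMDK, using the multipliers above:

```powershell
# Raw capacity consumed for a 100 GB VMDK under each layout (illustrative).
$vmdkGB = 100
"RAID 1 (FTT=1): $($vmdkGB * 2) GB"      # mirroring: 2x
"RAID 5 (FTT=1): $($vmdkGB * 1.33) GB"   # 3+1 erasure coding: ~1.33x
"RAID 6 (FTT=2): $($vmdkGB * 1.5) GB"    # 4+2 erasure coding: 1.5x
```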

Storage Policies :

Number of disk stripes per object: the number of capacity disks each replica of an object is striped across; more stripes can improve performance.

Flash read cache reservation: flash capacity reserved as read cache for the VM, specified as a percentage of the logical size of the VMDK.

Primary level of failures to tolerate: to tolerate n failures, vSAN keeps n+1 copies of the VM object and needs 2n+1 ESX hosts contributing storage to vSAN.

Force provisioning: provisions the object even if the storage policy cannot currently be satisfied.

Object space reservation: % of logical space needed for VM

Object checksum: verifies that each copy matches the original object (VM) and repairs it if not.

Failure tolerance method: Either performance or capacity.

IOPS limit for objects: Controls IOPS on disk

Storage policies can be created based on the above factors and selected while provisioning a VM.
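As a sketch, such a policy can also be created with the PowerCLI SPBM cmdlets; the capability values, policy name and VM name below are assumptions to adapt to your environment:

```powershell
# Sketch: build a vSAN storage policy with FTT=1 and a stripe width of 2.
$ftt    = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 1
$stripe = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.stripeWidth") -Value 2
$rules  = New-SpbmRuleSet -AllOfRules $ftt, $stripe
New-SpbmStoragePolicy -Name "Gold-FTT1" -AnyOfRuleSets $rules

# Apply it to an existing VM (hypothetical VM name):
# Get-VM "app01" | Set-SpbmEntityConfiguration -StoragePolicy (Get-SpbmStoragePolicy "Gold-FTT1")
```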

storage policy



vSAN VM Swap Object configurations:

By default the protection level for the swap object is 1. Protection (redundancy) matters when an ESX server fails and its data is needed elsewhere; however, when HA restarts the VM on a different ESX host, the swap object is recreated automatically as the VM powers on. Hence redundancy is not really required for the swap object.


Still, there are 2 ways of provisioning swap.

The thick method reserves the full swap space up front, while the thin method consumes far less.


iSCSI: as you all know, this is nothing but a way of connecting storage to ESX servers over Ethernet cables.

Storage types:

Active-Active: LUNs are accessible on all ports and paths unless a path fails

Active-Passive: out of 2 or more storage processors, only one is active while the others are in standby

Asymmetric storage system (ALUA): an intelligent method in which the host uses some paths as primary (optimized) while the others become secondary.

Virtual port storage system : Storage will be accessible through one single Virtual Port.

Now it's time to set up iSCSI on vSAN: enable it, configure a target, and then create the iSCSI initiator groups.

vSAN Encryption:

vSAN also offers a data encryption feature, which runs after processes like deduplication and compression to ensure the data is secure and safe. This is called data-at-rest encryption: because encryption is applied on both the cache and capacity disks only after deduplication and compression, the data being encrypted is at rest.

This configuration requires an external Key Management Server (KMS), a vCenter server, and the ESX hosts.

The vCenter server is responsible for requesting a key from the KMS while configuring encryption on the vSAN cluster. The KMS then generates and stores the keys, and vCenter picks them up and distributes them to the ESX hosts.

Note that vCenter does not store any keys itself; it only maintains the key IDs assigned to the hosts.

Since the KMS cluster provides the encryption keys for the vSAN datastore, we must first set up the KMS cluster to support encryption, and then establish a trust connection between the KMS server and the vCenter server.

When it comes to enabling vSAN data encryption, it happens almost at a glance; it is a seamless feature, similar to other vSphere features like HA and DRS.

The vSAN cluster encrypts the data, including the virtual machine files and VMDKs, as soon as encryption is enabled. Only administrators with encryption privileges can perform encryption and decryption tasks.

The overall encryption process follows this sequence:

  • vCenter requests a Key Encryption Key (KEK) from the KMS server and, upon reception, stores only the ID of the key, not the key itself.
  • The key is assigned to the ESX hosts, and disk-level encryption happens on ESXi using a random Data Encryption Key (DEK).
  • Each host in the vSAN cluster uses the KEK obtained via vCenter to protect its DEKs and stores them on each disk. The host does not store the KEK; it is vCenter that tracks it.
  • When a host reboots, it does not mount the datastore until it receives the KEK, which can take a couple of minutes or even more at times.
  • Core dumps are also encrypted using the host key; to decrypt them you need the password.
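As a hedged sketch, encryption can be switched on from PowerCLI's vSAN cmdlets; the cluster and KMS names are hypothetical, and the parameters should be verified against your PowerCLI version:

```powershell
# Sketch only; assumes a KMS cluster named "kms-cluster-01" is already trusted by vCenter.
$kms = Get-KmsCluster -Name "kms-cluster-01"
Get-Cluster -Name "vSAN-Cluster" |
    Set-VsanClusterConfiguration -EncryptionEnabled $true -KmsCluster $kms
```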

Important points to note regarding Encryption:

  • You cannot run the KMS server on the same vSAN cluster it protects
  • Encryption is a CPU-intensive job
  • The vSAN witness host does not participate in encryption
  • Core dumps are encrypted as well because they contain sensitive information; a core dump can contain the key for the ESX host and the data on it
  • Always use a password while generating a vm-support bundle

Setting up Domain of trust 

The first step during encryption is setting up the domain of trust among the KMS server, vCenter, and the vSAN nodes, using the Public Key Infrastructure (PKI) method.

Communication begins once the trust is established. Keys are then exchanged between the vSAN hosts and the KMS server: a vSAN host requests the key from the KMS by presenting the key ID it got from vCenter, and the KMS returns the key to the host.

The vSAN health check lets us verify that communication between the hosts and the KMS server is functioning well.

2-node stretched cluster configuration:

This is mostly useful in remote office branches to save costs.


Key concepts here include the preferred domain/preferred site and read locality.


You can follow below links for more information.




VMware support Doc

One more blog

Thanks for visiting……..

VCSA 6.x One stop for everything

VCSA filesystem

VMDK1  12GB   / (10GB), /boot (132MB)  Boot directory where the kernel images and boot loader configurations go
VMDK2  1.3GB  /tmp                 Temporary files generated or used by vCenter Server services
VMDK3  25GB   SWAP                 Swap space used when the system is out of memory
VMDK4  25GB   /storage/core        Core dumps from the VPXD process of vCenter Server
VMDK5  10GB   /storage/log         All vCenter Server logs for the environment
VMDK6  10GB   /storage/db          VMware Postgres database storage location
VMDK7  5GB    /storage/dblog       VMware Postgres database logging location
VMDK8  10GB   /storage/seat        Stats, Events, Alarms and Tasks (SEAT) data for VMware Postgres
VMDK9  1GB    /storage/netdump     VMware Netdump collector repository that stores ESXi dumps
VMDK10 10GB   /storage/autodeploy  VMware Auto Deploy repository storing the thin packages used for stateless booting of ESXi hosts
VMDK11 5GB    /storage/invsvc      VMware Inventory Service directory (xDB, bootstrap configuration file, tomcat configuration files)

This link has more details

  1. New VCSA 6.5

vSphere 6.7 highlights

  1. Simple and efficient Management Scale

    * VCSA 6.7 is 2X faster than 6.5 with 3X lower memory usage; new APIs included

  2. vSphere Quick boot
  3. Built-in security for the hypervisor and VMs through Virtual TPM 2.0; vMotions are encrypted (cross-vCenter)
  4. Enhanced support for NVIDIA vGPU and PMEM
  5. VM migrations within and across clouds are seamless now

Useful links :





vSphere AutoDeploy

What’s Auto Deploy?

It’s a group of software components which can be used to automatically deploy and provision ESXI hosts.

2 types of Auto Deploy Modes

  1. Stateful install
  2. Stateless cached install

With a stateful install, ESXi follows the Auto Deploy process for the initial deployment and configuration, and saves the image onto the local disk. From the second reboot onward, ESX boots from the local disk every time.

  • One benefit is it saves provisioning time of hosts

Stateless caching also follows Auto Deploy during the first boot of the ESX host and caches the image to the local disk. All successive reboots go through the Auto Deploy server; if the Auto Deploy server is unavailable, ESX boots from the cached local image instead.

That's the advantage.

  • There is one more method, called stateless install, where the ESX servers have no local disk and every reboot goes through Auto Deploy. The host is completely dependent on it and fails to boot if Auto Deploy has issues serving the hosts. No logs are saved on the ESX host, as there is no local disk; here you may have to configure remote logging.



Components required:

*Auto Deploy Server (inbuilt in 6.x)

*Windows PowerShell 2.0

*vSphere PowerCLI

*vCenter Server

*TFTP server

*DHCP server

6.5 provides Auto Deploy as a GUI, and PowerCLI is no longer a requirement, though you can still use it.

Using these tools we can create different rules to deploy ESXi hosts with a variety of images and place them in their respective clusters and vCenters.

License Requirements :

  • vSphere Enterprise Plus

Architecture in Brief: 

Assuming everything is configured, what happens when an ESX host boots in an Auto Deploy setup?

*It gets a network boot, reaches the DHCP server and gets its IP.


*DHCP also provides TFTP server details and Boot file.

*ESX then fetches the boot file (an iPXE image) and the tramp file, which contains the IP details of the Auto Deploy server.

Up to here Auto Deploy did nothing; its role actually starts now.

*The host reaches Auto Deploy, and now the Auto Deploy rules and patterns decide the image and settings needed for this host.

Deployment rules are used to link image profiles and host profiles to your physical hosts. When you create a deployment rule, the VIBs identified in the image profile are uploaded to the Auto Deploy server so that they can be used by the hosts. The rules tie everything together. Once they are in place you can begin to use Auto Deploy to provision your ESXi hosts.
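A minimal sketch of such a deployment rule in PowerCLI; the depot path, image profile name, cluster name and IP range are hypothetical:

```powershell
# Hypothetical depot/profile/cluster names; adjust to your environment.
Add-EsxSoftwareDepot "C:\depot\ESXi-6.5-offline-bundle.zip"
$profile = Get-EsxImageProfile -Name "ESXi-6.5.0-standard"

# Match hosts by an attribute (here an IP range) and tie image + cluster together.
New-DeployRule -Name "ProdHosts" -Item $profile, (Get-Cluster "Prod") `
    -Pattern "ipv4=192.168.10.50-192.168.10.100"

# Activate the rule so it is evaluated when hosts boot.
Add-DeployRule -DeployRule "ProdHosts"
```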


As you see above, Auto Deploy identifies the right ESX host by any of those attributes and applies the matching rule, which includes the ESX image, configuration, destination cluster, vCenter, etc.

What’s Host Profile job in Auto Deploy :

Once the first ESX host has come up through the Auto Deploy server, received its configuration details, and been added to vCenter, we can create a host profile from it and call it the reference host.

Now use this host profile, edit it if changes are needed, and apply it in the Auto Deploy rules so that the other hosts get the same configuration as the first one, except the network info.

Network details cannot be applied from host profiles as they are unique to each host. Hence we use a so-called answer file, which exists per host and stores the IP details of each ESX host. The answer file stays with the host profile and is applied to the host during the second boot.

Therefore ESX has 2 reboots in auto deploy process.

That’s it in brief…

For your information,  Getting Started with the New Image Builder GUI in vSphere 6.5

Indeed you can have multiple auto deploy servers too

Watch video on installing vSphere Auto Deploy

On 20th June 2018 :

There are some basic things to look at when you see a deploy rule failing on the Auto Deploy server.

Troubleshooting Auto Deploy Rules





🙂 It’s actually easy, read it again.




Let's start with vSphere 6.7: what's new!

The announcement and release of vSphere 6.7, vCenter 6.7 and vSAN 6.7 came out last week as you've likely already seen or read about.  This was a little surprising since VMworld 2018 is just around the corner and they usually reserve big releases like this until closer to the big show.  Does that mean we're getting a full point version announced soon?  Only time will tell.

Speculation aside, this vSphere release is definitely worth checking out.  There’s a ton of enhancements and new features available that will certainly help any moves towards a hybrid cloud infrastructure.  It’s not without limitations of course which I’ll detail below.  Not the least of which being processor support and compatibility with other vSphere products.

In this article I’ll be going over a few of the most intriguing features and enhancements as well as those limitations.  Let’s break it down!

vSphere 6.7 Configuration Maximums

As usual with most big releases the configuration maximums tend to go up.  This release only shows a few increases but in some key areas that I'll detail below.  VMware also has a great site that you can pull all of the configuration maximums from vSphere 6.0, 6.5, 6.5 U1 and 6.7 at https://configmax.vmware.com.  Here are the most interesting changes I noticed:

You can have 16TB of RAM on a host now!  That's crazy to even think about for most people.  Surprisingly or maybe not surprisingly to some there are hardware options where you can have even more RAM than that on a single physical server (depending on how you configure it of course).  The other changes center around storage and allowing more volumes and more paths to a host.  In the VVOLs arena they now allow up to 512 Protocol Endpoints per host.  I haven't seen a ton of VVOLs adoption yet myself so I don't know how big of a difference that will make but I know VMware is sure pushing VVOLs pretty hard.

vCenter 6.7 Updates

vCenter 6.7 has a number of changes and updates in this release.  A few notes about vCenter and the VCSA/PSC appliance before we get into the deep end:

  • This is the last version that will contain a Windows-based version of vCenter.
  • There is no upgrade path from vSphere/vCenter 5.5.  You will have to upgrade to 6.0 first.
  • /psc is now part of the vSphere Client under the Administration section divided between the Certificate Management and Configuration tabs.

With that out of the way let’s move onto the VCSA updates!

vCenter Server Appliance (VCSA)

vCenter with Embedded Platform Services Controller can now take advantage of Enhanced Linked Mode.  It was announced last year at VMworld 2017 and they finally got it baked into the VCSA.  You no longer need to have External Platform Services Controllers to enable Enhanced Linked Mode.  You also don’t need load balancers for high availability either.  This change supports all the vSphere scale maximums as well.  It also reduces the number of infrastructure components to manage and it’s easier to backup with the addition of File-Based Backup options.

There were significant improvements made to the vSphere Appliance Management Interface (VAMI) and as noted above consolidation of the /PSC functionality.  There are also performance improvements to vCenter as follows (All metrics compared at cluster scale limits, versus vSphere 6.5):

  • 2X faster performance in vCenter operations per second
  • 3X reduction in memory usage
  • 3X faster DRS-related operations (e.g. power-on virtual machine)

vSphere Client

Another huge step forward for the HTML5-based vSphere Client which is reported to be 95% feature complete compared to the vSphere Web Client (Flash Client).  Below are the additional workflows added to the vSphere Client although there are a few specific options within them that aren’t available.  (The announcement says NSX is one of the workflows but it’s not on the compatibility list and it’s noted in the release notes as not compatible.)

  • vSphere Update Manager
  • Content Library
  • vSAN
  • Storage Policies
  • Host Profiles
  • vDS Topology Diagram
  • Licensing

Management, Migration and Provisioning

vCenter Server Hybrid Linked Mode is now available in vSphere 6.7 which simplifies manageability and unifies visibility across an on-premises vSphere infrastructure and a vSphere-based public cloud infrastructure running different versions of vSphere.  A good example of that could be VMware Cloud on AWS.  This new set of features will allow for Cross-Cloud Cold and Hot Migration.  Let that soak in for a minute!

Another feature related to the last ones mentioned is Cross-vCenter Mixed Version Provisioning operations which includes vMotion, Full Clone and cold migrate.  To clarify you can now vMotion or create clones across vCenters of different versions.  I can see so many use cases for this including new infrastructure deployments where I don’t want or need to upgrade the old infrastructure but need to move the workloads to the new environment.

vRealize Operations Manager

The last thing I’ll mention here on the vCenter enhancements is the addition of vRealize Operations Manager Dashboards right inside of vCenter.  This feature requires vRealize Operations Manager 6.7 of course.  They’re slowly unifying management components with reporting and analytics and it’s a very welcome thing.  Being able to see vROPs information without having to open both interfaces is definitely a time saver.

vSphere 6.7 Updates

Along with all the updates to vCenter there are also a number of feature changes and updates to ESXi and vSphere generally that we’ll talk about here.  Some of these could be lumped into the vCenter section but I think are more generally related to vSphere and storage.  Let’s start with the ESXi 6.7 updates!

ESXi 6.7 Updates

The Single Reboot feature eliminates one of the two hardware reboots required during major version upgrades.  Previously on major version releases the hardware would reboot into the installer, install the update and then reboot again into the upgraded ESXi version.  Now with Single Reboot, the update is applied and then the hardware reboots directly to the upgraded version.  That should save quite a bit of time for administrators.

The next cool time saving feature is ESXi Quick Boot.  This feature will skip the hardware reboot (the BIOS or UEFI firmware reboot) that normally has to reinitialize all the hardware on a host before booting into ESXi.  That on most hosts takes a lot of time.  ESXi Quick Boot completely skips that process and just restarts ESXi in place saving all the hardware reboot time.  Here’s a quick link on how to enable the feature.  The only problem with it currently is it’s only supported on a small list of hardware detailed below or you can check if your hardware supports it through a script.

Supported platforms for ESXi Quick Boot:

  • Dell PowerEdge R630
  • Dell PowerEdge R640
  • Dell PowerEdge R730
  • Dell PowerEdge R740
  • HPE ProLiant DL360 Gen9
  • HPE ProLiant DL360 Gen10
  • HPE ProLiant DL380 Gen9
  • HPE ProLiant DL380 Gen10

vSphere 6.7

One of the swankier new features is vSphere Persistent Memory (PMEM) support.  PMEM is simply DRAM with Non-Volatile Memory on board that can store data like an SSD.  Imagine a DIMM with half memory and half Non-Volatile Memory that can be presented to the host as a datastore and you have Persistent Memory.  HPE and Dell both have supported options for this out now.  This puts your storage at the DRAM layer and as you can imagine greatly increases the speed at which you can access your data.  You can even attach Virtual NVDIMMs to compatible Guest OS’s taking a piece of the PMEM datastore and attaching it directly to your guest.  Here’s a table Dell put together describing the speed difference of standard storage types versus Persistent Memory in nanoseconds.

This release also sees new protocol support for Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE v2) and adds Paravirtualized RDMA (PVRDMA) support.  It introduces a new software Fibre Channel over Ethernet (FCoE) adapter and support for the iSCSI Extension for RDMA (iSER).  There are many new options to integrate with high performance storage platforms as well as the ability to bypass normal methods of connectivity and present even more types of storage directly to guest operating systems.

Lastly on the ESXi side you can now add multiple Syslog targets.  I’ve seen customers use vRealize Log Insight for their virtual environment and another product may be used by another team to correlate Syslogs.  Now they don’t have to choose where they go since you can add up to three Syslog targets in the VAMI.

General vSphere 6.7 Updates

One feature that is sure to be a game changer is Per-VM EVC (Enhanced vMotion Compatibility) mode.  Per-VM EVC allows you to set the processor generation at each individual VM as needed instead of on the entire cluster.  This allows you to migrate VM’s between clusters of hosts running different generation CPUs while the VMs are powered on.  No need to power them off or to set the EVC level on the cluster.  Just set it on the VM itself and vMotion.  This feature will make it significantly easier to migrate between clusters with different hardware seamlessly.

At VMworld 2017 last year I attended a really interesting session that detailed many of the features I’ve discussed today.  This next one was mentioned there as well.  vGPU is getting some love here and you can now Suspend and Resume vGPU-based VM’s to allow for migration between hosts.  They also indicated full vMotion compatibility was being worked on but that’s not here yet.  Suspend and Resume is a step in the right direction and will make it a little easier to maintain vGPU-based clusters.

VMware has also taken the initiative to help protect data in motion by enabling encrypted vMotion across different vCenters as well as different vSphere versions.  In the same security vein, vSphere 6.7 also adds support for Trusted Platform Module 2.0 (TPM 2.0) which works with Secure Boot to validate that you’re only running secure, signed code and disallows you from running unsigned code protecting your environment from certain types of attacks.  So that protects the physical hardware but “What about my Guest OS’s?” you may be asking.  vSphere 6.7 adds a feature called Virtual TPM 2.0 which presents a virtual TPM device to the guest and cryptographically protects your VM by storing the TPM data in the VM’s NVRAM file and securing that file with VM Encryption.  This allows that data to travel with the VM during migrations and ensures that each VM is protected and that protection is encapsulated with the VM rather than tied to a host or physical hardware.  Of note, to do VM Encryption you need a 3rd party key management service infrastructure.

vSphere 6.7 Storage Updates

In vSphere 6.5, VMware reintroduced Automatic UNMAP.  This feature basically works with storage that supports the vSphere Storage APIs for Array Integration (VAAI) primitives to allow certain storage tasks to be offloaded to the storage array hardware.  UNMAP is one of those tasks.  By running UNMAP you can reclaim deleted VMFS blocks on thin-provisioned LUNs.  vSphere 6.5 enabled this feature to run automatically.  vSphere 6.7 allows you to configure the UNMAP rate to better control the frequency/throughput utilized by the feature.  Previously it was at a static 25Mbps rate, but now is configurable between 100Mbps and 2000Mbps.  The UNMAP feature now also extends to SESparse disks on VMFS-6.  This only works when the VM is powered on and only affects the highest level snapshot.

vSphere 6.7

Finally, vSphere 6.7 also adds support for the 4Kn HDD drives as local storage but the SSD and NVMe drives are currently not supported for local storage.  VMware is providing a software read-modify-write layer to emulate the 512B sector drives.

vSphere 6.7 Final Thoughts and Support Issues

Ok so there’s a whole lot of good coming in this release.  So many cool new features.  Let’s not forget though, sometimes good things come with consequences.  For most companies on a 3-5 year refresh cycle you’re probably not going to be affected.  For those of us running homelabs on a little bit older gear you’re possibly going to run into issues.  First thing out of the gate is that CPU support for vSphere 6.7 has been truncated significantly.

vSphere 6.7 no longer supports the following processors:

  • AMD Opteron 13xx Series
  • AMD Opteron 23xx Series
  • AMD Opteron 24xx Series
  • AMD Opteron 41xx Series
  • AMD Opteron 61xx Series
  • AMD Opteron 83xx Series
  • AMD Opteron 84xx Series
  • Intel Core i7-620LE Processor
  • Intel i3/i5 Clarkdale Series
  • Intel Xeon 31xx Series
  • Intel Xeon 33xx Series
  • Intel Xeon 34xx Clarkdale Series
  • Intel Xeon 34xx Lynnfield Series
  • Intel Xeon 35xx Series
  • Intel Xeon 36xx Series
  • Intel Xeon 52xx Series
  • Intel Xeon 54xx Series
  • Intel Xeon 55xx Series
  • Intel Xeon 56xx Series
  • Intel Xeon 65xx Series
  • Intel Xeon 74xx Series
  • Intel Xeon 75xx Series

Of course I’m running Intel Xeon 5540’s on my Dell R710 hosts in my lab.  I’ve seen some posts of people being able to work around the issue with certain processor types which may hold some light at the end of the tunnel for me.  Interestingly the release notes indicate you will get a purple screen of death (PSOD) on unsupported CPU’s but I got a black screen with the following information instead.

Undeterred, I installed a nested ESXi 6.7 onto my VMware Workstation instance and then deployed the vCenter 6.7 VCSA.  Either way, if your CPUs are on this list you'll need to consider upgrading before you upgrade to vSphere 6.7.

The vSphere 6.7 announcement was also slightly misleading in that it states several times that it adds functionality and workflows for NSX but if you read until the end or you check out the release notes they both indicate that there is currently no supported or compatible version of NSX that works with vSphere 6.7.  I get that you can add features for something that will be supported later but it’s troublesome when companies talk about them before they are a reality.  Just a pet peeve I guess.  Either way it looks like a new version of NSX that supports vSphere 6.7 is likely coming soon.

You’ll notice I didn’t talk about vSAN 6.7 here.  There really didn’t seem to be any major changes except a few under the hood and Windows Server Failover Cluster (WSFC) support.

Above I mentioned that I installed ESXi 6.7 onto VMware Workstation and vCenter 6.7 onto that nested host.  There’s no difference in the ESXi 6.7 install at all, but the vCenter deployment has changed a bit, with a new interface.  It’s much cleaner, with a really streamlined look and feel.


That’s all for now!  vSphere 6.7 is well worth checking out as far as I can tell.  I’m not sure why they released it as an incremental release instead of a full version bump, and I’m also wondering why they announced it just a few months before VMworld 2018.  Either way, as usual, VMware is on the right track: packed full of features and updates to continue the push towards the hybrid cloud datacenter and the software-defined future.  Thanks for reading!

How is HCI doing?

Is your current infrastructure holding you back?

When you look at your network diagrams, do you get a headache?


Yes, we definitely have challenges with our current model. Don’t you?

  • Hypervisor administration
  • Network administration with full redundancy
  • SAN (e.g., targets, LUNs, multipathing) or NAS administration
  • Hardware compatibility lists (HCL) management
  • Patching and updating multiple systems and related firmware
  • Dealing with multiple support groups when unexpected trouble occurs, which usually results in finger-pointing and inefficiency
  • Maintaining multiple maintenance contracts, which is costly and time-consuming


Plan on Modernizing your Infra Now!

Hyper-Converged Infrastructure virtualizes not just the compute layer (servers) but also the storage and networking layers of your datacenter. Simply put, it packages compute, storage, and networking together in one box; imagine the speed you can achieve, the cost you could be saving, the agility, the ease of scaling your datacenter, and a lot more, right!


Multiple vendors offer HCI boxes, including Nutanix, Cisco, VMware, NetApp, HP, and EMC. It’s not new anymore; it’s years old, having been introduced in 2012.

Below I’ve shown the attributes of the HCI model and the areas it enhances compared to your current setup.



What are the elements that comprise HCI? Here we go!



I get it, you’re looking for a comparison of your current datacenter and HCI, right? Sure!



So, how does it look? Like it?

As per EMC, below are the rough ratios for datacenter footprint and cost calculated on HCI platforms.



HCI Vendors/Providers:

  • DellEMC
  • GridStore
  • HP: SimpliVity
  • Atlantis: Atlantis USX All-Flash Hyper-Converged Storage

How is DellEMC’s HCI doing ?

  • Choice of N/W

VXRAIL – vSphere platform

XC Series – choice of hypervisor

  • Fully Integrated N/W

VXRACK FLEX – vSphere/bare metal/multi-hypervisor

EMC says, 




Hello HP! What do you think about it?

I’ve got the SimpliVity 380, which offers:

– Data deduplication, protection, and backup

– Disaster recovery and long-term backup retention

– VM centricity and mobility: enables all actions, policies, and management at the VM level.


Finally, what’s in it for you?

  • A fully defined SDDC
  • TCO reduction
  • Simplicity and scalability
  • HA at all levels


That’s all….. Looks like you wasted your time 🙂

What’s Spectre & Meltdown?

For VMware, the mitigations fall into 3 different categories:

1. Hypervisor-Specific Mitigation: mitigates leakage from the hypervisor or from guest VMs into a malicious guest VM.
2. Hypervisor-Assisted Guest Mitigation: the guest OS can mitigate leakage between processes within the VM. (Patch released.)
3. Operating System-Specific Mitigations: mitigations for operating systems (OSes) are provided by your OS vendors.
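Note that for category 2, patching alone is not enough: the new speculative-execution control features (IBRS/IBPB/STIBP) only become visible to a guest OS once the VM runs a sufficiently recent virtual hardware version and has been fully power-cycled (a cold boot, not just a guest OS restart). A minimal PowerCLI sketch, assuming an existing Connect-VIServer session and taking hardware version 9 as the threshold (check VMware's guidance for your environment), to flag VMs that may still be on an older hardware version:

```powershell
# Sketch, assuming an existing Connect-VIServer session.
# After the hypervisor-assisted guest mitigation patches, a guest OS only
# sees the new CPU features (IBRS/IBPB/STIBP) if the VM runs a recent
# virtual hardware version and has been fully power-cycled.
# The Version property is an enum like "v9" in classic PowerCLI; newer
# releases also expose a HardwareVersion property.
Get-VM |
    Where-Object { [int]($_.Version.ToString().TrimStart('v')) -lt 9 } |
    Select-Object Name, Version, PowerState
```

Any VM this returns should be upgraded to a newer virtual hardware version and cold-booted after the host and vCenter patches are in place.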


vCenter Server 6.5 Update 1g
This updated version of vCenter Server provides part of the hypervisor-assisted guest mitigation of CVE-2017-5715 for guest operating systems.

VMware ESXi 6.5
This ESXi patch provides part of the hypervisor-assisted guest mitigation of CVE-2017-5715 for guest operating systems.

Hypervisor-Assisted Guest Mitigation for Branch Target injection


Performance costs of the Meltdown/Spectre mitigations for VMware products.


How do I know whether the patches were applied correctly?
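One quick check, sketched below in PowerCLI, is to compare each host's build number against the patched build listed in VMware's security advisory for your exact ESXi version. This assumes an existing Connect-VIServer session, and the build number below is a placeholder, not the real one; look up the actual value in the advisory or release notes.

```powershell
# Sketch, assuming an existing Connect-VIServer session.
# $patchedBuild is a placeholder -- take the real build number for your
# ESXi version from the VMware security advisory / release notes.
$patchedBuild = 1234567
Get-VMHost |
    Select-Object Name, Version, Build,
        @{Name='Patched'; Expression={ [int]$_.Build -ge $patchedBuild }}
```

Any host showing Patched = False still needs the ESXi patch applied before its guests can use the hypervisor-assisted mitigation.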

Finally, this doc has everything in brief.

Go patch your infra, right now!