vCenter Migration with DVS

Migrating an ESXi Host with a Distributed Switch to different vCenter

This was written to explain the complexities of, and provide guidance on, migrating ESXi hosts to a new installation of vCenter 6.7. VMware has an official KB article covering versions 4.0 through 6.5, https://kb.vmware.com/s/article/1029498. I would like to break this process down, show how it applies to 6.7, and show how we can improve it to reduce implementation time.

    1. Back up the vSphere Distributed Switch configuration. For more information, see Exporting/importing/restoring Distributed Switch configs using vSphere Web client (2034602).

In order to accomplish this step, for the migration to 6.7, the Distributed Switch version on the source must be at 6.0.0 when it is exported. Any older version of the Distributed Switch must be upgraded. If any of the ESXi Hosts must be left on 5.5, then those ESXi Hosts must be disconnected from the Distributed Switch so it may be upgraded to 6.0.0. The following steps explain how this can be accomplished.
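If PowerCLI is available, the version check, upgrade, and export can be scripted instead of done in the web client. This is a sketch, not a definitive procedure: "dvSwitch-Prod" and the backup path are placeholders, and Export-VDSwitch requires a reasonably recent PowerCLI release.

#Check the current Distributed Switch version

Get-VDSwitch -Name "dvSwitch-Prod" | Select-Object Name, Version

#Upgrade to 6.0.0 if older (any hosts still attached must support this version)

Get-VDSwitch -Name "dvSwitch-Prod" | Set-VDSwitch -Version "6.0.0"

#Export the configuration, including port groups (Step 1)

Get-VDSwitch -Name "dvSwitch-Prod" | Export-VDSwitch -Destination "C:\backup\dvSwitch-Prod.zip"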

    2. Create a Standard Switch for the virtual machine, VMkernel, and Service Console port group depending on the environment on the ESX host you are migrating.
    3. Move the ESX host to the Standard Switch you created in Step 2 and disable vSphere Distributed Switch on the ESX host you are migrating. For more information, see Disabling vNetwork Distributed Switches (1010718).

These steps require more effort than these instructions let on. Besides migrating the VMkernel ports for Management and vMotion, each virtual machine must also have its networking moved to a matching standard switch port group. There are some rather glaring caveats here. Depending on the size of the environment, limiting the number of port groups and ports created to only what each individual host requires is a must, so as not to exceed configuration maximums.
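For reference, the standard-switch detour can also be scripted. A minimal sketch, assuming a spare physical NIC (vmnic1 here) is available to move, and using placeholder names throughout:

#Create a temporary Standard Switch and port group on the host

$vss = New-VirtualSwitch -VMHost (Get-VMHost <hostname>) -Name "vSwitchTemp"

$pg = New-VirtualPortGroup -VirtualSwitch $vss -Name "Management Temp"

#Migrate a free uplink and the Management VMkernel port in one operation

$vmk = Get-VMHostNetworkAdapter -VMHost (Get-VMHost <hostname>) -Name vmk0

$nic = Get-VMHostNetworkAdapter -VMHost (Get-VMHost <hostname>) -Physical -Name vmnic1

Add-VirtualSwitchPhysicalNetworkAdapter -VirtualSwitch $vss -VMHostPhysicalNic $nic -VMHostVirtualNic $vmk -VirtualNicPortgroup $pg -Confirm:$false

Moving the uplink and the VMkernel port in a single call avoids a window where the Management interface has no uplink.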

    4. Check whether the vCenter Server cluster is configured with EVC. If the destination vCenter Server is configured with EVC, all virtual machines must be powered off while adding the host to the new vCenter Server.

If the source EVC mode matches the destination's, this is a non-issue, and the destination should be pre-configured to match. In our case we are not changing the cluster layout, so the clusters are created prior to migration of the host.
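The EVC mode of each cluster can be checked quickly with PowerCLI, run against each vCenter in turn (a blank EVCMode means EVC is disabled):

#Compare EVC modes between source and destination clusters

Get-Cluster | Select-Object Name, EVCMode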

  5. Remove the ESX host from the cluster and add it to the new vCenter Server.
  6. Create distributed switches in the new vCenter Server.
  Note: To import vSphere Distributed Switch configuration from the backup created in Step 1, see Exporting/importing/restoring Distributed Switch configs using vSphere Web Client (2034602).

The instructions have you do this after connecting the ESXi host to the destination vCenter. I strongly recommend this be done as step 2, before the host is connected, to avoid any misconfigurations or complications at this point.
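Creating the switch from the backup can also be done with PowerCLI rather than the web client; New-VDSwitch can build the switch from an exported backup file. A sketch, assuming the zip exported earlier and a datacenter named "DC01" (if in doubt about your PowerCLI version's support, the web client import described in KB 2034602 achieves the same result):

#Recreate the Distributed Switch in the destination vCenter from the backup

New-VDSwitch -BackupPath "C:\backup\dvSwitch-Prod.zip" -Location (Get-Datacenter "DC01") -KeepIdentifiers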

  7. Migrate the new host networking from Standard Switch to vSphere Distributed Switch. For more information, see Migrating Service Console and/or VMkernel port from Standard Switches to Distributed Switch (1010614) and Migrating virtual machines between vSwitch or PortGroups to vDS or dvPortgroups (1010612).

Effectively in this step, each individual Virtual Machine network adapter is migrated back to the appropriate distributed switch portgroup.


By removing the steps to migrate to a standard switch, we can reduce the level of effort required even further by taking advantage of 6.7's enhanced distributed switch management features and the always present “In-Memory VM Configuration”. I’ll explain this further.

In this demonstration we have already exported and imported the distributed switch from a 6.0 environment into a fresh 6.7 installation. If you are coming from an older environment, upgrade the Distributed Switch to version 6.0.0 before exporting, as described above.

1. Export the Distributed Switch

2. Import Distributed Switch

3. Document Virtual Machine Networking

4. Disconnect ESXi Host from Source vCenter

a. DO NOT REMOVE

b. The data persists on the source vCenter and can be queried for information should any step fail and can be restored should issues arise.

c. All running VM data and networking is persistent in memory and has no impact.

5. Connect ESXi Host on Destination vCenter

a. Attach to appropriate cluster compute resource

6. Add ESXi Host to Distributed Switch

a. Configure the Uplinks

b. Migrate VMKernel Ports

c. Migrate VM Network Adapters

i. This step can be skipped in the web client if it will be accomplished via other means (e.g., PowerCLI); because the VM configuration persists in memory, no impact is noticed.

ii. This is the most mistake-prone step. Any incorrect configuration here may result in a virtual machine outage. As there may be a large number of VM network adapters to configure, I recommend skipping this in the web client and using PowerCLI instead for less risk.

PowerCLI Methodology

#Collect VM Networking Data in a variable (STEP 3)

$VMadapters = Get-VMHost <hostname> | Get-VM | Get-NetworkAdapter | select parent,name,networkname

#Export, just in case

$VMadapters | Export-Csv <Hostname.csv> -NoTypeInformation -UseCulture

#Disconnect Source ESXi Host from Source vCenter

Get-VMHost <hostname> | Set-VMHost -State Disconnected -Confirm:$false

Disconnect-VIServer * -Confirm:$false

#In the Web Client connect the ESXi Host to the DVS (STEP 6, a. & b.) Skip VM Adapter configuration

#Connect to Destination vCenter

Connect-VIServer <Destination vCenter>

#Configure VM Adapters

$VMadapters | %{Get-VM $_.parent | Get-NetworkAdapter -Name $_.name | Set-NetworkAdapter -NetworkName $_.networkname -Confirm:$false}

After these configuration steps are complete, the ESXi host must be updated. When the host is evacuated for maintenance mode, vMotion may fail for some VMs. This is a known issue and must be handled on a case-by-case basis.

If vMotion fails after migration

To Resolve:

1. Power Down the VM

2. Remove the VM from Inventory

3. Register the VM from its .vmx file on the datastore

4. Power On
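The resolution above can be sketched in PowerCLI as well. <vmname>, the datastore name, and the .vmx path are placeholders you must fill in from the datastore browser; Remove-VM without -DeletePermanently only unregisters the VM and leaves its files on disk.

#Power down the VM and remove it from inventory (files remain on the datastore)

$vm = Get-VM <vmname>

Stop-VM -VM $vm -Confirm:$false

Remove-VM -VM $vm -Confirm:$false

#Re-register the VM from its .vmx file, then power it on

New-VM -VMFilePath "[<datastore>] <vmname>/<vmname>.vmx" -VMHost (Get-VMHost <hostname>)

Start-VM -VM (Get-VM <vmname>)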