
# vSphere to KubeVirt Migration via Forklift

## Overview

Forklift migrates VMs from VMware vSphere to KubeVirt by converting disk images with virt-v2v and creating corresponding KubeVirt VirtualMachine resources.


## Environment Details

| Component | Value |
| --- | --- |
| Forklift namespace | `konveyor-forklift` |
| vCenter | `txvmvcsa01.mouser.lan` |
| Provider secret | `tx-esxcl04-ul-rvsgw` (in `konveyor-forklift` ns) |
| Target storage class | `portworx-block` |
| Target network | Multus `vlan-13` |

## Forklift Architecture

1. **Provider** - Connects to vCenter and inventories VMs, networks, and datastores.
2. **Plan** - Defines which VMs to migrate and how to map storage and networks.
3. **Migration** - Executes the plan, creating virt-v2v pods to convert disks and importer pods to write data to PVCs.
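These three objects are Forklift custom resources. A minimal, illustrative Plan and Migration pair is sketched below; the resource names and provider references (`vsphere-provider`, `host`, the map names) are assumptions for this environment, not values taken from the cluster:

```yaml
# Hypothetical Plan referencing a source vSphere provider and the
# local destination provider, plus storage/network maps by name.
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: example-plan
  namespace: konveyor-forklift
spec:
  provider:
    source:
      name: vsphere-provider      # assumed Provider name
      namespace: konveyor-forklift
    destination:
      name: host                  # assumed local-cluster Provider
      namespace: konveyor-forklift
  map:
    storage:
      name: example-storage-map
      namespace: konveyor-forklift
    network:
      name: example-network-map
      namespace: konveyor-forklift
  targetNamespace: virtual-machines
  vms:
    - name: example-vm
---
# A Migration simply points at the Plan and starts execution.
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: example-migration
  namespace: konveyor-forklift
spec:
  plan:
    name: example-plan
    namespace: konveyor-forklift
```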

## OS Detection Bug

**Known Issue: OS Detection Mismatch**

Forklift reads `summary.guest.guestId` (reported by VMware Tools at runtime) instead of `config.guestId` (configured by the admin in vSphere). When VMware Tools is not running or not installed, vSphere reports `otherGuest64` regardless of the actual OS.

**Impact:** The `forklift-vsphere-osmap` ConfigMap maps vSphere guest IDs to KubeVirt instance type preferences. An incorrect guest ID may result in the wrong KubeVirt preference being applied.

**Why it's usually cosmetic:** virt-v2v performs its own disk inspection as a fallback and correctly identifies the OS during conversion. The resulting VM will work correctly even if the preference metadata is wrong.

To view the current OS mappings:

```bash copy
kubectl get configmap forklift-vsphere-osmap -n konconveyor-forklift -o yaml
```

!!! note
    This bug has not yet been filed upstream on [kubev2v/forklift](https://github.com/kubev2v/forklift).
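The effect of the bug can be illustrated with a small sketch. The mapping table below is a simplified stand-in for the real `forklift-vsphere-osmap` ConfigMap, and `preference_for` is a hypothetical helper, not Forklift code:

```python
# Simplified stand-in for forklift-vsphere-osmap entries (illustrative,
# not the full ConfigMap contents).
OSMAP = {
    "rhel8_64Guest": "rhel.8",
    "windows2019srv_64Guest": "windows.2k19",
}

def preference_for(runtime_guest_id, configured_guest_id):
    """Contrast current behavior with a config.guestId fallback.

    runtime_guest_id    -> summary.guest.guestId (from VMware Tools)
    configured_guest_id -> config.guestId (set by the vSphere admin)
    """
    # Current behavior: always trust summary.guest.guestId, even when
    # Tools is down and it reads as the otherGuest64 placeholder.
    buggy = OSMAP.get(runtime_guest_id)
    # Possible fallback: use config.guestId when Tools is not reporting.
    effective = (
        configured_guest_id
        if runtime_guest_id in (None, "otherGuest64")
        else runtime_guest_id
    )
    fixed = OSMAP.get(effective)
    return buggy, fixed

# A powered-off RHEL 8 VM whose Tools are not running:
print(preference_for("otherGuest64", "rhel8_64Guest"))  # (None, 'rhel.8')
```

With the current behavior no preference is found (`None`), while falling back to the configured guest ID would resolve `rhel.8`.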

---

## SecureBoot / SMM

!!! warning "EFI SecureBoot Requires SMM"
    VMs with EFI SecureBoot enabled in vSphere will fail migration if the **SMM (System Management Mode)** feature gate is not enabled in KubeVirt.

**Fix:** Add `SMM` to `additionalFeatureGates` in the VMO pack configuration:

```yaml
additionalFeatureGates:
  - SMM
```

Without SMM, the migrated VM cannot boot because SecureBoot requires SMM emulation.
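After the change is applied, the rendered KubeVirt CR should carry the gate under `spec.configuration.developerConfiguration.featureGates`. A sketch of the expected result (CR name and namespace are assumptions; only the `featureGates` path is the standard KubeVirt field):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt          # assumed CR name
  namespace: kubevirt     # assumed namespace
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - SMM             # enables SMM emulation for SecureBoot VMs
```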


## Storage Mapping

All migrated VM disks are mapped to the portworx-block storage class:

```yaml
storageMap:
  - source:
      id: <vsphere-datastore-id>
    destination:
      storageClass: portworx-block
      accessMode: ReadWriteOnce
      volumeMode: Filesystem
```

!!! tip
    Ensure sufficient Portworx storage capacity before starting a large migration batch. Check available storage:

    ```bash copy
    PX_POD=$(kubectl get pods -l name=portworx -n portworx -o jsonpath='{.items[0].metadata.name}') && \
      kubectl exec $PX_POD -n portworx -- /opt/pwx/bin/pxctl status
    ```


## Network Mapping

Source vSphere networks are mapped to the Multus vlan-13 NetworkAttachmentDefinition:

```yaml
networkMap:
  - source:
      id: <vsphere-network-id>
    destination:
      name: vlan-13
      namespace: default
      type: multus
```

VMs will receive IPs directly on the physical VLAN 13 network via the bridge CNI.
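For reference, a NetworkAttachmentDefinition backing `vlan-13` with the bridge CNI might look like the following sketch. The bridge name `br-vlan13` and the empty `ipam` block (IPs handled by the guest on the physical network) are assumptions; adapt to the actual node configuration:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan-13
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan-13",
      "type": "bridge",
      "bridge": "br-vlan13",
      "ipam": {}
    }
```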


## Running a Migration

### Check Forklift Provider Status

```bash copy
kubectl get providers -n konveyor-forklift
```

### View Migration Plans

```bash copy
kubectl get plans -n konveyor-forklift
```

### Watch Migration Progress

```bash copy
kubectl get migrations -n konveyor-forklift -o wide
```

### View virt-v2v Conversion Logs

Find the conversion pod and tail its logs:

```bash copy
kubectl get pods -n konveyor-forklift -l migration
```

Then tail its logs:

```bash copy
kubectl logs <virt-v2v-pod> -n konveyor-forklift -f
```


## Post-Migration Checklist

1. Verify the VM is running:

    ```bash copy
    kubectl get vmi -n virtual-machines -o wide
    ```

2. Verify disk attachment:

    ```bash copy
    kubectl get pvc -n virtual-machines
    ```

3. Verify network connectivity (the VM should have an IP on VLAN 13):

    ```bash copy
    virtctl console <vm-name> -n virtual-machines
    ```

4. Install or verify that `qemu-guest-agent` is running inside the VM for proper reporting.
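The guest-agent check in step 4 can be automated by inspecting the VMI's `AgentConnected` condition, which KubeVirt sets once the agent is reachable. A sketch that parses `kubectl get vmi <vm-name> -n virtual-machines -o json` output (the embedded sample status is illustrative, not from a real cluster):

```python
import json

def agent_connected(vmi: dict) -> bool:
    """Return True if KubeVirt reports the guest agent as connected.

    KubeVirt adds an AgentConnected condition to the VMI status when
    qemu-guest-agent inside the guest is responding.
    """
    conditions = vmi.get("status", {}).get("conditions", [])
    return any(
        c.get("type") == "AgentConnected" and c.get("status") == "True"
        for c in conditions
    )

# Illustrative VMI fragment, as returned by:
#   kubectl get vmi <vm-name> -n virtual-machines -o json
sample = json.loads("""
{"status": {"conditions": [
    {"type": "Ready", "status": "True"},
    {"type": "AgentConnected", "status": "True"}
]}}
""")
print(agent_connected(sample))  # True
```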