Moving a KubeVirt VM Between Namespaces

KubeVirt does not have a native "move" operation for VMs across namespaces. The process involves cloning the VM's disk(s) to the target namespace, recreating the VM definition there, and then cleaning up the original.

Important: The VM must be stopped before starting this process. This ensures disk data is consistent and the PVC is not actively in use.


Prerequisites

  • kubectl access to the cluster
  • virtctl installed
  • kubectl-neat installed (optional but recommended — simplifies VM export)
  • The target namespace must already exist
  • If using Multus networking, the appropriate NetworkAttachmentDefinition must exist in the target namespace (or in default with a cross-namespace reference)
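Before starting, a small preflight sketch can confirm the CLI tools are on PATH (the tool list here is an assumption; extend it for your environment — kubectl plugins like neat are verified separately with kubectl neat --help):

```shell
# Preflight sketch: report any required CLI tools missing from PATH.
check_tools() {
  local missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -n "$missing" ]; then
    echo "MISSING:$missing"
    return 1
  fi
  echo "OK"
}

# Tools used directly in this guide:
check_tools kubectl virtctl || true
```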

Installing virtctl

virtctl is the KubeVirt CLI for managing virtual machines.

Option A: Download from GitHub Releases

# Get the latest version
KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | grep tag_name | cut -d '"' -f 4)

# Linux (amd64) — writing to /usr/local/bin requires root
sudo curl -L -o /usr/local/bin/virtctl \
  https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
sudo chmod +x /usr/local/bin/virtctl

# macOS (Apple Silicon)
sudo curl -L -o /usr/local/bin/virtctl \
  https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-darwin-arm64
sudo chmod +x /usr/local/bin/virtctl

Option B: Install via krew (see the krew section below)

kubectl krew install virt
# Then use as: kubectl virt <command>

Installing krew (kubectl Plugin Manager)

krew is required to install kubectl-neat and other kubectl plugins.

Linux / WSL:

(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/aarch64/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)

macOS (Homebrew):

brew install krew

After installing, add krew to your PATH. Add this line to your ~/.bashrc, ~/.zshrc, or equivalent:

export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"

Restart your shell or run source ~/.bashrc / source ~/.zshrc, then verify:

kubectl krew version

Installing kubectl-neat

kubectl-neat removes cluster-managed fields (UIDs, timestamps, managed fields, status) from kubectl output, making it easy to export clean, reusable YAML.

kubectl krew install neat

Verify:

kubectl neat --help

Variables

Set these once and use them throughout. Replace the example values with your own:

# Source
SOURCE_NS="virtual-machines"
VM_NAME="my-vm"

# Target
TARGET_NS="production-vms"

Step 1: Stop the VM

virtctl stop $VM_NAME -n $SOURCE_NS

Wait for it to fully stop:

kubectl wait --for=delete vmi/$VM_NAME -n $SOURCE_NS --timeout=120s

Verify:

kubectl get vm $VM_NAME -n $SOURCE_NS -o jsonpath='{.status.printableStatus}'
# Should show: Stopped

Step 2: Export the VM Definition

Option A: Using kubectl-neat (Recommended)

Save the full VM spec with cluster-managed fields automatically stripped:

kubectl get vm $VM_NAME -n $SOURCE_NS -o yaml | \
  kubectl neat > /tmp/${VM_NAME}-vm-export.yaml

Option B: Manual Cleanup (No Extra Tools)

If you do not have kubectl-neat installed:

kubectl get vm $VM_NAME -n $SOURCE_NS -o yaml > /tmp/${VM_NAME}-vm-export.yaml

Then open /tmp/${VM_NAME}-vm-export.yaml in a text editor and remove the following fields:

  • metadata.uid
  • metadata.resourceVersion
  • metadata.creationTimestamp
  • metadata.generation
  • metadata.finalizers
  • metadata.managedFields (this can be very large)
  • Any metadata.annotations entries starting with kubevirt.io/
  • The entire status: block at the bottom of the file
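The single-line fields and the trailing status block can also be stripped with a rough script. This is a sketch, not a YAML-aware parser: it assumes status: is the last top-level block (as in kubectl output for this resource) and it does not handle nested blocks like managedFields or the kubevirt.io/ annotations, so kubectl-neat remains the safer option:

```shell
# Sketch: strip the trailing status block and common single-line metadata
# fields from an exported manifest. Not YAML-aware; review the result.
strip_cluster_fields() {
  # $1: input YAML, $2: output YAML
  awk '/^status:/{exit} {print}' "$1" | \
    sed -E '/^  (uid|resourceVersion|generation|creationTimestamp):/d' > "$2"
}

# Usage:
# strip_cluster_fields /tmp/${VM_NAME}-vm-export.yaml /tmp/${VM_NAME}-vm-clean.yaml
```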

Step 3: Clone All Disks to the Target Namespace

The following script automatically discovers all PVCs attached to the VM, reads their size, storage class, and access mode, generates the DataVolume clone manifests, and applies them. No manual YAML editing required.

Copy and paste this entire block:

#!/bin/bash
set -e

# --- These must match the variables you set earlier ---
# SOURCE_NS="virtual-machines"
# VM_NAME="my-vm"
# TARGET_NS="production-vms"

echo "Discovering disks for VM '$VM_NAME' in namespace '$SOURCE_NS'..."

# Extract all PVC names from the VM spec (handles both dataVolume and persistentVolumeClaim references)
PVC_NAMES=$(kubectl get vm "$VM_NAME" -n "$SOURCE_NS" -o go-template='{{range .spec.template.spec.volumes}}{{if .dataVolume}}{{.dataVolume.name}}{{"\n"}}{{else if .persistentVolumeClaim}}{{.persistentVolumeClaim.claimName}}{{"\n"}}{{end}}{{end}}')

if [ -z "$PVC_NAMES" ]; then
  echo "ERROR: No PVCs found for VM '$VM_NAME'. Does the VM exist in namespace '$SOURCE_NS'?"
  exit 1
fi

# Count disks
DISK_COUNT=$(echo "$PVC_NAMES" | wc -l | tr -d ' ')
echo "Found $DISK_COUNT disk(s) to clone:"
echo ""

# Display what will be cloned
printf "  %-50s %-10s %-20s %-15s\n" "PVC NAME" "SIZE" "STORAGE CLASS" "ACCESS MODE"
printf "  %-50s %-10s %-20s %-15s\n" "--------" "----" "-------------" "-----------"

while IFS= read -r PVC; do
  [ -z "$PVC" ] && continue
  SIZE=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.status.capacity.storage}' 2>/dev/null)
  SC=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.spec.storageClassName}' 2>/dev/null)
  ACCESS=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.spec.accessModes[0]}' 2>/dev/null)

  if [ -z "$SIZE" ]; then
    echo "  ERROR: Could not read PVC '$PVC' in namespace '$SOURCE_NS'. Is the VM stopped?"
    exit 1
  fi

  printf "  %-50s %-10s %-20s %-15s\n" "$PVC" "$SIZE" "$SC" "$ACCESS"
done <<< "$PVC_NAMES"

echo ""
read -p "Proceed with cloning these disks to namespace '$TARGET_NS'? (y/N): " CONFIRM
if [[ ! "$CONFIRM" =~ ^[Yy]$ ]]; then
  echo "Aborted."
  exit 0
fi

echo ""

# Clone each disk
while IFS= read -r PVC; do
  [ -z "$PVC" ] && continue

  SIZE=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.status.capacity.storage}')
  SC=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.spec.storageClassName}')
  ACCESS=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.spec.accessModes[0]}')

  echo "Cloning '$PVC' ($SIZE) to namespace '$TARGET_NS'..."

  cat <<EOF | kubectl apply -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ${PVC}
  namespace: ${TARGET_NS}
spec:
  source:
    pvc:
      name: ${PVC}
      namespace: ${SOURCE_NS}
  storage:
    accessModes:
      - ${ACCESS}
    resources:
      requests:
        storage: ${SIZE}
    storageClassName: ${SC}
EOF

done <<< "$PVC_NAMES"

echo ""
echo "Waiting for all clones to complete..."

# Wait for each DataVolume to succeed
FAILED=0
while IFS= read -r PVC; do
  [ -z "$PVC" ] && continue
  echo "  Waiting for '$PVC'..."
  if kubectl wait --for=jsonpath='{.status.phase}'=Succeeded "dv/${PVC}" -n "$TARGET_NS" --timeout=600s 2>/dev/null; then
    echo "  '$PVC' clone complete."
  else
    echo "  ERROR: '$PVC' clone did not complete within timeout. Check: kubectl get dv $PVC -n $TARGET_NS"
    FAILED=1
  fi
done <<< "$PVC_NAMES"

echo ""
if [ "$FAILED" -eq 0 ]; then
  echo "All disks cloned successfully to namespace '$TARGET_NS'."
else
  echo "WARNING: Some clones failed. Resolve errors before proceeding."
  exit 1
fi

What the Script Does

  1. Queries the VM spec to find all attached PVCs (both dataVolume and persistentVolumeClaim references)
  2. Reads each PVC's size, storage class, and access mode directly from the cluster
  3. Displays a summary table and asks for confirmation before proceeding
  4. Generates and applies a CDI DataVolume clone manifest for each disk
  5. Waits for all clones to reach Succeeded phase
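For a review-before-apply workflow, the manifest generation can be factored out of the script: the function below emits the same DataVolume heredoc the script pipes to kubectl apply, but writes it to stdout so it can be saved and inspected first (the function name is illustrative):

```shell
# Sketch: emit the same DataVolume clone manifest the script applies,
# so it can be reviewed or committed before applying.
gen_dv_manifest() {
  # args: pvc_name target_ns source_ns access_mode size storage_class
  cat <<EOF
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ${1}
  namespace: ${2}
spec:
  source:
    pvc:
      name: ${1}
      namespace: ${3}
  storage:
    accessModes:
      - ${4}
    resources:
      requests:
        storage: ${5}
    storageClassName: ${6}
EOF
}

# Example:
# gen_dv_manifest my-vm-data-disk production-vms virtual-machines \
#   ReadWriteMany 2Ti portworx-block > /tmp/my-vm-data-disk-dv.yaml
```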

Example Output

Discovering disks for VM 'my-vm' in namespace 'virtual-machines'...
Found 2 disk(s) to clone:

  PVC NAME                                           SIZE       STORAGE CLASS        ACCESS MODE
  --------                                           ----       -------------        -----------
  my-vm-redhat-9-static                              50Gi       portworx-block       ReadWriteMany
  my-vm-data-disk                                    2Ti        portworx-block       ReadWriteMany

Proceed with cloning these disks to namespace 'production-vms'? (y/N): y

Cloning 'my-vm-redhat-9-static' (50Gi) to namespace 'production-vms'...
datavolume.cdi.kubevirt.io/my-vm-redhat-9-static created
Cloning 'my-vm-data-disk' (2Ti) to namespace 'production-vms'...
datavolume.cdi.kubevirt.io/my-vm-data-disk created

Waiting for all clones to complete...
  Waiting for 'my-vm-redhat-9-static'...
  'my-vm-redhat-9-static' clone complete.
  Waiting for 'my-vm-data-disk'...
  'my-vm-data-disk' clone complete.

All disks cloned successfully to namespace 'production-vms'.

Tip: With Portworx CSI clones, this is typically fast because it uses copy-on-write at the storage layer.


Step 4: Update the VM Definition for the Target Namespace

Edit the exported VM YAML (/tmp/${VM_NAME}-vm-export.yaml):

  1. Change the namespace:

metadata:
  name: my-vm
  namespace: production-vms    # <-- Change this

  2. Remove dataVolumeTemplates (if present). Since the disks were already cloned in Step 3, the DataVolumes exist. Remove the entire spec.dataVolumeTemplates block:

spec:
  # DELETE this entire block:
  # dataVolumeTemplates:
  #   - metadata:
  #       name: my-vm-redhat-9-static
  #     spec:
  #       source:
  #         ...

  3. Verify volume references still point to the correct PVC/DataVolume names (they should, since Step 3 reused the same names).

  4. Update network references if needed. If the VM uses a Multus network like default/vlan-13, verify that the NetworkAttachmentDefinition (NAD) exists and is accessible from the target namespace:

kubectl get net-attach-def vlan-13 -n default

If the NAD is in the default namespace and referenced as default/vlan-13, it works from any namespace. If it is scoped to the source namespace, create a copy in the target namespace.

  5. Update cloud-init if the VM has static IP configuration or namespace-specific references embedded in userData or networkData.
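If a NAD is scoped to the source namespace, one hedged way to copy it is to rewrite its metadata.namespace line and re-apply (a sketch assuming the NAD lives in the source namespace and kubectl-neat from the prerequisites; vlan-13 is this guide's example NAD name):

```shell
# Sketch: rewrite the metadata.namespace line of a manifest on stdin.
# Review the output before applying it to the cluster.
rewrite_ns() {
  # $1: old namespace, $2: new namespace
  sed "s/^  namespace: ${1}\$/  namespace: ${2}/"
}

# Usage against a live cluster:
# kubectl get net-attach-def vlan-13 -n "$SOURCE_NS" -o yaml | kubectl neat | \
#   rewrite_ns "$SOURCE_NS" "$TARGET_NS" | kubectl apply -f -
```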

Step 5: Verify the Target Namespace Exists

The target namespace must already exist by Step 3 (the DataVolume clones are created in it), so this is only a safety check:

kubectl get ns $TARGET_NS || kubectl create ns $TARGET_NS

Step 6: Create the VM in the Target Namespace

kubectl apply -f /tmp/${VM_NAME}-vm-export.yaml

Verify it was created:

kubectl get vm -n $TARGET_NS

Step 7: Start the VM in the Target Namespace

virtctl start $VM_NAME -n $TARGET_NS

Watch it come up:

kubectl get vmi -n $TARGET_NS -w

Verify the VM is running and healthy:

kubectl get vmi $VM_NAME -n $TARGET_NS -o wide

Step 8: Validate

  • VM is Running in the target namespace
  • All disks are attached and accessible inside the guest
  • Network connectivity works (ping, SSH, etc.)
  • Any services or ingress rules referencing the VM are updated to the new namespace

Step 9: Clean Up the Source Namespace

Only do this after you have fully validated the VM in the target namespace.

Delete the original VM. If its DataVolumes were created via dataVolumeTemplates, they and their PVCs are deleted along with it:

kubectl delete vm $VM_NAME -n $SOURCE_NS

If any orphaned PVCs remain:

kubectl get pvc -n $SOURCE_NS | grep $VM_NAME
kubectl delete pvc <pvc-name> -n $SOURCE_NS

Troubleshooting

Issue: DataVolume clone stuck at 0%
Fix: Check the CDI pods: kubectl get pods -n cdi. Verify the source PVC is not in use (the VM must be stopped).

Issue: Clone fails with "cross-namespace clone not allowed"
Fix: Create a ClusterRole / ClusterRoleBinding granting CDI permission to read PVCs across namespaces. See the CDI RBAC docs.

Issue: VM starts but has no network
Fix: Verify the NetworkAttachmentDefinition is accessible from the target namespace. If it is namespace-scoped, recreate it in the target namespace.

Issue: VM starts but cannot find its disk
Fix: Ensure the PVC name in the VM spec matches the DataVolume name created in the target namespace.

Issue: dataVolumeTemplates causes CDI to re-clone from a golden image
Fix: Remove dataVolumeTemplates from the VM spec. The pre-cloned DataVolume/PVC already exists.