Moving a KubeVirt VM Between Namespaces
KubeVirt does not have a native "move" operation for VMs across namespaces. The process involves cloning the VM's disk(s) to the target namespace, recreating the VM definition there, and then cleaning up the original.
Important: The VM must be stopped before starting this process. This ensures disk data is consistent and the PVC is not actively in use.
Prerequisites
- kubectl access to the cluster
- virtctl installed
- kubectl-neat installed (optional but recommended — simplifies VM export)
- The target namespace must already exist
- If using Multus networking, the appropriate NetworkAttachmentDefinition must exist in the target namespace (or in default with a cross-namespace reference)
Installing virtctl
virtctl is the KubeVirt CLI for managing virtual machines.
Option A: Download from GitHub Releases
# Get the latest version
KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | grep tag_name | cut -d '"' -f 4)
# Linux (amd64)
curl -L -o /usr/local/bin/virtctl \
https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
chmod +x /usr/local/bin/virtctl
# macOS (Apple Silicon)
curl -L -o /usr/local/bin/virtctl \
https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-darwin-arm64
chmod +x /usr/local/bin/virtctl
Option B: Install via krew (if krew is already installed)
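With krew already set up, virtctl is available as the virt plugin (it is then invoked as kubectl virt instead of virtctl):

```shell
kubectl krew install virt
```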
Installing krew (kubectl Plugin Manager)
krew is required to install kubectl-neat and other kubectl plugins.
Linux / WSL:
(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/aarch64/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)
macOS (Homebrew):
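On macOS, krew is available as a Homebrew formula:

```shell
brew install krew
```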
After installing, add krew to your PATH. Add this line to your ~/.bashrc, ~/.zshrc, or equivalent:
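The line to add is the one from the krew documentation:

```shell
# Put krew-installed plugins on the PATH
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
```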
Restart your shell or run source ~/.bashrc / source ~/.zshrc, then verify:
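A working installation reports its version:

```shell
kubectl krew version
```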
Installing kubectl-neat
kubectl-neat removes cluster-managed fields (UIDs, timestamps, managed fields, status) from kubectl output, making it easy to export clean, reusable YAML.
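kubectl-neat is distributed as a krew plugin:

```shell
kubectl krew install neat
```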
Verify:
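```shell
kubectl neat --help
```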
Variables
Set these once and use them throughout. Replace the example values with your own:
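These example values match the ones used in the script and sample output later in this guide:

```shell
SOURCE_NS="virtual-machines"   # namespace the VM currently lives in
VM_NAME="my-vm"                # name of the VM to move
TARGET_NS="production-vms"     # destination namespace
```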
Step 1: Stop the VM
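Using the variables set above:

```shell
virtctl stop $VM_NAME -n $SOURCE_NS
```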
Wait for it to fully stop:
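One way is to watch the VM status until it reports Stopped:

```shell
kubectl get vm $VM_NAME -n $SOURCE_NS -w
```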
Verify:
kubectl get vm $VM_NAME -n $SOURCE_NS -o jsonpath='{.status.printableStatus}'
# Should show: Stopped
Step 2: Export the VM Definition
Option A: Using kubectl-neat (Recommended)
Save the full VM spec with cluster-managed fields automatically stripped:
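With kubectl-neat installed, the export is a single pipeline (the output path is the one referenced again in Step 4):

```shell
kubectl get vm $VM_NAME -n $SOURCE_NS -o yaml | kubectl neat > /tmp/${VM_NAME}-vm-export.yaml
```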
Option B: Manual Cleanup (No Extra Tools)
If you do not have kubectl-neat installed:
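Dump the raw YAML to the same path:

```shell
kubectl get vm $VM_NAME -n $SOURCE_NS -o yaml > /tmp/${VM_NAME}-vm-export.yaml
```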
Then open /tmp/${VM_NAME}-vm-export.yaml in a text editor and remove the following fields:
- metadata.uid
- metadata.resourceVersion
- metadata.creationTimestamp
- metadata.generation
- metadata.finalizers
- metadata.managedFields (this can be very large)
- Any metadata.annotations entries starting with kubevirt.io/
- The entire status: block at the bottom of the file
Step 3: Clone All Disks to the Target Namespace
The following script automatically discovers all PVCs attached to the VM, reads their size, storage class, and access mode, generates the DataVolume clone manifests, and applies them. No manual YAML editing required.
Copy and paste this entire block:
#!/bin/bash
set -e

# --- These must match the variables you set earlier ---
# SOURCE_NS="virtual-machines"
# VM_NAME="my-vm"
# TARGET_NS="production-vms"

echo "Discovering disks for VM '$VM_NAME' in namespace '$SOURCE_NS'..."

# Extract all PVC names from the VM spec (handles both dataVolume and persistentVolumeClaim references)
PVC_NAMES=$(kubectl get vm "$VM_NAME" -n "$SOURCE_NS" -o go-template='{{range .spec.template.spec.volumes}}{{if .dataVolume}}{{.dataVolume.name}}{{"\n"}}{{else if .persistentVolumeClaim}}{{.persistentVolumeClaim.claimName}}{{"\n"}}{{end}}{{end}}')

if [ -z "$PVC_NAMES" ]; then
  echo "ERROR: No PVCs found for VM '$VM_NAME'. Does the VM exist in namespace '$SOURCE_NS'?"
  exit 1
fi

# Count disks
DISK_COUNT=$(echo "$PVC_NAMES" | wc -l | tr -d ' ')
echo "Found $DISK_COUNT disk(s) to clone:"
echo ""

# Display what will be cloned
printf "  %-50s %-10s %-20s %-15s\n" "PVC NAME" "SIZE" "STORAGE CLASS" "ACCESS MODE"
printf "  %-50s %-10s %-20s %-15s\n" "--------" "----" "-------------" "-----------"
while IFS= read -r PVC; do
  [ -z "$PVC" ] && continue
  SIZE=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.status.capacity.storage}' 2>/dev/null)
  SC=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.spec.storageClassName}' 2>/dev/null)
  ACCESS=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.spec.accessModes[0]}' 2>/dev/null)
  if [ -z "$SIZE" ]; then
    echo "  ERROR: Could not read PVC '$PVC' in namespace '$SOURCE_NS'. Is the VM stopped?"
    exit 1
  fi
  printf "  %-50s %-10s %-20s %-15s\n" "$PVC" "$SIZE" "$SC" "$ACCESS"
done <<< "$PVC_NAMES"
echo ""

read -p "Proceed with cloning these disks to namespace '$TARGET_NS'? (y/N): " CONFIRM
if [[ ! "$CONFIRM" =~ ^[Yy]$ ]]; then
  echo "Aborted."
  exit 0
fi
echo ""

# Clone each disk
while IFS= read -r PVC; do
  [ -z "$PVC" ] && continue
  SIZE=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.status.capacity.storage}')
  SC=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.spec.storageClassName}')
  ACCESS=$(kubectl get pvc "$PVC" -n "$SOURCE_NS" -o jsonpath='{.spec.accessModes[0]}')
  echo "Cloning '$PVC' ($SIZE) to namespace '$TARGET_NS'..."
  cat <<EOF | kubectl apply -f -
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ${PVC}
  namespace: ${TARGET_NS}
spec:
  source:
    pvc:
      name: ${PVC}
      namespace: ${SOURCE_NS}
  storage:
    accessModes:
      - ${ACCESS}
    resources:
      requests:
        storage: ${SIZE}
    storageClassName: ${SC}
EOF
done <<< "$PVC_NAMES"
echo ""

echo "Waiting for all clones to complete..."
# Wait for each DataVolume to succeed
FAILED=0
while IFS= read -r PVC; do
  [ -z "$PVC" ] && continue
  echo "  Waiting for '$PVC'..."
  if kubectl wait --for=jsonpath='{.status.phase}'=Succeeded "dv/${PVC}" -n "$TARGET_NS" --timeout=600s 2>/dev/null; then
    echo "  '$PVC' clone complete."
  else
    echo "  ERROR: '$PVC' clone did not complete within timeout. Check: kubectl get dv $PVC -n $TARGET_NS"
    FAILED=1
  fi
done <<< "$PVC_NAMES"
echo ""

if [ "$FAILED" -eq 0 ]; then
  echo "All disks cloned successfully to namespace '$TARGET_NS'."
else
  echo "WARNING: Some clones failed. Resolve errors before proceeding."
  exit 1
fi
What the Script Does
- Queries the VM spec to find all attached PVCs (both dataVolume and persistentVolumeClaim references)
- Reads each PVC's size, storage class, and access mode directly from the cluster
- Displays a summary table and asks for confirmation before proceeding
- Generates and applies a CDI DataVolume clone manifest for each disk
- Waits for all clones to reach the Succeeded phase
Example Output
Discovering disks for VM 'my-vm' in namespace 'virtual-machines'...
Found 2 disk(s) to clone:
PVC NAME SIZE STORAGE CLASS ACCESS MODE
-------- ---- ------------- -----------
my-vm-redhat-9-static 50Gi portworx-block ReadWriteMany
my-vm-data-disk 2Ti portworx-block ReadWriteMany
Proceed with cloning these disks to namespace 'production-vms'? (y/N): y
Cloning 'my-vm-redhat-9-static' (50Gi) to namespace 'production-vms'...
datavolume.cdi.kubevirt.io/my-vm-redhat-9-static created
Cloning 'my-vm-data-disk' (2Ti) to namespace 'production-vms'...
datavolume.cdi.kubevirt.io/my-vm-data-disk created
Waiting for all clones to complete...
Waiting for 'my-vm-redhat-9-static'...
'my-vm-redhat-9-static' clone complete.
Waiting for 'my-vm-data-disk'...
'my-vm-data-disk' clone complete.
All disks cloned successfully to namespace 'production-vms'.
Tip: With Portworx CSI clones, this is typically fast because it uses copy-on-write at the storage layer.
Step 4: Update the VM Definition for the Target Namespace
Edit the exported VM YAML (/tmp/${VM_NAME}-vm-export.yaml):
- Change metadata.namespace from the source namespace to the target namespace.
- Remove dataVolumeTemplates (if present). Since we already cloned the disks in Step 3, the DataVolumes exist. Remove the entire spec.dataVolumeTemplates block:
spec:
  # DELETE this entire block:
  # dataVolumeTemplates:
  #   - metadata:
  #       name: my-vm-redhat-9-static
  #     spec:
  #       source:
  #         ...
- Verify volume references still point to the correct PVC/DataVolume names (they should, since we used the same names in Step 3).
- Update network references if needed. If the VM uses a Multus network like default/vlan-13, verify that the NetworkAttachmentDefinition (NAD) exists and is accessible from the target namespace:
If the NAD is in the default namespace with the reference default/vlan-13, it works from any namespace. If it is namespace-scoped to the source namespace, you will need to create a copy in the target namespace.
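Assuming the Multus CRD is installed under its standard resource name, the NADs visible in each namespace can be listed with:

```shell
kubectl get network-attachment-definitions -n default
kubectl get network-attachment-definitions -n $TARGET_NS
```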
- Update cloud-init if the VM has static IP configuration or namespace-specific references embedded in userData or networkData.
Step 5: Create the Target Namespace (If Needed)
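If the namespace does not exist yet:

```shell
kubectl create namespace $TARGET_NS
```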
Step 6: Create the VM in the Target Namespace
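Apply the edited export from Step 4 (its metadata.namespace now points at the target namespace):

```shell
kubectl apply -f /tmp/${VM_NAME}-vm-export.yaml
```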
Verify it was created:
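```shell
kubectl get vm $VM_NAME -n $TARGET_NS
```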
Step 7: Start the VM in the Target Namespace
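```shell
virtctl start $VM_NAME -n $TARGET_NS
```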
Watch it come up:
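The VirtualMachineInstance appears once the VM boots:

```shell
kubectl get vmi $VM_NAME -n $TARGET_NS -w
```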
Verify the VM is running and healthy:
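One way to check from the cluster side and then from inside the guest:

```shell
kubectl get vmi $VM_NAME -n $TARGET_NS
virtctl console $VM_NAME -n $TARGET_NS   # log in and inspect disks/network directly
```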
Step 8: Validate
- VM is Running in the target namespace
- All disks are attached and accessible inside the guest
- Network connectivity works (ping, SSH, etc.)
- Any services or ingress rules referencing the VM are updated to the new namespace
Step 9: Clean Up the Source Namespace
Only do this after you have fully validated the VM in the target namespace.
Delete the original VM (this will also delete its DataVolumes and PVCs):
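```shell
kubectl delete vm $VM_NAME -n $SOURCE_NS
```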
If any orphaned PVCs remain:
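List what is left and delete PVCs individually (substitute the actual PVC name for the placeholder):

```shell
kubectl get pvc -n $SOURCE_NS
kubectl delete pvc <pvc-name> -n $SOURCE_NS
```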
Troubleshooting
| Issue | Solution |
|---|---|
| DataVolume clone stuck at 0% | Check CDI pods: kubectl get pods -n cdi. Verify the source PVC is not in use (VM must be stopped). |
| Clone fails with "cross-namespace clone not allowed" | Create a ClusterRole / ClusterRoleBinding granting CDI permission to read PVCs across namespaces. See CDI RBAC docs. |
| VM starts but no network | Verify the NetworkAttachmentDefinition is accessible from the target namespace. If it's namespace-scoped, recreate it in the target namespace. |
| VM starts but can't find disk | Ensure the PVC name in the VM spec matches the DataVolume name created in the target namespace. |
| dataVolumeTemplates causes CDI to re-clone from golden image | Remove dataVolumeTemplates from the VM spec. The pre-cloned DataVolume/PVC already exists. |