Migrating from VMware to Proxmox

I moved a mid-sized estate from VMware to Proxmox over six months. The bits that mattered were the Proxmox configuration, the network mapping, the backup pattern, and the firewall rules. The rest was just tedious.

Preparation matters more than the move itself. Start with an inventory: list VM disks, guest OSes, CPU and memory, network VLANs, and any tied licences. I made a spreadsheet with VM name, vCPU, RAM, disk type, datastore path, VLANs, and a note if the app binds to hardware. That makes VMDK chains, RDMs, and vendor tools that expect VMware obvious before you are halfway through a conversion.
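
If the estate lives in vCenter, govc can seed that spreadsheet. A minimal sketch, assuming govc is installed and GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD are exported; the exact output fields vary by govc version:

# list every VM, then append its config and hardware summary
govc find / -type m | while read -r vm; do
    govc vm.info "$vm"
done > vmware-inventory.txt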

Sort the storage layout in Proxmox before you start. Decide whether to use ZFS on the hosts, LVM-Thin, or an external SAN. ZFS gives snapshots and checksumming, but it needs RAM; I used roughly 1GB of RAM per TB of pool storage as a rule of thumb for small clusters. Pick the Proxmox node that will take the migration. Set the time zone and NTP, enable the enterprise repository if you have a subscription, configure hostnames, and set up a management bridge (vmbr0). Decide up front whether Linux workloads stay as QEMU guests or become LXC containers. Containers save space and are simpler for OS-level services. For anything needing passthrough, check IOMMU support on the hardware and enable it in the kernel boot args, as in the sketch below.
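
A minimal sketch of the ZFS route, with device names and the pool and storage names as placeholders rather than a recommended layout:

# mirrored pool with 4K sectors, registered as Proxmox VM storage
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
pvesm add zfspool tank-vm --pool tank --content images,rootdir
# for passthrough on Intel, add intel_iommu=on to the kernel boot args
# (e.g. in /etc/default/grub, then update-grub and reboot); recent
# kernels enable the AMD IOMMU by default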

For each VM, I kept the same process:

  1. Take a working VMware snapshot or backup. Do not rely on hot-convert without a fallback.
  2. Export the disk. For a quick route, use qemu-img convert to turn the VMDK into qcow2:

qemu-img convert -p -f vmdk -O qcow2 source.vmdk dest.qcow2

For large disks, convert on a host with fast IO; compression only helps when CPU is cheaper than the IO it saves.

  3. Import into Proxmox storage with qm importdisk <vmid> dest.qcow2 <storage>. That writes the disk into Proxmox-managed storage and lets you attach it to the VM config.
  4. Recreate the VM config in Proxmox or adapt the existing file. Set machine type, BIOS (OVMF for UEFI guests), CPU type (host to pass through features), and ballooning if you use it.
  5. Boot the VM with serial console or VNC and check guest drivers. Install the QEMU guest agent in the guest and enable it in the VM options for clean shutdowns and better backups. A consolidated sketch of steps 3 to 5 follows this list.
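
Pulled together, steps 3 to 5 look roughly like this. The VMID, VM name, storage name, and bridge are placeholders:

# create an empty UEFI VM shell, import the converted disk, attach and boot it
qm create 9001 --name web01 --memory 8192 --cores 4 \
    --net0 virtio,bridge=vmbr0 --bios ovmf --machine q35
qm importdisk 9001 dest.qcow2 local-zfs
qm set 9001 --scsihw virtio-scsi-pci --scsi0 local-zfs:vm-9001-disk-0
qm set 9001 --boot order=scsi0 --agent enabled=1
# OVMF guests also want an EFI disk: qm set 9001 --efidisk0 local-zfs:1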

For network configuration, match the VMware port groups and VLAN tagging. My usual pattern was a single management bridge (vmbr0) for out-of-band management, then additional bridges per physical trunk or per function. If you need NIC redundancy, configure Linux bonding on the Proxmox host, then attach the bond to the bridge. For VLANs, either make the bridge VLAN-aware and tag on the guest NICs, or use separate bridges per VLAN if you want hard separation. Keep an annotated network map. If an app uses a specific MAC or licensing tied to NIC hardware, mirror the MAC in Proxmox to avoid licence resets. Test inter-VM and external connectivity with iperf and ping before you touch DNS.
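
For reference, a sketch of the bond-plus-VLAN-aware-bridge pattern in /etc/network/interfaces; NIC names, addresses, and VLAN ranges are placeholders:

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 10.0.10.11/24
    gateway 10.0.10.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

Guests then tag per NIC, e.g. qm set 105 --net0 virtio,bridge=vmbr0,tag=20.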

Pick a backup pattern and stick to it. Pair Proxmox with Proxmox Backup Server (PBS) if you can; PBS gives deduplicated, encrypted backups and faster restores. If not, use vzdump in snapshot mode for consistent backups of KVM guests:

vzdump <vmid> --mode snapshot --compress zstd --storage backup

Use a simple retention policy: daily incremental or full with a seven-day rotation, plus a weekly full kept longer offsite. For critical VMs, add hourly incremental snapshots. Restore at least one VM per month to a sandbox host and boot it. A file check is not enough. Label backups with the VMID, date, and Proxmox node so the source is easy to trace.
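
On recent Proxmox versions vzdump can prune as it goes, which keeps the rotation honest. A sketch of the seven-day pattern, with the VMID and storage name as placeholders:

vzdump 105 --mode snapshot --compress zstd --storage backup \
    --prune-backups keep-daily=7,keep-weekly=4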

Firewall rules get messy unless you standardise them. Use the Proxmox host firewall for rules between management networks and the public edge. Enable the firewall at datacentre level for broad rules, then use host or VM level for finer control. My pattern was:

  • Block all ports on management networks except SSH from a bastion.
  • Allow only required service ports on VM interfaces.
  • Allow only the documented Proxmox ports for cluster communication and the API (TCP 8006 for the web UI and API, plus corosync between nodes) rather than broad ranges.

When you need stateful filtering inside guests, keep the guest firewall so the rules move with the VM. Keep the rules simple. Test them with nc or nmap from a controlled source. Write a one-line comment for every rule in the change log.
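
On disk, a VM-level rule set lives in /etc/pve/firewall/<vmid>.fw. A minimal sketch of the pattern above, with the addresses and ports as placeholders:

[OPTIONS]
enable: 1
policy_in: DROP

[RULES]
IN SSH(ACCEPT) -source 10.0.50.5 # SSH from the bastion only
IN ACCEPT -p tcp -dport 443 # the one service port this VM exposes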

After the move, watch licences and vendor behaviour. Some commercial software rebinds licences to hardware IDs and may need vendor reactivation. A couple of apps in my estate needed support calls after they landed on new hardware. Check monitoring and alerting too. Add the Proxmox cluster to monitoring, and track replication health if you use DRBD or ZFS replication. Test failover by moving a non-critical VM between nodes and watching CPU, memory, and IO metrics. Keep a short runbook with recovery commands: how to start a stopped cluster node, how to remount lost storage, and how to import an exported disk under a new VMID.
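
A few commands worth keeping in that runbook, as a sketch with placeholder VMID and node names:

pvecm status                    # cluster membership and quorum
pvesr status                    # state of ZFS replication jobs
qm unlock 105 && qm start 105   # recover a VM stuck after a failed task
qm migrate 105 pve2 --online    # test live migration to another node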

The useful bits were simple enough: keep an inventory, convert disks with qemu-img, import with qm importdisk, standardise bridges and VLAN mapping, pick PBS or vzdump, and verify restores. Leave the firewall rules minimal and documented, and budget time for licence grief. That was the difference between a migration and a long afternoon of swearing at new hardware.
