
Migrating from VMware to Proxmox: A practical approach

I moved a mid-sized estate from VMware to Proxmox over six months. This guide shows the Proxmox configuration decisions that mattered. Read it for practical steps on KVM setup, network configuration, backup patterns and firewall rules. I keep commands and checks specific so you can reproduce them.

Preparation matters more than the migration itself. Start with an inventory: list VM disks, guest OSes, CPU and memory, network VLANs and any tied licences. Make a short spreadsheet with VM name, vCPU, RAM, disk type (thin/thick), datastore path, VLANs and a note if the app binds to hardware. You want to spot VMDK chains, RDMs or vendor tools that expect VMware.

Plan the storage layout in Proxmox now. Decide whether to use ZFS on the hosts, LVM-Thin or an external SAN. ZFS gives snapshots and checksumming, but it needs RAM; plan roughly 1 GB of RAM per TB of stored data as a rule of thumb for small clusters.

Pick your target Proxmox node to be the migration sink. Prepare the node with the desired Proxmox configuration: set the correct time zone and NTP, register a repository if you use the paid repos, configure hostnames and set up a management bridge (vmbr0).

Decide on KVM setup choices up front: whether to run Linux workloads as QEMU guests or LXC containers. Containers save space and are simpler for OS-level services. Note which VMs need full device passthrough, then check IOMMU support on the hardware and enable it in the kernel boot arguments.
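The inventory spreadsheet can start life as a plain CSV. A minimal sketch, with hypothetical VM rows that you would in practice pull from RVTools or govc output:

```shell
#!/bin/sh
# Build the migration inventory as CSV. The VM rows below are
# placeholder examples, not real data.
CSV=inventory.csv
echo "vm,vcpu,ram_gb,disk_type,datastore,vlans,hw_bound" > "$CSV"
echo "web01,2,4,thin,ds1/web01/web01.vmdk,10,no"    >> "$CSV"
echo "db01,8,32,thick,ds2/db01/db01.vmdk,20;30,yes" >> "$CSV"
# Flag anything that binds to hardware -- those VMs need MAC pinning
# or device passthrough on the Proxmox side.
grep ',yes$' "$CSV"
```

The final column is the one that pays off later: every `yes` row gets a note in the migration checklist before you touch it.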

When you execute the migration, follow a repeatable process for each VM. Keep the steps numbered so you do the same thing every time:

  1. Take a working VMware snapshot or backup. Do not rely on hot-convert without a fallback.
  2. Export the disk. For a quick route, use qemu-img convert to turn the VMDK into qcow2: qemu-img convert -p -f vmdk -O qcow2 source.vmdk dest.qcow2. For large disks, convert on a host with fast IO; compression helps only if CPU time is cheap.
  3. Import into Proxmox storage with qm importdisk <vmid> dest.qcow2 <storage>. That writes the disk into Proxmox-managed storage and lets you attach it to the VM config.
  4. Recreate the VM config in Proxmox or adapt the existing file. Set machine type, BIOS (OVMF for UEFI guests), CPU type (host to pass through features) and ballooning if you use it.
  5. Test boot. Boot the VM with serial console or VNC and check guest drivers. Install QEMU guest agent in the guest and enable it in the VM options for clean shutdowns and better backups.
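Steps 2 and 3 can be wrapped in a small per-VM script. This is a sketch only: the VMID, paths and storage name ("local-zfs") are assumptions, and DRY_RUN=1 prints the commands instead of running them so you can review before executing:

```shell
#!/bin/sh
# Sketch of the convert-and-import steps for one VM.
# VMID, source/destination paths and storage name are assumptions.
VMID=105
SRC=/mnt/vmware/web01/web01.vmdk
DST=/tmp/web01.qcow2
STORAGE=local-zfs
DRY_RUN=1

# With DRY_RUN=1, print the command instead of executing it.
run() {
  if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

# Step 2: convert the VMDK to qcow2 (-p shows progress).
run qemu-img convert -p -f vmdk -O qcow2 "$SRC" "$DST"
# Step 3: attach the converted disk to the VM as an unused disk.
run qm importdisk "$VMID" "$DST" "$STORAGE"
```

Flip DRY_RUN to 0 once the printed commands match your checklist; keeping the wrapper makes the per-VM process identical every time, which is the point of the numbered steps above.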

For network configuration, match the VMware port groups and VLAN tagging. I use a simple pattern: a single management bridge vmbr0 for out-of-band management, then additional bridges per physical trunk or per function. On the Proxmox host configure linux bonding if you need NIC redundancy, then attach the bond to the bridge. For VLANs, let the bridge be VLAN-aware and tag on the guest NICs, or use separate bridges per VLAN if you prefer isolation. Keep an annotated diagram of your network mapping. If an app uses a specific MAC or licensing tied to NIC hardware, mirror the MAC in Proxmox to avoid licence resets. Test inter-VM and external connectivity with iperf and ping before switching DNS entries.
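The bond-plus-VLAN-aware-bridge pattern looks roughly like this in /etc/network/interfaces. Interface names (eno1/eno2), the bond mode and the addresses are assumptions for your hardware:

```
# Sketch: bond two NICs for redundancy, attach a VLAN-aware bridge.
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With a VLAN-aware bridge you then set the VLAN tag per guest NIC in the VM's hardware options, which maps cleanly onto VMware port groups.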

Pick a backup pattern and stick to it. I recommend pairing Proxmox with Proxmox Backup Server (PBS) if you can; PBS gives deduplicated, encrypted backups and faster restores. If you cannot use PBS, use vzdump in snapshot mode for consistent backups of KVM guests: vzdump <vmid> --mode snapshot --compress zstd --storage backup. Apply a simple retention policy: daily incrementals or fulls with a seven-day rotation and a weekly full kept longer offsite. For critical VMs add hourly incremental snapshots. Always verify backups by restoring at least one VM per month to a sandbox host and booting it. That verification step catches application-level problems that a raw file check never will. Label backups with the VMID, date and Proxmox node so you can trace the source quickly.
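The VMID/date/node labelling can be kept in a simple index file next to the backup job. A minimal sketch, where the VMIDs are placeholders and the real vzdump call is shown as a comment:

```shell
#!/bin/sh
# Sketch of a nightly backup wrapper that records VMID, date and node,
# matching the labelling suggested above. VMIDs are placeholders.
NODE=$(hostname)
STAMP=$(date +%Y-%m-%d)
LOG=backup-index.log
for VMID in 100 101 102; do
  # In production this loop body would run:
  #   vzdump "$VMID" --mode snapshot --compress zstd --storage backup
  echo "$VMID $STAMP $NODE" >> "$LOG"
done
```

When a restore is needed months later, the index answers "which node wrote this backup and when" without opening archives.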

Firewall rules get messy unless you standardise. Use the Proxmox host firewall for rules between management networks and the public edge. Enable the firewall on the datacenter level for broad rules, then on host or VM level for fine control. My practical pattern is:

  • Block all ports on management networks except SSH from a bastion.
  • Allow only required service ports on VM interfaces.
  • Use vendor-recommended port ranges for cluster communication and the Proxmox API.

When you need stateful filtering inside guests, prefer the guest firewall so rules stay with the VM if it moves nodes. Keep rules simple. Test them with nc or nmap from a controlled source. Document every rule as a one-line comment in your change log.
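As a sketch, the per-VM part of that pattern lives in /etc/pve/firewall/<vmid>.fw. The bastion address and service port here are placeholders:

```
# Sketch of /etc/pve/firewall/<vmid>.fw: default-deny inbound,
# with the two allowances the pattern above calls for.
[OPTIONS]
enable: 1
policy_in: DROP

[RULES]
# SSH only from the bastion host
IN ACCEPT -source 192.0.2.5 -p tcp -dport 22 # ssh-from-bastion
# The one service port this VM exposes
IN ACCEPT -p tcp -dport 443 # https-frontend
```

The trailing comments double as the one-line documentation for the change log.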

Post-migration checks and pitfalls. Watch licences and vendor behaviour after hardware changes. Some commercial software rebinds licences to hardware IDs and may require vendor reactivation; I had a couple of apps that needed support calls after moving their VM to new underlying hardware. Check monitoring and alerting: add the Proxmox cluster to your monitoring, and track replication health if you use DRBD or ZFS replication. Test failover by moving a non-critical VM between nodes and watching CPU, memory and IO metrics. Keep a short runbook with recovery commands: how to start a stopped cluster node, how to reattach lost storage, and how to import an exported disk to a new VMID.
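A minimal runbook sketch for those three recovery cases; the VMIDs, paths and storage names are illustrative, and the commands assume LVM-backed SAN storage:

```
# Runbook sketch -- commands are illustrative, values are placeholders.
# 1. Node recovered: check quorum, then start its VMs.
#      pvecm status
#      qm start 105
# 2. SAN storage lost and reattached: rescan and reactivate LVM.
#      pvscan
#      vgchange -ay
# 3. Import an exported disk into a fresh VMID.
#      qm importdisk 210 /backup/web01.qcow2 local-zfs
```

Keep the runbook in version control next to the inventory so both stay current as the cluster changes.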

Concrete takeaways: make an inventory and a migration checklist. Convert disks with qemu-img and use qm importdisk for Proxmox configuration. Standardise bridges and VLAN mapping in network configuration. Pick PBS or regular vzdump patterns and verify restores. Keep firewall rules minimal and documented. Expect licence issues and budget a day of vendor calls if needed. Follow these steps and the Proxmox migration will be repeatable and auditable rather than a leap of faith.
