I had a Proxmox VM migration leave services unreachable for a few minutes. The VMs had moved hosts cleanly, but traffic still went to the old switch port. That usually means the switch, router, or local neighbour cache is still holding the old MAC or ARP entry.
What you see
Symptoms of VM inaccessibility
- VM pings fail straight after migration.
- SSH or application ports do not respond for a short while.
- ARP shows the wrong MAC for the VM IP on the gateway or host.
Common error messages
- `ping: sendmsg: Operation not permitted`
- `ping: transmit failed. General failure`
- `ping: sendto: Host is unreachable`
- bridge or kernel logs showing unknown neighbour or stale entries.
Timeframes for downtime
- The gap is usually seconds to several minutes.
- If the switch MAC or ARP aging is long, downtime matches that timer.
- Cheap or unmanaged switches can take longer to relearn MAC addresses.
Where it happens
Network switch involvement
- Physical switches learn MACs per port. After a Proxmox VM migration, the VM’s MAC moves to a different host and port. If the switch still maps that MAC to the old port, traffic goes the wrong way.
- Managed switches let you view and clear MAC and ARP tables. Unmanaged switches do not, and they can be slow to adapt.
Impact of ARP table updates
- Hosts and routers cache IP-to-MAC mappings in ARP. If the ARP entry points at the old MAC or is missing, traffic fails until ARP is refreshed.
- Linux hosts also cache neighbours: `ip neigh` shows the kernel ARP/NDP cache.
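Stale entries can be spotted mechanically by filtering on the cache state. A minimal sketch, using made-up sample output in place of a live `ip neigh show` run (the addresses and MACs are the placeholder values used throughout this post):

```shell
# Sample `ip neigh show` output; on a real host, capture it with:
#   sample=$(ip neigh show)
sample='192.0.2.10 dev vmbr0 lladdr aa:bb:cc:dd:ee:ff STALE
192.0.2.20 dev vmbr0 lladdr 11:22:33:44:55:66 REACHABLE
192.0.2.30 dev vmbr0 INCOMPLETE'

# Print every entry whose state is not REACHABLE; after a migration these
# are the candidates for a flush or a gratuitous ARP refresh.
suspect=$(printf '%s\n' "$sample" | awk '$NF != "REACHABLE" {print $1, $NF}')
printf '%s\n' "$suspect"
```

Run against real output, this surfaces the `STALE` and `INCOMPLETE` entries worth investigating first.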
VM accessibility issues
- On the destination host the VM is running fine locally. Networking fails because upstream devices still send frames to the previous port.
- That makes it look like migration broke the VM. The VM is running; the network path is stale.
Find the cause
Diagnosing switch configurations
- On a managed switch, list MAC table entries. Cisco example:

  ```
  show mac address-table | include aa:bb:cc:dd:ee:ff
  ```

  Expected: MAC mapped to the new host port. Actual problem: MAC points to the old port.
- For ARP on an IP router:

  ```
  show ip arp | include 192.0.2.10
  ```

  Expected: IP mapped to the MAC of the destination host. Actual: old MAC or no entry.
Checking ARP and MAC table settings
- On Linux, check ARP and neighbour entries:

  ```shell
  ip neigh show | grep 192.0.2.10
  ```

  Example outputs:

  - Good: `192.0.2.10 dev vmbr0 lladdr aa:bb:cc:dd:ee:ff REACHABLE`
  - Bad: `192.0.2.10 dev vmbr0 INCOMPLETE`
  - Stale: `192.0.2.10 dev vmbr0 lladdr aa:bb:cc:dd:ee:ff STALE`
- Check the bridge FDB on the host:

  ```shell
  bridge fdb show
  ```

  Expected: the VM MAC present on the host where the VM runs. Actual: MAC on the other host or missing.
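The FDB check can also be scripted. A sketch using fabricated `bridge fdb show` lines (the `tap100i0` port name is an assumption for illustration; real data comes from the host):

```shell
# Sample `bridge fdb show` output; on a real host, capture it with:
#   sample=$(bridge fdb show)
sample='aa:bb:cc:dd:ee:ff dev tap100i0 master vmbr0
11:22:33:44:55:66 dev eno1 master vmbr0'

vm_mac='aa:bb:cc:dd:ee:ff'
# Report which bridge port learned the VM's MAC. On the destination host
# this should be the VM's tap device; elsewhere, the uplink or nothing.
port=$(printf '%s\n' "$sample" | awk -v m="$vm_mac" '$1 == m {print $3; exit}')
echo "MAC $vm_mac seen on port: ${port:-none}"
```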
Identifying host migration problems
- Confirm the VM's MAC after migration:

  ```shell
  ip link show dev vmbr0
  qm config <vmid>   # lists the VM's NICs and their MACs
  ```

- Confirm the VM process is listening. From the destination host:

  ```shell
  ss -tlnp | grep :22
  ```

- If the VM listens locally but the network path fails, the problem is outside Proxmox. That points at switch MAC or ARP caching.
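The triage logic above boils down to two observations. A sketch of that decision as a small shell function (the name `classify` and its inputs are invented for illustration):

```shell
# Decide where the fault sits from the neighbour-cache state for the VM's
# IP and whether the VM's MAC appears in the local bridge FDB.
classify() {
  neigh_state=$1
  mac_in_local_fdb=$2
  if [ "$neigh_state" = "REACHABLE" ] && [ "$mac_in_local_fdb" = "yes" ]; then
    echo "path healthy"
  elif [ "$mac_in_local_fdb" = "yes" ]; then
    echo "stale upstream entry: clear switch tables, send gratuitous ARP"
  else
    echo "VM not attached here: check the migration target and bridge config"
  fi
}

classify STALE yes
```

Feed it the state from `ip neigh` and the FDB check to get the next step.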
Fix
Steps to clear ARP tables
- On the Linux hosts:

  ```shell
  ip neigh flush all
  bridge fdb flush dev vmbr0   # needs a recent iproute2; older versions can only delete per entry
  ```

  Or target a single IP or MAC:

  ```shell
  ip neigh del 192.0.2.10 dev vmbr0
  bridge fdb del aa:bb:cc:dd:ee:ff dev vmbr0
  ```

  Expected result: `ip neigh` shows a fresh entry once traffic flows, and `bridge fdb show` shows the MAC on the correct host.
- On common switches:

  - Cisco:

    ```
    clear mac address-table dynamic address aa:bb:cc:dd:ee:ff
    clear arp-cache
    ```

  - Other vendors use similar commands. If you cannot clear per-MAC, clear the whole dynamic table or bounce the switch port.
Forcing ARP updates from the VM or host
- From the VM or its host, send gratuitous ARP:

  ```shell
  arping -c 3 -A -I eth0 192.0.2.10
  ```

  Or update the local neighbour entry directly:

  ```shell
  ip neigh replace 192.0.2.10 lladdr aa:bb:cc:dd:ee:ff dev eth0 nud reachable
  ```

  Expected: the gateway and switch learn the new MAC quickly; traffic comes back once the switch accepts the gratuitous ARP.
Adjusting switch settings
- Reduce MAC aging or ARP cache timers on the switch to speed relearning. Commands vary by vendor. On Cisco switches, set the MAC aging time to a lower value:

  ```
  mac-address-table aging-time 120
  ```

  Match timers to how often you migrate VMs. If live migration is frequent, use a lower timer.
- On unmanaged or cheap switches, replace the hardware if the problem keeps coming back. Cheap switches can have poor MAC learning.
Testing VM connectivity
- After clearing entries and sending gratuitous ARP, test:

  ```shell
  ping -c 3 192.0.2.10
  arp -n | grep 192.0.2.10
  ip neigh show 192.0.2.10
  bridge fdb show | grep aa:bb:cc:dd:ee:ff
  ```

  Expected: ping succeeds, ARP maps to the correct MAC, and the bridge FDB lists the MAC on the destination host.
Check it’s fixed
Confirming VM accessibility post-fix
- Run a sequence of checks after a migration:

  - Confirm the VM is running on the destination host:

    ```shell
    qm status <vmid>
    ```

  - Confirm the VM's MAC is in the bridge FDB on that host:

    ```shell
    bridge fdb show | grep aa:bb:cc:dd:ee:ff
    ```

  - From a remote node, ping and perform a TCP connect:

    ```shell
    ping -c 3 192.0.2.10
    nc -vz 192.0.2.10 22
    ```
Monitoring ongoing performance
- Watch for repeat occurrences. If downtime repeats on every Proxmox VM migration, the switch is the likely root cause.
- Automate a small post-migration script on the destination host to send gratuitous ARP and flush local caches:

  ```shell
  #!/bin/sh
  # VM_IP must be set to the migrated VM's address, e.g. 192.0.2.10.
  VM_IP=${VM_IP:?set VM_IP to the migrated VM address}
  ip neigh flush all
  arping -c 3 -A -I eth0 "$VM_IP"
  bridge fdb flush dev vmbr0   # recent iproute2 only; otherwise skip
  ```

  Run this as a hook after migration.
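Proxmox can run such a script automatically through a VM hookscript. A hedged sketch: the file path, NIC name, and VM IP below are assumptions, and whether the `post-start` phase fires on the migration target can vary by Proxmox version, so verify the behaviour on your cluster before relying on it.

```shell
# Write a hookscript; Proxmox calls it with two arguments: VM ID and phase.
cat > /tmp/post-migrate-hook.sh <<'EOF'
#!/bin/sh
vmid=$1
phase=$2
VM_IP=192.0.2.10   # assumed address of the migrated VM

case $phase in
  post-start)
    # VM just started on this node: refresh caches and announce its MAC.
    ip neigh flush all
    arping -c 3 -A -I vmbr0 "$VM_IP"
    ;;
  *)
    echo "vm $vmid: nothing to do in phase $phase"
    ;;
esac
EOF
chmod +x /tmp/post-migrate-hook.sh

# Phases other than post-start fall through harmlessly:
/tmp/post-migrate-hook.sh 100 pre-stop
```

To attach it, store the script in a snippets-enabled storage and run `qm set <vmid> --hookscript local:snippets/post-migrate-hook.sh`.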
Document the commands run and the output. Keep the switch MAC and ARP dumps with timestamps. Note the switch model and firmware. If a particular switch model does not relearn MACs cleanly, that is worth writing down before it wastes another afternoon.
Stale MAC or ARP entries on the physical switch or upstream router are the usual cause after Proxmox VM migration. Clearing those entries, sending gratuitous ARP from the VM or host, or reducing aging timers usually fixes the common case.
If the VM still fails after these checks, probe further: capture traffic on the old and new switch ports, confirm VLAN configuration, and check for asymmetric routing.

