// benchmark results -- complete matrix

ESXi VM Migration
16-Point Performance Matrix

naming: esxi_{src}_nfs_{dst}_{nic}_mtu{mtu} -- the folder name is the full test spec, no guessing required
same 32 GB provisioned VM (~19 GB allocated; ~41% sparse) · NFS async · vmkfstools thin provisioning · all 2x2x2x2 combinations measured
units: MB/GB are binary (MB=2^20 bytes, GB=2^30 bytes) · sizes and throughput are computed on allocated bytes unless stated otherwise
instrumented: esxtop SCSI counters (source reads) + iostat (destination writes)
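worked example: the baseline row's throughput falls straight out of those units -- 19 GiB allocated, transferred in 3m 56s (236 s):

```shell
# Throughput = allocated bytes / wall time, in binary MB (MB = 2^20 bytes).
# 19 GiB allocated = 19 * 1024 MiB; baseline HDD->HDD 1G run took 3m 56s = 236 s.
awk 'BEGIN { printf "%.1f MB/s\n", (19 * 1024) / 236 }'   # prints 82.4 MB/s
```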
Key Numbers
3.5x
SSD->NVMe 25G vs HDD->HDD 1G
best case vs worst case. proven.
290 MB/s
peak: SSD->NVMe 25G MTU9000
NVMe sitting at 0.2% utilisation. bored.
+0.4 MB/s
NVMe vs HDD destination on 1G
not a typo. the 1G wall doesn't care.
+4%
MTU 9000 vs 1500, everywhere
free. consistent. zero excuses not to.
24x
NFS sync penalty -- measured
SSD->NVMe 25G: 290->12 MB/s
one line in /etc/exports. don't.
Full 16-Test Matrix
MTU in bytes · Src Read = esxtop SCSI counters · Dst Write = iostat

Source | Dest | NIC | MTU       | Duration   | Throughput MB/s | vs Baseline | Src Read IOPS | Src Read MB/s | Dst Write IOPS | Dst Write MB/s
HDD    | HDD  | 1G  | 1500      | 3m 56s     | 82.4            | baseline    | 2,206         | 138           | 1,211          | 89
HDD    | HDD  | 1G  | 9000      | 3m 47s     | 85.7            | 1.0x        | 2,311         | 145           | 1,230          | 92
HDD    | HDD  | 1G  | 1500 SYNC | 2h 49m 58s | 6.5             | 0.1x        | 287           | 18            | --             | 14
HDD    | NVMe | 1G  | 1500      | 3m 54s     | 82.9            | 1.0x        | 2,204         | 138           | 1,434          | 90
HDD    | NVMe | 1G  | 9000      | 3m 46s     | 85.8            | 1.0x        | 2,304         | 144           | 1,468          | 92
HDD    | HDD  | 25G | 1500      | 2m 56s     | 110.2           | 1.3x        | 2,955         | 185           | 414            | 120
HDD    | HDD  | 25G | 9000      | 2m 49s     | 114.5           | 1.4x        | 3,044         | 190           | 443            | 128
HDD    | HDD  | 25G | 1500 SYNC | 2h 49m 49s | 6.5             | 0.1x        | 278           | 17            | --             | 9
HDD    | NVMe | 25G | 1500      | 2m 38s     | 122.9           | 1.5x        | 3,238         | 202           | 2,157          | 138
HDD    | NVMe | 25G | 9000      | 2m 32s     | 127.4           | 1.5x        | 3,349         | 209           | 2,237          | 143
SSD    | HDD  | 1G  | 1500      | 3m 18s     | 98.1            | 1.2x        | 2,604         | 163           | 1,375          | 101
SSD    | HDD  | 1G  | 9000      | 3m 09s     | 102.6           | 1.2x        | 2,751         | 172           | 1,423          | 107
SSD    | NVMe | 1G  | 1500      | 3m 17s     | 98.6            | 1.2x        | 2,594         | 162           | 1,617          | 101
SSD    | NVMe | 1G  | 9000      | 3m 08s     | 103.0           | 1.2x        | 2,740         | 171           | 1,707          | 107
SSD    | HDD  | 25G | 1500      | 2m 05s     | 155.1           | 1.9x        | 4,175         | 261           | 249            | 160
SSD    | HDD  | 25G | 9000      | 2m 03s     | 157.4           | 1.9x        | 4,184         | 262           | 247            | 160
SSD    | NVMe | 25G | 1500      | 1m 08s     | 285.6           | 3.5x        | 7,164         | 448           | 4,724          | 296
SSD    | NVMe | 25G | 9000      | 1m 07s     | 289.7           | 3.5x        | 7,693         | 481           | 4,712          | 296
SSD    | NVMe | 25G | 9000 SYNC | 26m 21s    | 12.3            | 0.1x        | 338           | 21            | --             | 17
⚠ SYNC rows -- NFS sync mode forces the server to confirm each write to persistent disk before the client can continue. Penalty grows with faster hardware: 13x on HDD->HDD 1G · 17x on HDD->HDD 25G · 24x on SSD->NVMe 25G. You built a 290 MB/s pipeline. One config option turns it into 12 MB/s. The faster your stack, the more you lose. Fix: add async to /etc/exports, run exportfs -ra. Verify with exportfs -v | grep async.
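The fix above, as a sketch -- the export path and client subnet are placeholders, not values from the test rig:

```shell
# /etc/exports -- 'async' lets the server acknowledge writes from page cache
# instead of forcing each one to persistent disk first.
# /srv/nfs/vmstore  192.168.0.0/24(rw,async,no_subtree_check)

exportfs -ra                  # re-read /etc/exports without restarting nfsd
exportfs -v | grep async      # confirm the export now lists async
```

Trade-off to be aware of: async means acknowledged writes can be lost if the NFS server crashes mid-migration, which is usually acceptable for a copy job you can rerun.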
Throughput at a Glance
sorted by MB/s, fastest first · peak 289.7 MB/s

SSD->NVMe 25G MTU9000: 290 MB/s
SSD->NVMe 25G MTU1500: 286 MB/s
SSD->HDD 25G MTU9000: 157 MB/s
SSD->HDD 25G MTU1500: 155 MB/s
HDD->NVMe 25G MTU9000: 127 MB/s
HDD->NVMe 25G MTU1500: 123 MB/s
HDD->HDD 25G MTU9000: 115 MB/s
HDD->HDD 25G MTU1500: 110 MB/s
SSD->NVMe 1G MTU9000: 103 MB/s
SSD->HDD 1G MTU9000: 103 MB/s
SSD->NVMe 1G MTU1500: 99 MB/s
SSD->HDD 1G MTU1500: 98 MB/s
HDD->NVMe 1G MTU9000: 86 MB/s
HDD->HDD 1G MTU9000: 86 MB/s
HDD->NVMe 1G MTU1500: 83 MB/s
HDD->HDD 1G MTU1500: 82 MB/s ← baseline

⚠ NFS sync mode -- same hardware, async disabled
SSD->NVMe 25G MTU9000 SYNC: 12.3 MB/s
HDD->HDD 25G MTU1500 SYNC: 6.5 MB/s
HDD->HDD 1G MTU1500 SYNC: 6.5 MB/s
Isolated Variable Effects -- Hard Evidence, No Assumptions
MTU 1500 -> 9000
Squeezing the lemon. Consistent +3-4 MB/s absolute gain across every single combination.

The % looks better on slower setups -- not because the lemon got juicier, just because the glass was smaller. The absolute improvement is remarkably flat. It's free. Set it on all three hops. There is no reason not to.
HDD->HDD 1G: +3.3 MB/s (+4.0%)
HDD->NVMe 1G: +3.0 MB/s (+3.6%)
SSD->HDD 1G: +4.4 MB/s (+4.5%)
SSD->NVMe 1G: +4.4 MB/s (+4.5%)
HDD->HDD 25G: +4.4 MB/s (+4.0%)
HDD->NVMe 25G: +4.5 MB/s (+3.6%)
SSD->HDD 25G: +2.2 MB/s (+1.4%)
SSD->NVMe 25G: +4.2 MB/s (+1.5%)
HDD -> NVMe destination
On 1G: an expensive paperweight. The 1G ceiling doesn't care what's behind it, and neither should you.

On 25G + SSD source: +130 MB/s, +84%. Completely different story. NVMe only matters when both the network and source can actually feed it. Upgrade order: network first, then storage.
HDD src, 1G, MTU1500: +0.4 MB/s (+0.5%)
HDD src, 1G, MTU9000: +0.1 MB/s (+0.1%)
SSD src, 1G, MTU1500: +0.5 MB/s (+0.5%)
SSD src, 1G, MTU9000: +0.4 MB/s (+0.4%)
HDD src, 25G, MTU1500: +12.7 MB/s (+11.5%)
HDD src, 25G, MTU9000: +12.8 MB/s (+11.2%)
SSD src, 25G, MTU1500: +130 MB/s (+84%)
SSD src, 25G, MTU9000: +132 MB/s (+84%)
1G -> 25G network
With HDD source, 25G buys you ~30%. Fine. With SSD+NVMe, 25G is a 3x multiplier. The bigger the pipe, the more the storage quality matters -- there's nothing left to hide behind.
HDD->HDD MTU1500: 82 -> 110 MB/s (1.3x)
HDD->HDD MTU9000: 86 -> 115 MB/s (1.3x)
HDD->NVMe MTU1500: 83 -> 123 MB/s (1.5x)
HDD->NVMe MTU9000: 86 -> 127 MB/s (1.5x)
SSD->HDD MTU1500: 98 -> 155 MB/s (1.6x)
SSD->HDD MTU9000: 103 -> 157 MB/s (1.5x)
SSD->NVMe MTU1500: 99 -> 286 MB/s (2.9x)
SSD->NVMe MTU9000: 103 -> 290 MB/s (2.8x)
HDD -> SSD source
On 1G: +20% for $350. The SSD's latency advantage (1.5 ms vs the HDD's 9 ms) is completely swamped by the network. On 25G+NVMe: 2.3x. The latency difference finally has room to matter. You bought the right tool -- you just needed the right pipe first.
->HDD 1G MTU1500: 82 -> 98 MB/s (1.2x)
->HDD 1G MTU9000: 86 -> 103 MB/s (1.2x)
->NVMe 1G MTU1500: 83 -> 99 MB/s (1.2x)
->NVMe 1G MTU9000: 86 -> 103 MB/s (1.2x)
->HDD 25G MTU1500: 110 -> 155 MB/s (1.4x)
->HDD 25G MTU9000: 115 -> 157 MB/s (1.4x)
->NVMe 25G MTU1500: 123 -> 286 MB/s (2.3x)
->NVMe 25G MTU9000: 127 -> 290 MB/s (2.3x)
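Reading the four effect tables together: the headline 3.5x is simply the best row over the baseline row of the full matrix, and the per-variable ratios above compound into it:

```shell
# End-to-end speedup: best case (SSD->NVMe 25G MTU9000, 289.7 MB/s)
# over baseline (HDD->HDD 1G MTU1500, 82.4 MB/s)
awk 'BEGIN { printf "%.1fx\n", 289.7 / 82.4 }'   # prints 3.5x
```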
Raw IOPS -- The Nitty Gritty

// Source reads -- ESXi esxtop SCSI counters

HDD / 1G MTU1500: 2,206 IOPS · 138 MB/s
HDD / 1G MTU9000: 2,311 IOPS · 145 MB/s
HDD / 25G->HDD MTU1500: 2,955 IOPS · 185 MB/s
HDD / 25G->HDD MTU9000: 3,044 IOPS · 190 MB/s
HDD / 25G->NVMe MTU1500: 3,238 IOPS · 202 MB/s
HDD / 25G->NVMe MTU9000: 3,349 IOPS · 209 MB/s
SSD / 1G MTU1500: 2,604 IOPS · 163 MB/s
SSD / 1G MTU9000: 2,751 IOPS · 172 MB/s
SSD / 25G->HDD MTU1500: 4,175 IOPS · 261 MB/s
SSD / 25G->HDD MTU9000: 4,184 IOPS · 262 MB/s
SSD / 25G->NVMe MTU1500: 7,164 IOPS · 448 MB/s
SSD / 25G->NVMe MTU9000: 7,693 IOPS · 481 MB/s

// Destination writes -- Linux iostat

HDD / 1G (HDD src): 1,211 IOPS · 89 MB/s · 2.0% util
HDD / 1G (SSD src): 1,375 IOPS · 101 MB/s · 2.2% util
HDD / 25G (HDD src): 414-443 IOPS · 120-128 MB/s · 5-6% util
HDD / 25G (SSD src): 247-249 IOPS · 160 MB/s · ~7% util
NVMe / 1G (HDD src): 1,434-1,468 IOPS · 90-92 MB/s · 0.0% util
NVMe / 1G (SSD src): 1,617-1,707 IOPS · 101-107 MB/s · 0.1% util
NVMe / 25G (HDD src): 2,157-2,237 IOPS · 138-143 MB/s · 0.1% util
NVMe / 25G (SSD src): 4,712-4,724 IOPS · 296 MB/s · 0.2% util
NVMe at 0.2% utilisation. 4,700+ IOPS. 296 MB/s written.
It could run this workload 500 times over simultaneously.
It is not the bottleneck. It will never be the bottleneck.
It is simply waiting, with the patience of solid state matter,
for someone to send it more data.
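The "500 times over" line is plain utilisation arithmetic, assuming the NVMe scales linearly from the measured 0.2%:

```shell
# At 0.2% utilisation, linear extrapolation gives roughly 100 / 0.2 headroom.
awk 'BEGIN { printf "%.0fx headroom\n", 100 / 0.2 }'   # prints 500x headroom
```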
What This Actually Proves
The Villain: NFS Sync Mode
Measured. Not estimated. Not theoretical.
HDD->HDD 1G: 6.5 MB/s. 13x slower than async.
HDD->HDD 25G: 6.5 MB/s. 17x slower. The NIC doesn't matter.
SSD->NVMe 25G: 12.3 MB/s. 24x slower.
One line. async. Then exportfs -ra.
MTU 9000: The Consistent Free Lunch
+3-4 MB/s absolute gain, every combination, no exceptions.
~4% on 1G. ~1.5% on 25G+NVMe.
The denominator changed. The lemon did not.
Set it on the ESXi vmkernel interface, the Linux interface, and any switch in the path.
Verify: ping -s 8972 -M do <target>
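The three hops, as illustrative commands -- the interface and vSwitch names (vSwitch0, vmk0, eth0) are placeholders for your own:

```shell
# ESXi: raise the vSwitch MTU first, then the vmkernel interface
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
esxcli network ip interface set --interface-name=vmk0 --mtu=9000

# Linux NFS destination
ip link set dev eth0 mtu 9000

# Verify end to end: 8972 = 9000 - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 <target>     # from the Linux side (do = don't fragment)
vmkping -d -s 8972 <target>     # from the ESXi side
```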
Destination Disk: Context Is Everything
On 1G: NVMe vs HDD = +0.4 MB/s. Noise.
The 1G network is the wall. Nothing behind it matters.
On 25G + SSD source: +130 MB/s. Completely different story.
Lesson: don't upgrade the destination in isolation.
Upgrade the full stack or don't bother.
Upgrades Multiply, Not Add
Each layer unlocks the next.
async -> network -> source -> destination.
All four together: 82 -> 290 MB/s. 3.5x. Proven.
Skip one: you leave the corresponding gain on the table.
This is not theory. These are 16 data points.