Diffstat (limited to 'results/classifier/118/graphic/1889')
-rw-r--r--   results/classifier/118/graphic/1889   77
1 file changed, 77 insertions, 0 deletions
diff --git a/results/classifier/118/graphic/1889 b/results/classifier/118/graphic/1889
new file mode 100644
index 000000000..f7ccd666c
--- /dev/null
+++ b/results/classifier/118/graphic/1889
@@ -0,0 +1,77 @@
+graphic: 0.925
+performance: 0.898
+boot: 0.838
+socket: 0.768
+device: 0.752
+PID: 0.705
+architecture: 0.647
+vnc: 0.595
+virtual: 0.570
+kernel: 0.556
+semantic: 0.537
+x86: 0.534
+network: 0.531
+peripherals: 0.523
+hypervisor: 0.516
+ppc: 0.513
+risc-v: 0.452
+i386: 0.445
+VMM: 0.434
+arm: 0.396
+permissions: 0.379
+user-level: 0.352
+register: 0.319
+files: 0.317
+debug: 0.311
+assembly: 0.300
+TCG: 0.282
+mistranslation: 0.249
+KVM: 0.204
+
+IO delays on live migration LV initialization
+Description of problem:
+Hi,
+
+When I live migrate a VM via Proxmox and the destination is an LVM thin pool, I see that the disk is first fully initialized before the actual copy starts.
+
+This causes the thin volume to become 100% allocated right away, which then needs to be discarded again afterwards. Not ideal, but ....
+
+The more annoying thing is that this initialization step uses 100% of the disk IO; in iotop I see it writing over 1000 MB/s. The nasty side effect is that other VMs on that host are negatively affected. The host is not completely locked up, I can still ssh in and look around, but storage-intensive workloads see noticeably more delay, with e.g. HTTP requests timing out, and even a simple ls command, which is normally instant, can take 10+ seconds.
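+
+For reference, this is roughly how I watch it on the destination host (the flags are just what I use, adjust as needed):
+
+```
+# accumulated IO per process, only showing processes that are actually doing IO;
+# the kvm process of the incoming VM shows sustained writes far above what the
+# migration network link could possibly deliver
+iotop -oPa
+```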
+
+
+I've previously reported this on the [Proxmox forum](https://forum.proxmox.com/threads/io-delays-on-live-migration-lv-initialization.132296/#post-582050), but the conclusion there was that this is QEMU behavior.
+
+> The zeroing happens, because of what QEMU does when the discard option is enabled:
+
+
+When I disable discard for the VM disk, the target is indeed not pre-initialized during migration, but giving up discard defeats the purpose of having an LVM thin pool.
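+
+A quick way to see the difference is to watch the allocation of the target volume while the migration runs (the VG/LV names below are placeholders for my setup):
+
+```
+# with discard=on, Data% jumps to ~100 almost immediately (the pre-zeroing);
+# with discard=off it only grows as actual disk content arrives
+watch -n1 'lvs -o lv_name,data_percent vg0/vm-301-disk-0'
+```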
+
+For the (disk) migration itself I can set a bandwidth limit ... could we do something similar for the initialization step?
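+
+For the copy itself that limit ends up as a speed cap on the QEMU block job, which can also be adjusted by hand; a rough sketch (the VMID, device name and speed are just examples from my setup):
+
+```
+# attach to the VM monitor on the source host (the mirror job runs there)
+qm monitor 301
+# list the running block jobs, then throttle the mirror for scsi0
+# (speed is in bytes per second, 104857600 = 100 MiB/s)
+qm> info block-jobs
+qm> block_job_set_speed drive-scsi0 104857600
+```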
+
+
+Even better would be to not initialize at all when the target is LVM thin. As far as I understand it, blocks newly allocated by lvm-thin always read back as zeros anyway.
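+
+That assumption is easy to verify on the pool itself (all names below are made up for the example):
+
+```
+# create a fresh thin volume; reads from never-written blocks return zeros
+lvcreate -V 1G --thinpool vg0/thinpool -n zerotest
+dd if=/dev/vg0/zerotest bs=1M count=16 status=none | hexdump -C
+# output is a single run of 00 bytes, and Data% stays at 0.00:
+lvs -o lv_name,data_percent vg0/zerotest
+lvremove -y vg0/zerotest
+```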
+Steps to reproduce:
+1. Migrate a VM with a large disk to a host whose target storage is an LVM thin pool.
+2. Watch iotop on the destination host: just before the disk content is transferred you see far more write IO than the network could possibly deliver.
+3. In another VM on the destination host, reading from disk is significantly slower than normal.
+Additional information:
+An example VM config:
+```
+agent: 1,fstrim_cloned_disks=1
+balloon: 512
+bootdisk: scsi0
+cores: 6
+ide2: none,media=cdrom
+memory: 8196
+name: ...
+net0: virtio=...,bridge=...
+numa: 0
+onboot: 1
+ostype: l26
+scsi0: thin_pool_hwraid:vm-301-disk-0,discard=on,format=raw,size=16192M
+scsi1: thin_pool_hwraid:vm-301-disk-1,discard=on,format=raw,size=26G
+scsihw: virtio-scsi-pci
+serial0: socket
+smbios1: uuid=...
+sockets: 1
+```