path: root/results/classifier/gemma3:12b/performance/719
author    Christian Krinitsin <mail@krinitsin.com>  2025-07-03 07:27:52 +0000
committer Christian Krinitsin <mail@krinitsin.com>  2025-07-03 07:27:52 +0000
commit    d0c85e36e4de67af628d54e9ab577cc3fad7796a (patch)
tree      f8f784b0f04343b90516a338d6df81df3a85dfa2 /results/classifier/gemma3:12b/performance/719
parent    7f4364274750eb8cb39a3e7493132fca1c01232e (diff)
download  qemu-analysis-d0c85e36e4de67af628d54e9ab577cc3fad7796a.tar.gz
          qemu-analysis-d0c85e36e4de67af628d54e9ab577cc3fad7796a.zip
add deepseek and gemma results
Diffstat (limited to 'results/classifier/gemma3:12b/performance/719')
-rw-r--r--  results/classifier/gemma3:12b/performance/719 | 20
1 file changed, 20 insertions, 0 deletions
diff --git a/results/classifier/gemma3:12b/performance/719 b/results/classifier/gemma3:12b/performance/719
new file mode 100644
index 000000000..8e525eeda
--- /dev/null
+++ b/results/classifier/gemma3:12b/performance/719
@@ -0,0 +1,20 @@
+
+Live migration performance with compression enabled is much worse than with compression disabled
+Description of problem:
+
+Steps to reproduce:
+1. Start the guests with the QEMU command line on the source and destination hosts, connected by a 1 Gbps network
+2. Run a memory workload in the guest, for example: ./memtester 1G 1
+3. Set migration parameters in the QEMU monitor. On both source and destination,
+   execute: # migrate_set_capability compress on
+   All other compression parameters are left at their defaults.
+4. Run the migrate command: # migrate -d tcp:10.156.208.154:4000
+5. The results: 
+   - without compression: total time: 197366 ms, throughput: 937.81 mbps, transferred RAM: 22593703 kbytes
+   - with compression:    total time: 281711 ms, throughput: 90.24 mbps,  transferred RAM: 3102898 kbytes
+
+When compression is enabled, the amount of transferred RAM is reduced a lot, but the throughput drops badly.
+The total time of live migration with compression is longer than without compression.
+I also tried with 100 Gbps network bandwidth; it shows the same problem.
+Additional information:
+
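The reported numbers above are internally consistent: a quick sketch (assuming "kbytes" means KiB and "mbps" means 10^6 bits per second, which are not stated in the report) recovers the stated throughput from transferred RAM and total time. This suggests "throughput" counts only bytes actually sent on the wire, so the lower figure with compression partly reflects the smaller compressed payload; the real regression is the longer total time (281711 ms vs 197366 ms).

```python
# Cross-check the reported migration throughput figures.
# Assumptions (not stated in the report): 1 kbyte = 1024 bytes,
# 1 mbps = 10^6 bits per second, and
# throughput = transferred RAM / total migration time.

def throughput_mbps(transferred_kbytes: int, total_ms: int) -> float:
    bits = transferred_kbytes * 1024 * 8
    seconds = total_ms / 1000
    return bits / seconds / 1e6

no_comp = throughput_mbps(22593703, 197366)    # ~937.8 mbps, matches the report
with_comp = throughput_mbps(3102898, 281711)   # ~90.2 mbps, matches the report
print(f"without compression: {no_comp:.2f} mbps")
print(f"with compression:    {with_comp:.2f} mbps")
```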