author     Christian Krinitsin <mail@krinitsin.com>  2025-07-03 19:39:53 +0200
committer  Christian Krinitsin <mail@krinitsin.com>  2025-07-03 19:39:53 +0200
commit     dee4dcba78baf712cab403d47d9db319ab7f95d6 (patch)
tree       418478faf06786701a56268672f73d6b0b4eb239 /results/classifier/013/user-level
parent     4d9e26c0333abd39bdbd039dcdb30ed429c475ba (diff)
download   qemu-analysis-dee4dcba78baf712cab403d47d9db319ab7f95d6.tar.gz, qemu-analysis-dee4dcba78baf712cab403d47d9db319ab7f95d6.zip
restructure results
Diffstat (limited to 'results/classifier/013/user-level')
-rw-r--r--  results/classifier/013/user-level/23270873  720
-rw-r--r--  results/classifier/013/user-level/28596630  141
-rw-r--r--  results/classifier/013/user-level/56309929  208
-rw-r--r--  results/classifier/013/user-level/71456293  1514
-rw-r--r--  results/classifier/013/user-level/74466963  1906
-rw-r--r--  results/classifier/013/user-level/80615920  376
6 files changed, 0 insertions, 4865 deletions
diff --git a/results/classifier/013/user-level/23270873 b/results/classifier/013/user-level/23270873 deleted file mode 100644 index 26df90e10..000000000 --- a/results/classifier/013/user-level/23270873 +++ /dev/null @@ -1,720 +0,0 @@ -user-level: 0.896 -mistranslation: 0.881 -risc-v: 0.859 -operating system: 0.844 -boot: 0.830 -TCG: 0.828 -ppc: 0.827 -vnc: 0.820 -peripherals: 0.820 -device: 0.810 -hypervisor: 0.806 -KVM: 0.803 -virtual: 0.802 -permissions: 0.802 -register: 0.797 -VMM: 0.792 -debug: 0.788 -assembly: 0.768 -network: 0.768 -graphic: 0.764 -arm: 0.761 -socket: 0.758 -semantic: 0.752 -system: 0.748 -performance: 0.744 -architecture: 0.742 -kernel: 0.735 -PID: 0.731 -x86: 0.730 -files: 0.730 -alpha: 0.712 -i386: 0.705 - -[Qemu-devel] [BUG?] aio_get_linux_aio: Assertion `ctx->linux_aio' failed - -Hi, - -I am seeing some strange QEMU assertion failures for qemu on s390x, -which prevents a guest from starting. - -Git bisecting points to the following commit as the source of the error. - -commit ed6e2161715c527330f936d44af4c547f25f687e -Author: Nishanth Aravamudan <address@hidden> -Date: Fri Jun 22 12:37:00 2018 -0700 - - linux-aio: properly bubble up errors from initialization - - laio_init() can fail for a couple of reasons, which will lead to a NULL - pointer dereference in laio_attach_aio_context(). - - To solve this, add a aio_setup_linux_aio() function which is called - early in raw_open_common. If this fails, propagate the error up. The - signature of aio_get_linux_aio() was not modified, because it seems - preferable to return the actual errno from the possible failing - initialization calls. - - Additionally, when the AioContext changes, we need to associate a - LinuxAioState with the new AioContext. Use the bdrv_attach_aio_context - callback and call the new aio_setup_linux_aio(), which will allocate a -new AioContext if needed, and return errors on failures. 
If it -fails for -any reason, fallback to threaded AIO with an error message, as the - device is already in-use by the guest. - - Add an assert that aio_get_linux_aio() cannot return NULL. - - Signed-off-by: Nishanth Aravamudan <address@hidden> - Message-id: address@hidden - Signed-off-by: Stefan Hajnoczi <address@hidden> -Not sure what is causing this assertion to fail. Here is the qemu -command line of the guest, from qemu log, which throws this error: -LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin -QEMU_AUDIO_DRV=none /usr/local/bin/qemu-system-s390x -name -guest=rt_vm1,debug-threads=on -S -object -secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-21-rt_vm1/master-key.aes --machine s390-ccw-virtio-2.12,accel=kvm,usb=off,dump-guest-core=off -m -1024 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -object -iothread,id=iothread1 -uuid 0cde16cd-091d-41bd-9ac2-5243df5c9a0d --display none -no-user-config -nodefaults -chardev -socket,id=charmonitor,fd=28,server,nowait -mon -chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown --boot strict=on -drive -file=/dev/mapper/360050763998b0883980000002a000031,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native --device -virtio-blk-ccw,iothread=iothread1,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on --netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device -virtio-net-ccw,netdev=hostnet0,id=net0,mac=02:3a:c8:67:95:84,devno=fe.0.0000 --netdev tap,fd=32,id=hostnet1,vhost=on,vhostfd=33 -device -virtio-net-ccw,netdev=hostnet1,id=net1,mac=52:54:00:2a:e5:08,devno=fe.0.0002 --chardev pty,id=charconsole0 -device -sclpconsole,chardev=charconsole0,id=console0 -device -virtio-balloon-ccw,id=balloon0,devno=fe.3.ffba -sandbox -on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny --msg timestamp=on -2018-07-17 15:48:42.252+0000: Domain id=21 is tainted: high-privileges -2018-07-17T15:48:42.279380Z 
qemu-system-s390x: -chardev -pty,id=charconsole0: char device redirected to /dev/pts/3 (label -charconsole0) -qemu-system-s390x: util/async.c:339: aio_get_linux_aio: Assertion -`ctx->linux_aio' failed. -2018-07-17 15:48:43.309+0000: shutting down, reason=failed - - -Any help debugging this would be greatly appreciated. - -Thank you -Farhan - -On 17.07.2018 [13:25:53 -0400], Farhan Ali wrote: -> -Hi, -> -> -I am seeing some strange QEMU assertion failures for qemu on s390x, -> -which prevents a guest from starting. -> -> -Git bisecting points to the following commit as the source of the error. -> -> -commit ed6e2161715c527330f936d44af4c547f25f687e -> -Author: Nishanth Aravamudan <address@hidden> -> -Date: Fri Jun 22 12:37:00 2018 -0700 -> -> -linux-aio: properly bubble up errors from initialization -> -> -laio_init() can fail for a couple of reasons, which will lead to a NULL -> -pointer dereference in laio_attach_aio_context(). -> -> -To solve this, add a aio_setup_linux_aio() function which is called -> -early in raw_open_common. If this fails, propagate the error up. The -> -signature of aio_get_linux_aio() was not modified, because it seems -> -preferable to return the actual errno from the possible failing -> -initialization calls. -> -> -Additionally, when the AioContext changes, we need to associate a -> -LinuxAioState with the new AioContext. Use the bdrv_attach_aio_context -> -callback and call the new aio_setup_linux_aio(), which will allocate a -> -new AioContext if needed, and return errors on failures. If it fails for -> -any reason, fallback to threaded AIO with an error message, as the -> -device is already in-use by the guest. -> -> -Add an assert that aio_get_linux_aio() cannot return NULL. -> -> -Signed-off-by: Nishanth Aravamudan <address@hidden> -> -Message-id: address@hidden -> -Signed-off-by: Stefan Hajnoczi <address@hidden> -> -> -> -Not sure what is causing this assertion to fail. 
Here is the qemu command -> -line of the guest, from qemu log, which throws this error: -> -> -> -LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin -> -QEMU_AUDIO_DRV=none /usr/local/bin/qemu-system-s390x -name -> -guest=rt_vm1,debug-threads=on -S -object -> -secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-21-rt_vm1/master-key.aes -> --machine s390-ccw-virtio-2.12,accel=kvm,usb=off,dump-guest-core=off -m 1024 -> --realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -object -> -iothread,id=iothread1 -uuid 0cde16cd-091d-41bd-9ac2-5243df5c9a0d -display -> -none -no-user-config -nodefaults -chardev -> -socket,id=charmonitor,fd=28,server,nowait -mon -> -chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot -> -strict=on -drive -> -file=/dev/mapper/360050763998b0883980000002a000031,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native -> --device -> -virtio-blk-ccw,iothread=iothread1,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on -> --netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device -> -virtio-net-ccw,netdev=hostnet0,id=net0,mac=02:3a:c8:67:95:84,devno=fe.0.0000 -> --netdev tap,fd=32,id=hostnet1,vhost=on,vhostfd=33 -device -> -virtio-net-ccw,netdev=hostnet1,id=net1,mac=52:54:00:2a:e5:08,devno=fe.0.0002 -> --chardev pty,id=charconsole0 -device -> -sclpconsole,chardev=charconsole0,id=console0 -device -> -virtio-balloon-ccw,id=balloon0,devno=fe.3.ffba -sandbox -> -on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg -> -timestamp=on -> -> -> -> -2018-07-17 15:48:42.252+0000: Domain id=21 is tainted: high-privileges -> -2018-07-17T15:48:42.279380Z qemu-system-s390x: -chardev pty,id=charconsole0: -> -char device redirected to /dev/pts/3 (label charconsole0) -> -qemu-system-s390x: util/async.c:339: aio_get_linux_aio: Assertion -> -`ctx->linux_aio' failed. 
-> -2018-07-17 15:48:43.309+0000: shutting down, reason=failed -> -> -> -Any help debugging this would be greatly appreciated. -iiuc, this possibly implies AIO was not actually used previously on this -guest (it might have silently been falling back to threaded IO?). I -don't have access to s390x, but would it be possible to run qemu under -gdb and see if aio_setup_linux_aio is being called at all (I think it -might not be, but I'm not sure why), and if so, if it's for the context -in question? - -If it's not being called first, could you see what callpath is calling -aio_get_linux_aio when this assertion trips? - -Thanks! --Nish - -On 07/17/2018 04:52 PM, Nishanth Aravamudan wrote: -iiuc, this possibly implies AIO was not actually used previously on this -guest (it might have silently been falling back to threaded IO?). I -don't have access to s390x, but would it be possible to run qemu under -gdb and see if aio_setup_linux_aio is being called at all (I think it -might not be, but I'm not sure why), and if so, if it's for the context -in question? - -If it's not being called first, could you see what callpath is calling -aio_get_linux_aio when this assertion trips? - -Thanks! 
--Nish -Hi Nishant, -From the coredump of the guest this is the call trace that calls -aio_get_linux_aio: -Stack trace of thread 145158: -#0 0x000003ff94dbe274 raise (libc.so.6) -#1 0x000003ff94da39a8 abort (libc.so.6) -#2 0x000003ff94db62ce __assert_fail_base (libc.so.6) -#3 0x000003ff94db634c __assert_fail (libc.so.6) -#4 0x000002aa20db067a aio_get_linux_aio (qemu-system-s390x) -#5 0x000002aa20d229a8 raw_aio_plug (qemu-system-s390x) -#6 0x000002aa20d309ee bdrv_io_plug (qemu-system-s390x) -#7 0x000002aa20b5a8ea virtio_blk_handle_vq (qemu-system-s390x) -#8 0x000002aa20db2f6e aio_dispatch_handlers (qemu-system-s390x) -#9 0x000002aa20db3c34 aio_poll (qemu-system-s390x) -#10 0x000002aa20be32a2 iothread_run (qemu-system-s390x) -#11 0x000003ff94f879a8 start_thread (libpthread.so.0) -#12 0x000003ff94e797ee thread_start (libc.so.6) - - -Thanks for taking a look and responding. - -Thanks -Farhan - -On 07/18/2018 09:42 AM, Farhan Ali wrote: -On 07/17/2018 04:52 PM, Nishanth Aravamudan wrote: -iiuc, this possibly implies AIO was not actually used previously on this -guest (it might have silently been falling back to threaded IO?). I -don't have access to s390x, but would it be possible to run qemu under -gdb and see if aio_setup_linux_aio is being called at all (I think it -might not be, but I'm not sure why), and if so, if it's for the context -in question? - -If it's not being called first, could you see what callpath is calling -aio_get_linux_aio when this assertion trips? - -Thanks! 
--Nish -Hi Nishant, -From the coredump of the guest this is the call trace that calls -aio_get_linux_aio: -Stack trace of thread 145158: -#0 0x000003ff94dbe274 raise (libc.so.6) -#1 0x000003ff94da39a8 abort (libc.so.6) -#2 0x000003ff94db62ce __assert_fail_base (libc.so.6) -#3 0x000003ff94db634c __assert_fail (libc.so.6) -#4 0x000002aa20db067a aio_get_linux_aio (qemu-system-s390x) -#5 0x000002aa20d229a8 raw_aio_plug (qemu-system-s390x) -#6 0x000002aa20d309ee bdrv_io_plug (qemu-system-s390x) -#7 0x000002aa20b5a8ea virtio_blk_handle_vq (qemu-system-s390x) -#8 0x000002aa20db2f6e aio_dispatch_handlers (qemu-system-s390x) -#9 0x000002aa20db3c34 aio_poll (qemu-system-s390x) -#10 0x000002aa20be32a2 iothread_run (qemu-system-s390x) -#11 0x000003ff94f879a8 start_thread (libpthread.so.0) -#12 0x000003ff94e797ee thread_start (libc.so.6) - - -Thanks for taking a look and responding. - -Thanks -Farhan -Trying to debug a little further, the block device in this case is a -"host device". And looking at your commit carefully you use the -bdrv_attach_aio_context callback to setup a Linux AioContext. -For some reason the "host device" struct (BlockDriver bdrv_host_device -in block/file-posix.c) does not have a bdrv_attach_aio_context defined. -So a simple change of adding the callback to the struct solves the issue -and the guest starts fine. -diff --git a/block/file-posix.c b/block/file-posix.c -index 28824aa..b8d59fb 100644 ---- a/block/file-posix.c -+++ b/block/file-posix.c -@@ -3135,6 +3135,7 @@ static BlockDriver bdrv_host_device = { - .bdrv_refresh_limits = raw_refresh_limits, - .bdrv_io_plug = raw_aio_plug, - .bdrv_io_unplug = raw_aio_unplug, -+ .bdrv_attach_aio_context = raw_aio_attach_aio_context, - - .bdrv_co_truncate = raw_co_truncate, - .bdrv_getlength = raw_getlength, -I am not too familiar with block device code in QEMU, so not sure if -this is the right fix or if there are some underlying problems. 
-Thanks -Farhan - -On 18.07.2018 [11:10:27 -0400], Farhan Ali wrote: -> -> -> -On 07/18/2018 09:42 AM, Farhan Ali wrote: -> -> -> -> -> -> On 07/17/2018 04:52 PM, Nishanth Aravamudan wrote: -> -> > iiuc, this possibly implies AIO was not actually used previously on this -> -> > guest (it might have silently been falling back to threaded IO?). I -> -> > don't have access to s390x, but would it be possible to run qemu under -> -> > gdb and see if aio_setup_linux_aio is being called at all (I think it -> -> > might not be, but I'm not sure why), and if so, if it's for the context -> -> > in question? -> -> > -> -> > If it's not being called first, could you see what callpath is calling -> -> > aio_get_linux_aio when this assertion trips? -> -> > -> -> > Thanks! -> -> > -Nish -> -> -> -> -> -> Hi Nishant, -> -> -> -> From the coredump of the guest this is the call trace that calls -> -> aio_get_linux_aio: -> -> -> -> -> -> Stack trace of thread 145158: -> -> #0 0x000003ff94dbe274 raise (libc.so.6) -> -> #1 0x000003ff94da39a8 abort (libc.so.6) -> -> #2 0x000003ff94db62ce __assert_fail_base (libc.so.6) -> -> #3 0x000003ff94db634c __assert_fail (libc.so.6) -> -> #4 0x000002aa20db067a aio_get_linux_aio (qemu-system-s390x) -> -> #5 0x000002aa20d229a8 raw_aio_plug (qemu-system-s390x) -> -> #6 0x000002aa20d309ee bdrv_io_plug (qemu-system-s390x) -> -> #7 0x000002aa20b5a8ea virtio_blk_handle_vq (qemu-system-s390x) -> -> #8 0x000002aa20db2f6e aio_dispatch_handlers (qemu-system-s390x) -> -> #9 0x000002aa20db3c34 aio_poll (qemu-system-s390x) -> -> #10 0x000002aa20be32a2 iothread_run (qemu-system-s390x) -> -> #11 0x000003ff94f879a8 start_thread (libpthread.so.0) -> -> #12 0x000003ff94e797ee thread_start (libc.so.6) -> -> -> -> -> -> Thanks for taking a look and responding. -> -> -> -> Thanks -> -> Farhan -> -> -> -> -> -> -> -> -Trying to debug a little further, the block device in this case is a "host -> -device". 
And looking at your commit carefully you use the -> -bdrv_attach_aio_context callback to setup a Linux AioContext. -> -> -For some reason the "host device" struct (BlockDriver bdrv_host_device in -> -block/file-posix.c) does not have a bdrv_attach_aio_context defined. -> -So a simple change of adding the callback to the struct solves the issue and -> -the guest starts fine. -> -> -> -diff --git a/block/file-posix.c b/block/file-posix.c -> -index 28824aa..b8d59fb 100644 -> ---- a/block/file-posix.c -> -+++ b/block/file-posix.c -> -@@ -3135,6 +3135,7 @@ static BlockDriver bdrv_host_device = { -> -.bdrv_refresh_limits = raw_refresh_limits, -> -.bdrv_io_plug = raw_aio_plug, -> -.bdrv_io_unplug = raw_aio_unplug, -> -+ .bdrv_attach_aio_context = raw_aio_attach_aio_context, -> -> -.bdrv_co_truncate = raw_co_truncate, -> -.bdrv_getlength = raw_getlength, -> -> -> -> -I am not too familiar with block device code in QEMU, so not sure if -> -this is the right fix or if there are some underlying problems. -Oh this is quite embarrassing! I only added the bdrv_attach_aio_context -callback for the file-backed device. Your fix is definitely correct for -host device. Let me make sure there weren't any others missed and I will -send out a properly formatted patch. Thank you for the quick testing and -turnaround! - --Nish - -On 07/18/2018 08:52 PM, Nishanth Aravamudan wrote: -> -On 18.07.2018 [11:10:27 -0400], Farhan Ali wrote: -> -> -> -> -> -> On 07/18/2018 09:42 AM, Farhan Ali wrote: -> ->> -> ->> -> ->> On 07/17/2018 04:52 PM, Nishanth Aravamudan wrote: -> ->>> iiuc, this possibly implies AIO was not actually used previously on this -> ->>> guest (it might have silently been falling back to threaded IO?). I -> ->>> don't have access to s390x, but would it be possible to run qemu under -> ->>> gdb and see if aio_setup_linux_aio is being called at all (I think it -> ->>> might not be, but I'm not sure why), and if so, if it's for the context -> ->>> in question?
-> ->>> -> ->>> If it's not being called first, could you see what callpath is calling -> ->>> aio_get_linux_aio when this assertion trips? -> ->>> -> ->>> Thanks! -> ->>> -Nish -> ->> -> ->> -> ->> Hi Nishant, -> ->> -> ->> From the coredump of the guest this is the call trace that calls -> ->> aio_get_linux_aio: -> ->> -> ->> -> ->> Stack trace of thread 145158: -> ->> #0 0x000003ff94dbe274 raise (libc.so.6) -> ->> #1 0x000003ff94da39a8 abort (libc.so.6) -> ->> #2 0x000003ff94db62ce __assert_fail_base (libc.so.6) -> ->> #3 0x000003ff94db634c __assert_fail (libc.so.6) -> ->> #4 0x000002aa20db067a aio_get_linux_aio (qemu-system-s390x) -> ->> #5 0x000002aa20d229a8 raw_aio_plug (qemu-system-s390x) -> ->> #6 0x000002aa20d309ee bdrv_io_plug (qemu-system-s390x) -> ->> #7 0x000002aa20b5a8ea virtio_blk_handle_vq (qemu-system-s390x) -> ->> #8 0x000002aa20db2f6e aio_dispatch_handlers (qemu-system-s390x) -> ->> #9 0x000002aa20db3c34 aio_poll (qemu-system-s390x) -> ->> #10 0x000002aa20be32a2 iothread_run (qemu-system-s390x) -> ->> #11 0x000003ff94f879a8 start_thread (libpthread.so.0) -> ->> #12 0x000003ff94e797ee thread_start (libc.so.6) -> ->> -> ->> -> ->> Thanks for taking a look and responding. -> ->> -> ->> Thanks -> ->> Farhan -> ->> -> ->> -> ->> -> -> -> -> Trying to debug a little further, the block device in this case is a "host -> -> device". And looking at your commit carefully you use the -> -> bdrv_attach_aio_context callback to setup a Linux AioContext. -> -> -> -> For some reason the "host device" struct (BlockDriver bdrv_host_device in -> -> block/file-posix.c) does not have a bdrv_attach_aio_context defined. -> -> So a simple change of adding the callback to the struct solves the issue and -> -> the guest starts fine. 
-> -> -> -> -> -> diff --git a/block/file-posix.c b/block/file-posix.c -> -> index 28824aa..b8d59fb 100644 -> -> --- a/block/file-posix.c -> -> +++ b/block/file-posix.c -> -> @@ -3135,6 +3135,7 @@ static BlockDriver bdrv_host_device = { -> -> .bdrv_refresh_limits = raw_refresh_limits, -> -> .bdrv_io_plug = raw_aio_plug, -> -> .bdrv_io_unplug = raw_aio_unplug, -> -> + .bdrv_attach_aio_context = raw_aio_attach_aio_context, -> -> -> -> .bdrv_co_truncate = raw_co_truncate, -> -> .bdrv_getlength = raw_getlength, -> -> -> -> -> -> -> -> I am not too familiar with block device code in QEMU, so not sure if -> -> this is the right fix or if there are some underlying problems. -> -> -Oh this is quite embarassing! I only added the bdrv_attach_aio_context -> -callback for the file-backed device. Your fix is definitely corect for -> -host device. Let me make sure there weren't any others missed and I will -> -send out a properly formatted patch. Thank you for the quick testing and -> -turnaround! -Farhan, can you respin your patch with proper sign-off and patch description? -Adding qemu-block. - -Hi Christian, - -On 19.07.2018 [08:55:20 +0200], Christian Borntraeger wrote: -> -> -> -On 07/18/2018 08:52 PM, Nishanth Aravamudan wrote: -> -> On 18.07.2018 [11:10:27 -0400], Farhan Ali wrote: -> ->> -> ->> -> ->> On 07/18/2018 09:42 AM, Farhan Ali wrote: -<snip> - -> ->> I am not too familiar with block device code in QEMU, so not sure if -> ->> this is the right fix or if there are some underlying problems. -> -> -> -> Oh this is quite embarassing! I only added the bdrv_attach_aio_context -> -> callback for the file-backed device. Your fix is definitely corect for -> -> host device. Let me make sure there weren't any others missed and I will -> -> send out a properly formatted patch. Thank you for the quick testing and -> -> turnaround! -> -> -Farhan, can you respin your patch with proper sign-off and patch description? -> -Adding qemu-block. 
-I sent it yesterday, sorry I didn't cc everyone from this e-mail: -http://lists.nongnu.org/archive/html/qemu-block/2018-07/msg00516.html -Thanks, -Nish - diff --git a/results/classifier/013/user-level/28596630 b/results/classifier/013/user-level/28596630 deleted file mode 100644 index 28a1f444d..000000000 --- a/results/classifier/013/user-level/28596630 +++ /dev/null @@ -1,141 +0,0 @@ -user-level: 0.856 -operating system: 0.853 -register: 0.839 -device: 0.835 -architecture: 0.818 -semantic: 0.814 -mistranslation: 0.813 -peripherals: 0.802 -ppc: 0.799 -performance: 0.797 -permissions: 0.791 -graphic: 0.785 -network: 0.780 -hypervisor: 0.775 -kernel: 0.770 -arm: 0.756 -PID: 0.750 -virtual: 0.742 -assembly: 0.725 -system: 0.716 -debug: 0.704 -risc-v: 0.702 -socket: 0.697 -vnc: 0.674 -TCG: 0.668 -x86: 0.654 -VMM: 0.650 -KVM: 0.649 -files: 0.630 -alpha: 0.624 -i386: 0.611 -boot: 0.609 - -[Qemu-devel] [BUG] [low severity] a strange appearance of message involving slirp while doing "empty" make - -Folks, - -If qemu tree is already fully built, and "make" is attempted, for 3.1, the -outcome is: - -$ make - CHK version_gen.h -$ - -For 4.0-rc0, the outcome seems to be different: - -$ make -make[1]: Entering directory '/home/build/malta-mips64r6/qemu-4.0/slirp' -make[1]: Nothing to be done for 'all'. -make[1]: Leaving directory '/home/build/malta-mips64r6/qemu-4.0/slirp' - CHK version_gen.h -$ - -Not sure how significant is that, but I report it just in case. - -Yours, -Aleksandar - -On 20/03/2019 22.08, Aleksandar Markovic wrote: -> -Folks, -> -> -If qemu tree is already fully built, and "make" is attempted, for 3.1, the -> -outcome is: -> -> -$ make -> -CHK version_gen.h -> -$ -> -> -For 4.0-rc0, the outcome seems to be different: -> -> -$ make -> -make[1]: Entering directory '/home/build/malta-mips64r6/qemu-4.0/slirp' -> -make[1]: Nothing to be done for 'all'. 
-> -make[1]: Leaving directory '/home/build/malta-mips64r6/qemu-4.0/slirp' -> -CHK version_gen.h -> -$ -> -> -Not sure how significant is that, but I report it just in case. -It's likely because slirp is currently being reworked to become a -separate project, so the makefiles have been changed a little bit. I -guess the message will go away again once slirp has become a stand-alone -library. - - Thomas - -On Fri, 22 Mar 2019 at 04:59, Thomas Huth <address@hidden> wrote: -> -On 20/03/2019 22.08, Aleksandar Markovic wrote: -> -> $ make -> -> make[1]: Entering directory '/home/build/malta-mips64r6/qemu-4.0/slirp' -> -> make[1]: Nothing to be done for 'all'. -> -> make[1]: Leaving directory '/home/build/malta-mips64r6/qemu-4.0/slirp' -> -> CHK version_gen.h -> -> $ -> -> -> -> Not sure how significant is that, but I report it just in case. -> -> -It's likely because slirp is currently being reworked to become a -> -separate project, so the makefiles have been changed a little bit. I -> -guess the message will go away again once slirp has become a stand-alone -> -library. -Well, we'll still need to ship slirp for the foreseeable future... - -I think the cause of this is that the rule in Makefile for -calling the slirp Makefile is not passing it $(SUBDIR_MAKEFLAGS) -like all the other recursive make invocations. If we do that -then we'll suppress the entering/leaving messages for -non-verbose builds. (Some tweaking will be needed as -it looks like the slirp makefile has picked an incompatible -meaning for $BUILD_DIR, which the SUBDIR_MAKEFLAGS will -also be passing to it.) 
- -thanks --- PMM - diff --git a/results/classifier/013/user-level/56309929 b/results/classifier/013/user-level/56309929 deleted file mode 100644 index a5fe92165..000000000 --- a/results/classifier/013/user-level/56309929 +++ /dev/null @@ -1,208 +0,0 @@ -user-level: 0.684 -register: 0.658 -device: 0.646 -VMM: 0.643 -TCG: 0.637 -KVM: 0.636 -virtual: 0.623 -assembly: 0.618 -performance: 0.608 -ppc: 0.606 -vnc: 0.600 -network: 0.589 -permissions: 0.587 -debug: 0.585 -arm: 0.580 -architecture: 0.579 -boot: 0.578 -risc-v: 0.574 -graphic: 0.570 -PID: 0.561 -mistranslation: 0.554 -system: 0.545 -operating system: 0.543 -hypervisor: 0.521 -semantic: 0.521 -socket: 0.516 -kernel: 0.487 -alpha: 0.480 -x86: 0.465 -peripherals: 0.387 -files: 0.311 -i386: 0.269 - -[Qemu-devel] [BUG 2.6] Broken CONFIG_TPM? - -A compilation test with clang -Weverything reported this problem: - -config-host.h:112:20: warning: '$' in identifier -[-Wdollar-in-identifier-extension] - -The line of code looks like this: - -#define CONFIG_TPM $(CONFIG_SOFTMMU) - -This is fine for Makefile code, but won't work as expected in C code. - -Am 28.04.2016 um 22:33 schrieb Stefan Weil: -> -A compilation test with clang -Weverything reported this problem: -> -> -config-host.h:112:20: warning: '$' in identifier -> -[-Wdollar-in-identifier-extension] -> -> -The line of code looks like this: -> -> -#define CONFIG_TPM $(CONFIG_SOFTMMU) -> -> -This is fine for Makefile code, but won't work as expected in C code. -> -A complete 64 bit build with clang -Weverything creates a log file of -1.7 GB. 
-Here are the uniq warnings sorted by their frequency: - - 1 -Wflexible-array-extensions - 1 -Wgnu-folding-constant - 1 -Wunknown-pragmas - 1 -Wunknown-warning-option - 1 -Wunreachable-code-loop-increment - 2 -Warray-bounds-pointer-arithmetic - 2 -Wdollar-in-identifier-extension - 3 -Woverlength-strings - 3 -Wweak-vtables - 4 -Wgnu-empty-struct - 4 -Wstring-conversion - 6 -Wclass-varargs - 7 -Wc99-extensions - 7 -Wc++-compat - 8 -Wfloat-equal - 11 -Wformat-nonliteral - 16 -Wshift-negative-value - 19 -Wglobal-constructors - 28 -Wc++11-long-long - 29 -Wembedded-directive - 38 -Wvla - 40 -Wcovered-switch-default - 40 -Wmissing-variable-declarations - 49 -Wold-style-cast - 53 -Wgnu-conditional-omitted-operand - 56 -Wformat-pedantic - 61 -Wvariadic-macros - 77 -Wc++11-extensions - 83 -Wgnu-flexible-array-initializer - 83 -Wzero-length-array - 96 -Wgnu-designator - 102 -Wmissing-noreturn - 103 -Wconditional-uninitialized - 107 -Wdisabled-macro-expansion - 115 -Wunreachable-code-return - 134 -Wunreachable-code - 243 -Wunreachable-code-break - 257 -Wfloat-conversion - 280 -Wswitch-enum - 291 -Wpointer-arith - 298 -Wshadow - 378 -Wassign-enum - 395 -Wused-but-marked-unused - 420 -Wreserved-id-macro - 493 -Wdocumentation - 510 -Wshift-sign-overflow - 565 -Wgnu-case-range - 566 -Wgnu-zero-variadic-macro-arguments - 650 -Wbad-function-cast - 705 -Wmissing-field-initializers - 817 -Wgnu-statement-expression - 968 -Wdocumentation-unknown-command - 1021 -Wextra-semi - 1112 -Wgnu-empty-initializer - 1138 -Wcast-qual - 1509 -Wcast-align - 1766 -Wextended-offsetof - 1937 -Wsign-compare - 2130 -Wpacked - 2404 -Wunused-macros - 3081 -Wpadded - 4182 -Wconversion - 5430 -Wlanguage-extension-token - 6655 -Wshorten-64-to-32 - 6995 -Wpedantic - 7354 -Wunused-parameter - 27659 -Wsign-conversion - -Stefan Weil <address@hidden> writes: - -> -A compilation test with clang -Weverything reported this problem: -> -> -config-host.h:112:20: warning: '$' in identifier -> 
-[-Wdollar-in-identifier-extension] -> -> -The line of code looks like this: -> -> -#define CONFIG_TPM $(CONFIG_SOFTMMU) -> -> -This is fine for Makefile code, but won't work as expected in C code. -Broken in commit 3b8acc1 "configure: fix TPM logic". Cc'ing Paolo. - -Impact: #ifdef CONFIG_TPM never disables code. There are no other uses -of CONFIG_TPM in C code. - -I had a quick peek at configure and create_config, but refrained from -attempting to fix this, since I don't understand when exactly CONFIG_TPM -should be defined. - -On 29 April 2016 at 08:42, Markus Armbruster <address@hidden> wrote: -> -Stefan Weil <address@hidden> writes: -> -> -> A compilation test with clang -Weverything reported this problem: -> -> -> -> config-host.h:112:20: warning: '$' in identifier -> -> [-Wdollar-in-identifier-extension] -> -> -> -> The line of code looks like this: -> -> -> -> #define CONFIG_TPM $(CONFIG_SOFTMMU) -> -> -> -> This is fine for Makefile code, but won't work as expected in C code. -> -> -Broken in commit 3b8acc1 "configure: fix TPM logic". Cc'ing Paolo. -> -> -Impact: #ifdef CONFIG_TPM never disables code. There are no other uses -> -of CONFIG_TPM in C code. -> -> -I had a quick peek at configure and create_config, but refrained from -> -attempting to fix this, since I don't understand when exactly CONFIG_TPM -> -should be defined. -Looking at 'git blame' suggests this has been wrong like this for -some years, so we don't need to scramble to fix it for 2.6. 
- -thanks --- PMM - diff --git a/results/classifier/013/user-level/71456293 b/results/classifier/013/user-level/71456293 deleted file mode 100644 index f4a66ca22..000000000 --- a/results/classifier/013/user-level/71456293 +++ /dev/null @@ -1,1514 +0,0 @@ -user-level: 0.699 -KVM: 0.691 -operating system: 0.670 -mistranslation: 0.659 -hypervisor: 0.656 -peripherals: 0.646 -TCG: 0.642 -ppc: 0.642 -x86: 0.637 -i386: 0.633 -virtual: 0.629 -vnc: 0.625 -risc-v: 0.621 -VMM: 0.621 -debug: 0.620 -kernel: 0.620 -PID: 0.614 -permissions: 0.613 -register: 0.609 -graphic: 0.603 -assembly: 0.602 -device: 0.601 -semantic: 0.600 -alpha: 0.598 -arm: 0.598 -boot: 0.598 -socket: 0.596 -architecture: 0.594 -performance: 0.594 -system: 0.592 -files: 0.592 -network: 0.491 - -[Qemu-devel][bug] qemu crash when migrate vm and vm's disks -When migrating a vm and the vm's disks, the target host qemu crashes due to an invalid free. -#0 object_unref (obj=0x1000) at /qemu-2.12/rpmbuild/BUILD/qemu-2.12/qom/object.c:920 -#1 0x0000560434d79e79 in memory_region_unref (mr=<optimized out>) -at /qemu-2.12/rpmbuild/BUILD/qemu-2.12/memory.c:1730 -#2 flatview_destroy (view=0x560439653880) at /qemu-2.12/rpmbuild/BUILD/qemu-2.12/memory.c:292 -#3 0x000056043514dfbe in call_rcu_thread (opaque=<optimized out>) -at /qemu-2.12/rpmbuild/BUILD/qemu-2.12/util/rcu.c:284 -#4 0x00007fbc2b36fe25 in start_thread () from /lib64/libpthread.so.0 -#5 0x00007fbc2b099bad in clone () from /lib64/libc.so.6 -test base qemu-2.12.0, but the latest qemu (v6.0.0-rc2) also reproduces.
-The following patch can resolve this problem: -https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg02272.html -Steps to reproduce: -(1) Create VM (virsh define) -(2) Add 64 virtio scsi disks -(3) migrate vm and vm's disks -------------------------------------------------------------------------------------------------------------------------------------- -This e-mail and its attachments contain confidential information from New H3C, which is -intended only for the person or entity whose address is listed above. Any use of the -information contained herein in any way (including, but not limited to, total or partial -disclosure, reproduction, or dissemination) by persons other than the intended -recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender -by phone or email immediately and delete it! - -* Yuchen (yu.chen@h3c.com) wrote: -> -When migrating a vm and the vm's disks, the target host qemu crashes due to an invalid free. -> -> -#0 object_unref (obj=0x1000) at -> -/qemu-2.12/rpmbuild/BUILD/qemu-2.12/qom/object.c:920 -> -#1 0x0000560434d79e79 in memory_region_unref (mr=<optimized out>) -> -at /qemu-2.12/rpmbuild/BUILD/qemu-2.12/memory.c:1730 -> -#2 flatview_destroy (view=0x560439653880) at -> -/qemu-2.12/rpmbuild/BUILD/qemu-2.12/memory.c:292 -> -#3 0x000056043514dfbe in call_rcu_thread (opaque=<optimized out>) -> -at /qemu-2.12/rpmbuild/BUILD/qemu-2.12/util/rcu.c:284 -> -#4 0x00007fbc2b36fe25 in start_thread () from /lib64/libpthread.so.0 -> -#5 0x00007fbc2b099bad in clone () from /lib64/libc.so.6 -> -> -test base qemu-2.12.0, but the latest qemu (v6.0.0-rc2) also reproduces. -Interesting.
-
-> The following patch resolves this problem:
-> https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg02272.html
-
-That's a pci/rcu change; ccing Paolo and Michael.
-
-> Steps to reproduce:
-> (1) Create VM (virsh define)
-> (2) Add 64 virtio scsi disks
-
-Is that hot adding the disks later, or are they included in the VM at
-creation?
-Can you provide a libvirt XML example?
-
-> (3) migrate vm and vm's disks
-
-What do you mean by 'and vm disks' - are you doing a block migration?
-
-Dave
-
-> -------------------------------------------------------------------------------------------------------------------------------------
-> This e-mail and its attachments contain confidential information from New
-> H3C, which is
-> intended only for the person or entity whose address is listed above. Any use
-> of the
-> information contained herein in any way (including, but not limited to, total
-> or partial
-> disclosure, reproduction, or dissemination) by persons other than the intended
-> recipient(s) is prohibited. If you receive this e-mail in error, please
-> notify the sender
-> by phone or email immediately and delete it!
--- 
-Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
-
-> -----Original Message-----
-> From: Dr. David Alan Gilbert [mailto:dgilbert@redhat.com]
-> Sent: 2021-04-08 19:27
-> To: yuchen (Cloud) <yu.chen@h3c.com>; pbonzini@redhat.com;
-> mst@redhat.com
-> Cc: qemu-devel@nongnu.org
-> Subject: Re: [Qemu-devel][bug] qemu crash when migrate vm and vm's disks
->
-> * Yuchen (yu.chen@h3c.com) wrote:
-> > When migrating a vm and the vm's disks, the target host qemu crashes due to an invalid
-> > free.
-
-> > #0 object_unref (obj=0x1000) at
-> > /qemu-2.12/rpmbuild/BUILD/qemu-2.12/qom/object.c:920
-> > #1 0x0000560434d79e79 in memory_region_unref (mr=<optimized out>)
-> > at /qemu-2.12/rpmbuild/BUILD/qemu-2.12/memory.c:1730
-> > #2 flatview_destroy (view=0x560439653880) at
-> > /qemu-2.12/rpmbuild/BUILD/qemu-2.12/memory.c:292
-> > #3 0x000056043514dfbe in call_rcu_thread (opaque=<optimized out>)
-> > at /qemu-2.12/rpmbuild/BUILD/qemu-2.12/util/rcu.c:284
-> > #4 0x00007fbc2b36fe25 in start_thread () from /lib64/libpthread.so.0
-> > #5 0x00007fbc2b099bad in clone () from /lib64/libc.so.6
-> >
-> > test base: qemu-2.12.0, but the latest qemu (v6.0.0-rc2) also reproduces it.
->
-> Interesting.
->
-> > The following patch resolves this problem:
-> > https://lists.gnu.org/archive/html/qemu-devel/2018-07/msg02272.html
->
-> That's a pci/rcu change; ccing Paolo and Michael.
->
-> > Steps to reproduce:
-> > (1) Create VM (virsh define)
-> > (2) Add 64 virtio scsi disks
->
-> Is that hot adding the disks later, or are they included in the VM at
-> creation?
-> Can you provide a libvirt XML example?
-> -Include disks in the VM at creation - -vm disks xml (only virtio scsi disks): - <devices> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native'/> - <source file='/vms/tempp/vm-os'/> - <target dev='vda' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x08' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data1'/> - <target dev='sda' bus='scsi'/> - <address type='drive' controller='2' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data2'/> - <target dev='sdb' bus='scsi'/> - <address type='drive' controller='3' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data3'/> - <target dev='sdc' bus='scsi'/> - <address type='drive' controller='4' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data4'/> - <target dev='sdd' bus='scsi'/> - <address type='drive' controller='5' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data5'/> - <target dev='sde' bus='scsi'/> - <address type='drive' controller='6' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data6'/> - <target dev='sdf' bus='scsi'/> - <address type='drive' controller='7' bus='0' target='0' unit='0'/> - </disk> - <disk 
type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data7'/> - <target dev='sdg' bus='scsi'/> - <address type='drive' controller='8' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data8'/> - <target dev='sdh' bus='scsi'/> - <address type='drive' controller='9' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data9'/> - <target dev='sdi' bus='scsi'/> - <address type='drive' controller='10' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data10'/> - <target dev='sdj' bus='scsi'/> - <address type='drive' controller='11' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data11'/> - <target dev='sdk' bus='scsi'/> - <address type='drive' controller='12' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data12'/> - <target dev='sdl' bus='scsi'/> - <address type='drive' controller='13' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data13'/> - <target dev='sdm' bus='scsi'/> - <address type='drive' controller='14' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' 
-discard='unmap'/> - <source file='/vms/tempp/vm-data14'/> - <target dev='sdn' bus='scsi'/> - <address type='drive' controller='15' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data15'/> - <target dev='sdo' bus='scsi'/> - <address type='drive' controller='16' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data16'/> - <target dev='sdp' bus='scsi'/> - <address type='drive' controller='17' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data17'/> - <target dev='sdq' bus='scsi'/> - <address type='drive' controller='18' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data18'/> - <target dev='sdr' bus='scsi'/> - <address type='drive' controller='19' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data19'/> - <target dev='sds' bus='scsi'/> - <address type='drive' controller='20' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data20'/> - <target dev='sdt' bus='scsi'/> - <address type='drive' controller='21' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data21'/> - <target dev='sdu' 
bus='scsi'/> - <address type='drive' controller='22' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data22'/> - <target dev='sdv' bus='scsi'/> - <address type='drive' controller='23' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data23'/> - <target dev='sdw' bus='scsi'/> - <address type='drive' controller='24' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data24'/> - <target dev='sdx' bus='scsi'/> - <address type='drive' controller='25' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data25'/> - <target dev='sdy' bus='scsi'/> - <address type='drive' controller='26' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data26'/> - <target dev='sdz' bus='scsi'/> - <address type='drive' controller='27' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data27'/> - <target dev='sdaa' bus='scsi'/> - <address type='drive' controller='28' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data28'/> - <target dev='sdab' bus='scsi'/> - <address type='drive' controller='29' bus='0' target='0' unit='0'/> - 
</disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data29'/> - <target dev='sdac' bus='scsi'/> - <address type='drive' controller='30' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data30'/> - <target dev='sdad' bus='scsi'/> - <address type='drive' controller='31' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data31'/> - <target dev='sdae' bus='scsi'/> - <address type='drive' controller='32' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data32'/> - <target dev='sdaf' bus='scsi'/> - <address type='drive' controller='33' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data33'/> - <target dev='sdag' bus='scsi'/> - <address type='drive' controller='34' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data34'/> - <target dev='sdah' bus='scsi'/> - <address type='drive' controller='35' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data35'/> - <target dev='sdai' bus='scsi'/> - <address type='drive' controller='36' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' 
cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data36'/> - <target dev='sdaj' bus='scsi'/> - <address type='drive' controller='37' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data37'/> - <target dev='sdak' bus='scsi'/> - <address type='drive' controller='38' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data38'/> - <target dev='sdal' bus='scsi'/> - <address type='drive' controller='39' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data39'/> - <target dev='sdam' bus='scsi'/> - <address type='drive' controller='40' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data40'/> - <target dev='sdan' bus='scsi'/> - <address type='drive' controller='41' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data41'/> - <target dev='sdao' bus='scsi'/> - <address type='drive' controller='42' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data42'/> - <target dev='sdap' bus='scsi'/> - <address type='drive' controller='43' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source 
file='/vms/tempp/vm-data43'/> - <target dev='sdaq' bus='scsi'/> - <address type='drive' controller='44' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data44'/> - <target dev='sdar' bus='scsi'/> - <address type='drive' controller='45' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data45'/> - <target dev='sdas' bus='scsi'/> - <address type='drive' controller='46' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data46'/> - <target dev='sdat' bus='scsi'/> - <address type='drive' controller='47' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data47'/> - <target dev='sdau' bus='scsi'/> - <address type='drive' controller='48' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data48'/> - <target dev='sdav' bus='scsi'/> - <address type='drive' controller='49' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data49'/> - <target dev='sdaw' bus='scsi'/> - <address type='drive' controller='50' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data50'/> - <target dev='sdax' bus='scsi'/> - <address 
type='drive' controller='51' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data51'/> - <target dev='sday' bus='scsi'/> - <address type='drive' controller='52' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data52'/> - <target dev='sdaz' bus='scsi'/> - <address type='drive' controller='53' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data53'/> - <target dev='sdba' bus='scsi'/> - <address type='drive' controller='54' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data54'/> - <target dev='sdbb' bus='scsi'/> - <address type='drive' controller='55' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data55'/> - <target dev='sdbc' bus='scsi'/> - <address type='drive' controller='56' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data56'/> - <target dev='sdbd' bus='scsi'/> - <address type='drive' controller='57' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data57'/> - <target dev='sdbe' bus='scsi'/> - <address type='drive' controller='58' bus='0' target='0' unit='0'/> - </disk> - <disk 
type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data58'/> - <target dev='sdbf' bus='scsi'/> - <address type='drive' controller='59' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data59'/> - <target dev='sdbg' bus='scsi'/> - <address type='drive' controller='60' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data60'/> - <target dev='sdbh' bus='scsi'/> - <address type='drive' controller='61' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data61'/> - <target dev='sdbi' bus='scsi'/> - <address type='drive' controller='62' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data62'/> - <target dev='sdbj' bus='scsi'/> - <address type='drive' controller='63' bus='0' target='0' unit='0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data63'/> - <target dev='sdbk' bus='scsi'/> - <address type='drive' controller='64' bus='0' target='0' unit='0'/> - </disk> - <controller type='scsi' index='0'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x02' -function='0x0'/> - </controller> - <controller type='scsi' index='1' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x06' -function='0x0'/> - </controller> - <controller type='scsi' index='2' model='virtio-scsi'> - <address type='pci' 
domain='0x0000' bus='0x01' slot='0x01' -function='0x0'/> - </controller> - <controller type='scsi' index='3' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x03' -function='0x0'/> - </controller> - <controller type='scsi' index='4' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x04' -function='0x0'/> - </controller> - <controller type='scsi' index='5' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x05' -function='0x0'/> - </controller> - <controller type='scsi' index='6' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x06' -function='0x0'/> - </controller> - <controller type='scsi' index='7' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x07' -function='0x0'/> - </controller> - <controller type='scsi' index='8' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x08' -function='0x0'/> - </controller> - <controller type='scsi' index='9' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x09' -function='0x0'/> - </controller> - <controller type='scsi' index='10' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x0a' -function='0x0'/> - </controller> - <controller type='scsi' index='11' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x0b' -function='0x0'/> - </controller> - <controller type='scsi' index='12' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x0c' -function='0x0'/> - </controller> - <controller type='scsi' index='13' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x0d' -function='0x0'/> - </controller> - <controller type='scsi' index='14' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x0e' -function='0x0'/> - </controller> - <controller type='scsi' index='15' model='virtio-scsi'> - <address type='pci' domain='0x0000' 
bus='0x01' slot='0x0f' -function='0x0'/> - </controller> - <controller type='scsi' index='16' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x10' -function='0x0'/> - </controller> - <controller type='scsi' index='17' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x11' -function='0x0'/> - </controller> - <controller type='scsi' index='18' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x12' -function='0x0'/> - </controller> - <controller type='scsi' index='19' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x13' -function='0x0'/> - </controller> - <controller type='scsi' index='20' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x14' -function='0x0'/> - </controller> - <controller type='scsi' index='21' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x15' -function='0x0'/> - </controller> - <controller type='scsi' index='22' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x16' -function='0x0'/> - </controller> - <controller type='scsi' index='23' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x17' -function='0x0'/> - </controller> - <controller type='scsi' index='24' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x18' -function='0x0'/> - </controller> - <controller type='scsi' index='25' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x19' -function='0x0'/> - </controller> - <controller type='scsi' index='26' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x1a' -function='0x0'/> - </controller> - <controller type='scsi' index='27' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x1b' -function='0x0'/> - </controller> - <controller type='scsi' index='28' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' 
slot='0x1c' -function='0x0'/> - </controller> - <controller type='scsi' index='29' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x1d' -function='0x0'/> - </controller> - <controller type='scsi' index='30' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x01' slot='0x1e' -function='0x0'/> - </controller> - <controller type='scsi' index='31' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x01' -function='0x0'/> - </controller> - <controller type='scsi' index='32' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x02' -function='0x0'/> - </controller> - <controller type='scsi' index='33' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x03' -function='0x0'/> - </controller> - <controller type='scsi' index='34' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x04' -function='0x0'/> - </controller> - <controller type='scsi' index='35' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x05' -function='0x0'/> - </controller> - <controller type='scsi' index='36' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x06' -function='0x0'/> - </controller> - <controller type='scsi' index='37' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x07' -function='0x0'/> - </controller> - <controller type='scsi' index='38' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x08' -function='0x0'/> - </controller> - <controller type='scsi' index='39' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x09' -function='0x0'/> - </controller> - <controller type='scsi' index='40' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x0a' -function='0x0'/> - </controller> - <controller type='scsi' index='41' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' 
slot='0x0b' -function='0x0'/> - </controller> - <controller type='scsi' index='42' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x0c' -function='0x0'/> - </controller> - <controller type='scsi' index='43' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x02' slot='0x0d' -function='0x0'/> - </controller> - <controller type='scsi' index='44' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x03' -function='0x0'/> - </controller> - <controller type='scsi' index='45' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x09' -function='0x0'/> - </controller> - <controller type='scsi' index='46' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' -function='0x0'/> - </controller> - <controller type='scsi' index='47' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' -function='0x0'/> - </controller> - <controller type='scsi' index='48' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' -function='0x0'/> - </controller> - <controller type='scsi' index='49' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x0e' -function='0x0'/> - </controller> - <controller type='scsi' index='50' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x0f' -function='0x0'/> - </controller> - <controller type='scsi' index='51' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x10' -function='0x0'/> - </controller> - <controller type='scsi' index='52' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x11' -function='0x0'/> - </controller> - <controller type='scsi' index='53' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x12' -function='0x0'/> - </controller> - <controller type='scsi' index='54' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' 
slot='0x13' -function='0x0'/> - </controller> - <controller type='scsi' index='55' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x14' -function='0x0'/> - </controller> - <controller type='scsi' index='56' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x15' -function='0x0'/> - </controller> - <controller type='scsi' index='57' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x16' -function='0x0'/> - </controller> - <controller type='scsi' index='58' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x17' -function='0x0'/> - </controller> - <controller type='scsi' index='59' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x18' -function='0x0'/> - </controller> - <controller type='scsi' index='60' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x19' -function='0x0'/> - </controller> - <controller type='scsi' index='61' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x1a' -function='0x0'/> - </controller> - <controller type='scsi' index='62' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' -function='0x0'/> - </controller> - <controller type='scsi' index='63' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x1c' -function='0x0'/> - </controller> - <controller type='scsi' index='64' model='virtio-scsi'> - <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' -function='0x0'/> - </controller> - <controller type='pci' index='0' model='pci-root'/> - <controller type='pci' index='1' model='pci-bridge'> - <model name='pci-bridge'/> - <target chassisNr='1'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' -function='0x0'/> - </controller> - <controller type='pci' index='2' model='pci-bridge'> - <model name='pci-bridge'/> - <target chassisNr='2'/> - <address type='pci' domain='0x0000' bus='0x01' 
slot='0x1f' -function='0x0'/> - </controller> - </devices> - -vm disks xml (only virtio disks): - <devices> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native'/> - <source file='/vms/tempp/vm-os'/> - <target dev='vda' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x08' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data2'/> - <target dev='vdb' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x06' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data3'/> - <target dev='vdc' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x09' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data4'/> - <target dev='vdd' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data5'/> - <target dev='vde' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data6'/> - <target dev='vdf' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data7'/> - <target dev='vdg' bus='virtio'/> - <address 
type='pci' domain='0x0000' bus='0x00' slot='0x0e' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data8'/> - <target dev='vdh' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x0f' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data9'/> - <target dev='vdi' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x10' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data10'/> - <target dev='vdj' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x11' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data11'/> - <target dev='vdk' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x12' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data12'/> - <target dev='vdl' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x13' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data13'/> - <target dev='vdm' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x14' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data14'/> - <target dev='vdn' bus='virtio'/> - <address type='pci' 
domain='0x0000' bus='0x00' slot='0x15' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data15'/> - <target dev='vdo' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x16' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data16'/> - <target dev='vdp' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x17' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data17'/> - <target dev='vdq' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x18' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data18'/> - <target dev='vdr' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x19' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data19'/> - <target dev='vds' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x1a' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data20'/> - <target dev='vdt' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data21'/> - <target dev='vdu' bus='virtio'/> - <address type='pci' 
domain='0x0000' bus='0x00' slot='0x1c' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data22'/> - <target dev='vdv' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data23'/> - <target dev='vdw' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data24'/> - <target dev='vdx' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x01' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data25'/> - <target dev='vdy' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x03' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data26'/> - <target dev='vdz' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x04' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data27'/> - <target dev='vdaa' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x05' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data28'/> - <target dev='vdab' bus='virtio'/> - <address type='pci' 
domain='0x0000' bus='0x01' slot='0x06' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data29'/> - <target dev='vdac' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x07' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data30'/> - <target dev='vdad' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x08' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data31'/> - <target dev='vdae' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x09' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data32'/> - <target dev='vdaf' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x0a' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data33'/> - <target dev='vdag' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x0b' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data34'/> - <target dev='vdah' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x0c' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data35'/> - <target dev='vdai' bus='virtio'/> - <address type='pci' 
domain='0x0000' bus='0x01' slot='0x0d' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data36'/> - <target dev='vdaj' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x0e' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data37'/> - <target dev='vdak' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x0f' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data38'/> - <target dev='vdal' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x10' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data39'/> - <target dev='vdam' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x11' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data40'/> - <target dev='vdan' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x12' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data41'/> - <target dev='vdao' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x13' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data42'/> - <target dev='vdap' bus='virtio'/> - <address type='pci' 
domain='0x0000' bus='0x01' slot='0x14' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data43'/> - <target dev='vdaq' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x15' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data44'/> - <target dev='vdar' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x16' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data45'/> - <target dev='vdas' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x17' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data46'/> - <target dev='vdat' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x18' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data47'/> - <target dev='vdau' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x19' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data48'/> - <target dev='vdav' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x1a' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data49'/> - <target dev='vdaw' bus='virtio'/> - <address type='pci' 
domain='0x0000' bus='0x01' slot='0x1b' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data50'/> - <target dev='vdax' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x1c' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data51'/> - <target dev='vday' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x1d' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data52'/> - <target dev='vdaz' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x01' slot='0x1e' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data53'/> - <target dev='vdba' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x02' slot='0x01' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data54'/> - <target dev='vdbb' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x02' slot='0x02' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data55'/> - <target dev='vdbc' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x02' slot='0x03' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data56'/> - <target dev='vdbd' bus='virtio'/> - <address type='pci' 
domain='0x0000' bus='0x02' slot='0x04' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data57'/> - <target dev='vdbe' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x02' slot='0x05' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data58'/> - <target dev='vdbf' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x02' slot='0x06' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data59'/> - <target dev='vdbg' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x02' slot='0x07' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data60'/> - <target dev='vdbh' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x02' slot='0x08' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data61'/> - <target dev='vdbi' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x02' slot='0x09' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data62'/> - <target dev='vdbj' bus='virtio'/> - <address type='pci' domain='0x0000' bus='0x02' slot='0x0a' -function='0x0'/> - </disk> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2' cache='directsync' io='native' -discard='unmap'/> - <source file='/vms/tempp/vm-data63'/> - <target dev='vdbk' bus='virtio'/> - <address type='pci' 
domain='0x0000' bus='0x02' slot='0x0b'
-function='0x0'/>
- </disk>
- <disk type='file' device='disk'>
- <driver name='qemu' type='qcow2' cache='directsync' io='native'
-discard='unmap'/>
- <source file='/vms/tempp/vm-data1'/>
- <target dev='vdbl' bus='virtio'/>
- <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
-function='0x0'/>
- </disk>
- <controller type='pci' index='0' model='pci-root'/>
- <controller type='pci' index='1' model='pci-bridge'>
- <model name='pci-bridge'/>
- <target chassisNr='1'/>
- <address type='pci' domain='0x0000' bus='0x00' slot='0x1f'
-function='0x0'/>
- </controller>
- <controller type='pci' index='2' model='pci-bridge'>
- <model name='pci-bridge'/>
- <target chassisNr='2'/>
- <address type='pci' domain='0x0000' bus='0x01' slot='0x1f'
-function='0x0'/>
- </controller>
- </devices>
-
->
-> (3) migrate vm and vm's disks
->
->
-What do you mean by 'and vm disks' - are you doing a block migration?
->
-Yes, block migration.
-In fact, it also reproduces when migrating only the domain (without block migration).
-
->
-Dave
->
->
-> ----------------------------------------------------------------------
->
-> ---------------------------------------------------------------
->
-Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
--------------------------------------------------------------------------------------------------------------------------------------
-This e-mail and its attachments contain confidential information from New H3C,
-which is
-intended only for the person or entity whose address is listed above. Any use
-of the
-information contained herein in any way (including, but not limited to, total
-or partial
-disclosure, reproduction, or dissemination) by persons other than the intended
-recipient(s) is prohibited.
If you receive this e-mail in error, please notify
-the sender
-by phone or email immediately and delete it!
-
diff --git a/results/classifier/013/user-level/74466963 b/results/classifier/013/user-level/74466963
deleted file mode 100644
index d59c29b49..000000000
--- a/results/classifier/013/user-level/74466963
+++ /dev/null
@@ -1,1906 +0,0 @@
-user-level: 0.927
-mistranslation: 0.927
-assembly: 0.910
-virtual: 0.910
-device: 0.909
-permissions: 0.907
-KVM: 0.903
-debug: 0.897
-register: 0.896
-files: 0.896
-graphic: 0.895
-TCG: 0.895
-boot: 0.894
-arm: 0.893
-architecture: 0.893
-performance: 0.892
-VMM: 0.892
-semantic: 0.891
-risc-v: 0.890
-PID: 0.886
-socket: 0.879
-vnc: 0.878
-network: 0.871
-system: 0.870
-kernel: 0.869
-alpha: 0.868
-operating system: 0.865
-x86: 0.857
-ppc: 0.830
-peripherals: 0.802
-hypervisor: 0.775
-i386: 0.618
-
-[Qemu-devel] [TCG only][Migration Bug? ] Occasionally, the content of VM's memory is inconsistent between Source and Destination of migration
-
-Hi all,
-
-Does anybody remember the similar issue posted by hailiang months ago
-http://patchwork.ozlabs.org/patch/454322/
-At least two bugs about migration had been fixed since that.
-And now we found the same issue with the tcg vm (kvm is fine): after
-migration, the content of the VM's memory is inconsistent.
-we added a patch to check the memory content; you can find it appended below
-
-steps to reproduce:
-1) apply the patch and re-build qemu
-2) prepare the ubuntu guest and run memtest in grub.
-source side:
-x86_64-softmmu/qemu-system-x86_64 -netdev tap,id=hn0 -device
-e1000,id=net-pci0,netdev=hn0,mac=52:54:00:12:34:65 -boot c -drive
-if=none,file=/home/lizj/ubuntu.raw,id=drive-virtio-disk0 -device
-virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
--vnc :7 -m 128 -smp 1 -device piix3-usb-uhci -device usb-tablet -qmp
-tcp::4444,server,nowait -monitor stdio -cpu qemu64 -machine
-pc-i440fx-2.3,accel=tcg,usb=off
-destination side:
-x86_64-softmmu/qemu-system-x86_64 -netdev tap,id=hn0 -device
-e1000,id=net-pci0,netdev=hn0,mac=52:54:00:12:34:65 -boot c -drive
-if=none,file=/home/lizj/ubuntu.raw,id=drive-virtio-disk0 -device
-virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
--vnc :7 -m 128 -smp 1 -device piix3-usb-uhci -device usb-tablet -qmp
-tcp::4444,server,nowait -monitor stdio -cpu qemu64 -machine
-pc-i440fx-2.3,accel=tcg,usb=off -incoming tcp:0:8881
-3) start migration
-with 1000M NIC, migration will finish within 3 min.
-
-at source:
-(qemu) migrate tcp:192.168.2.66:8881
-after saving ram complete
-e9e725df678d392b1a83b3a917f332bb
-qemu-system-x86_64: end ram md5
-(qemu)
-
-at destination:
-...skip...
-Completed load of VM with exit code 0 seq iteration 1264
-Completed load of VM with exit code 0 seq iteration 1265
-Completed load of VM with exit code 0 seq iteration 1266
-qemu-system-x86_64: after loading state section id 2(ram)
-49c2dac7bde0e5e22db7280dcb3824f9
-qemu-system-x86_64: end ram md5
-qemu-system-x86_64: qemu_loadvm_state: after cpu_synchronize_all_post_init
-
-49c2dac7bde0e5e22db7280dcb3824f9
-qemu-system-x86_64: end ram md5
-
-This occurs occasionally and only on tcg machines. It seems that
-some pages dirtied on the source side aren't transferred to the destination.
-This problem can be reproduced even if we disable virtio.
-Is it OK for some pages not to be transferred to the destination during
-migration? Or is it a bug?
-Any idea...
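The check used above can be reduced to a tiny self-contained sketch: digest the whole guest RAM block on both sides and compare the output, which is what the md5 patch below does with clplumbing's MD5. FNV-1a stands in for MD5 here purely to keep the sketch dependency-free; that substitution is an assumption of this example, not what the patch uses:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the idea behind the thread's check_host_md5(): digest the
 * whole RAM block and compare source vs. destination.  FNV-1a replaces
 * MD5 only to avoid an external library; any digest with good diffusion
 * behaves the same for this equality check. */
uint64_t ram_digest(const unsigned char *buf, size_t len)
{
    uint64_t h = 1469598103934665603ULL;   /* FNV-1a 64-bit offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= buf[i];                       /* fold in one byte */
        h *= 1099511628211ULL;             /* FNV-1a 64-bit prime */
    }
    return h;
}
```

Equal digests at "after saving ram complete" (source) and "after loading state section id 2(ram)" (destination) mean the RAM blocks matched byte for byte; the differing digests in the logs above show that at least one byte diverged.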
- -=================md5 check patch============================= - -diff --git a/Makefile.target b/Makefile.target -index 962d004..e2cb8e9 100644 ---- a/Makefile.target -+++ b/Makefile.target -@@ -139,7 +139,7 @@ obj-y += memory.o cputlb.o - obj-y += memory_mapping.o - obj-y += dump.o - obj-y += migration/ram.o migration/savevm.o --LIBS := $(libs_softmmu) $(LIBS) -+LIBS := $(libs_softmmu) $(LIBS) -lplumb - - # xen support - obj-$(CONFIG_XEN) += xen-common.o -diff --git a/migration/ram.c b/migration/ram.c -index 1eb155a..3b7a09d 100644 ---- a/migration/ram.c -+++ b/migration/ram.c -@@ -2513,7 +2513,7 @@ static int ram_load(QEMUFile *f, void *opaque, int -version_id) -} - - rcu_read_unlock(); -- DPRINTF("Completed load of VM with exit code %d seq iteration " -+ fprintf(stderr, "Completed load of VM with exit code %d seq iteration " - "%" PRIu64 "\n", ret, seq_iter); - return ret; - } -diff --git a/migration/savevm.c b/migration/savevm.c -index 0ad1b93..3feaa61 100644 ---- a/migration/savevm.c -+++ b/migration/savevm.c -@@ -891,6 +891,29 @@ void qemu_savevm_state_header(QEMUFile *f) - - } - -+#include "exec/ram_addr.h" -+#include "qemu/rcu_queue.h" -+#include <clplumbing/md5.h> -+#ifndef MD5_DIGEST_LENGTH -+#define MD5_DIGEST_LENGTH 16 -+#endif -+ -+static void check_host_md5(void) -+{ -+ int i; -+ unsigned char md[MD5_DIGEST_LENGTH]; -+ rcu_read_lock(); -+ RAMBlock *block = QLIST_FIRST_RCU(&ram_list.blocks);/* Only check -'pc.ram' block */ -+ rcu_read_unlock(); -+ -+ MD5(block->host, block->used_length, md); -+ for(i = 0; i < MD5_DIGEST_LENGTH; i++) { -+ fprintf(stderr, "%02x", md[i]); -+ } -+ fprintf(stderr, "\n"); -+ error_report("end ram md5"); -+} -+ - void qemu_savevm_state_begin(QEMUFile *f, - const MigrationParams *params) - { -@@ -1056,6 +1079,10 @@ void qemu_savevm_state_complete_precopy(QEMUFile -*f, bool iterable_only) -save_section_header(f, se, QEMU_VM_SECTION_END); - - ret = se->ops->save_live_complete_precopy(f, se->opaque); -+ -+ fprintf(stderr, 
"after saving %s complete\n", se->idstr); -+ check_host_md5(); -+ - trace_savevm_section_end(se->idstr, se->section_id, ret); - save_section_footer(f, se); - if (ret < 0) { -@@ -1791,6 +1818,11 @@ static int qemu_loadvm_state_main(QEMUFile *f, -MigrationIncomingState *mis) -section_id, le->se->idstr); - return ret; - } -+ if (section_type == QEMU_VM_SECTION_END) { -+ error_report("after loading state section id %d(%s)", -+ section_id, le->se->idstr); -+ check_host_md5(); -+ } - if (!check_section_footer(f, le)) { - return -EINVAL; - } -@@ -1901,6 +1933,8 @@ int qemu_loadvm_state(QEMUFile *f) - } - - cpu_synchronize_all_post_init(); -+ error_report("%s: after cpu_synchronize_all_post_init\n", __func__); -+ check_host_md5(); - - return ret; - } - -* Li Zhijian (address@hidden) wrote: -> -Hi all, -> -> -Does anyboday remember the similar issue post by hailiang months ago -> -http://patchwork.ozlabs.org/patch/454322/ -> -At least tow bugs about migration had been fixed since that. -Yes, I wondered what happened to that. - -> -And now we found the same issue at the tcg vm(kvm is fine), after migration, -> -the content VM's memory is inconsistent. -Hmm, TCG only - I don't know much about that; but I guess something must -be accessing memory without using the proper macros/functions so -it doesn't mark it as dirty. - -> -we add a patch to check memory content, you can find it from affix -> -> -steps to reporduce: -> -1) apply the patch and re-build qemu -> -2) prepare the ubuntu guest and run memtest in grub. 
-> -soruce side: -> -x86_64-softmmu/qemu-system-x86_64 -netdev tap,id=hn0 -device -> -e1000,id=net-pci0,netdev=hn0,mac=52:54:00:12:34:65 -boot c -drive -> -if=none,file=/home/lizj/ubuntu.raw,id=drive-virtio-disk0 -device -> -virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -> --vnc :7 -m 128 -smp 1 -device piix3-usb-uhci -device usb-tablet -qmp -> -tcp::4444,server,nowait -monitor stdio -cpu qemu64 -machine -> -pc-i440fx-2.3,accel=tcg,usb=off -> -> -destination side: -> -x86_64-softmmu/qemu-system-x86_64 -netdev tap,id=hn0 -device -> -e1000,id=net-pci0,netdev=hn0,mac=52:54:00:12:34:65 -boot c -drive -> -if=none,file=/home/lizj/ubuntu.raw,id=drive-virtio-disk0 -device -> -virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -> --vnc :7 -m 128 -smp 1 -device piix3-usb-uhci -device usb-tablet -qmp -> -tcp::4444,server,nowait -monitor stdio -cpu qemu64 -machine -> -pc-i440fx-2.3,accel=tcg,usb=off -incoming tcp:0:8881 -> -> -3) start migration -> -with 1000M NIC, migration will finish within 3 min. -> -> -at source: -> -(qemu) migrate tcp:192.168.2.66:8881 -> -after saving ram complete -> -e9e725df678d392b1a83b3a917f332bb -> -qemu-system-x86_64: end ram md5 -> -(qemu) -> -> -at destination: -> -...skip... -> -Completed load of VM with exit code 0 seq iteration 1264 -> -Completed load of VM with exit code 0 seq iteration 1265 -> -Completed load of VM with exit code 0 seq iteration 1266 -> -qemu-system-x86_64: after loading state section id 2(ram) -> -49c2dac7bde0e5e22db7280dcb3824f9 -> -qemu-system-x86_64: end ram md5 -> -qemu-system-x86_64: qemu_loadvm_state: after cpu_synchronize_all_post_init -> -> -49c2dac7bde0e5e22db7280dcb3824f9 -> -qemu-system-x86_64: end ram md5 -> -> -This occurs occasionally and only at tcg machine. It seems that -> -some pages dirtied in source side don't transferred to destination. -> -This problem can be reproduced even if we disable virtio. 
-> -> -Is it OK for some pages that not transferred to destination when do -> -migration ? Or is it a bug? -I'm pretty sure that means it's a bug. Hard to find though, I guess -at least memtest is smaller than a big OS. I think I'd dump the whole -of memory on both sides, hexdump and diff them - I'd guess it would -just be one byte/word different, maybe that would offer some idea what -wrote it. - -Dave - -> -Any idea... -> -> -=================md5 check patch============================= -> -> -diff --git a/Makefile.target b/Makefile.target -> -index 962d004..e2cb8e9 100644 -> ---- a/Makefile.target -> -+++ b/Makefile.target -> -@@ -139,7 +139,7 @@ obj-y += memory.o cputlb.o -> -obj-y += memory_mapping.o -> -obj-y += dump.o -> -obj-y += migration/ram.o migration/savevm.o -> --LIBS := $(libs_softmmu) $(LIBS) -> -+LIBS := $(libs_softmmu) $(LIBS) -lplumb -> -> -# xen support -> -obj-$(CONFIG_XEN) += xen-common.o -> -diff --git a/migration/ram.c b/migration/ram.c -> -index 1eb155a..3b7a09d 100644 -> ---- a/migration/ram.c -> -+++ b/migration/ram.c -> -@@ -2513,7 +2513,7 @@ static int ram_load(QEMUFile *f, void *opaque, int -> -version_id) -> -} -> -> -rcu_read_unlock(); -> -- DPRINTF("Completed load of VM with exit code %d seq iteration " -> -+ fprintf(stderr, "Completed load of VM with exit code %d seq iteration " -> -"%" PRIu64 "\n", ret, seq_iter); -> -return ret; -> -} -> -diff --git a/migration/savevm.c b/migration/savevm.c -> -index 0ad1b93..3feaa61 100644 -> ---- a/migration/savevm.c -> -+++ b/migration/savevm.c -> -@@ -891,6 +891,29 @@ void qemu_savevm_state_header(QEMUFile *f) -> -> -} -> -> -+#include "exec/ram_addr.h" -> -+#include "qemu/rcu_queue.h" -> -+#include <clplumbing/md5.h> -> -+#ifndef MD5_DIGEST_LENGTH -> -+#define MD5_DIGEST_LENGTH 16 -> -+#endif -> -+ -> -+static void check_host_md5(void) -> -+{ -> -+ int i; -> -+ unsigned char md[MD5_DIGEST_LENGTH]; -> -+ rcu_read_lock(); -> -+ RAMBlock *block = QLIST_FIRST_RCU(&ram_list.blocks);/* Only check 
-> -'pc.ram' block */ -> -+ rcu_read_unlock(); -> -+ -> -+ MD5(block->host, block->used_length, md); -> -+ for(i = 0; i < MD5_DIGEST_LENGTH; i++) { -> -+ fprintf(stderr, "%02x", md[i]); -> -+ } -> -+ fprintf(stderr, "\n"); -> -+ error_report("end ram md5"); -> -+} -> -+ -> -void qemu_savevm_state_begin(QEMUFile *f, -> -const MigrationParams *params) -> -{ -> -@@ -1056,6 +1079,10 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f, -> -bool iterable_only) -> -save_section_header(f, se, QEMU_VM_SECTION_END); -> -> -ret = se->ops->save_live_complete_precopy(f, se->opaque); -> -+ -> -+ fprintf(stderr, "after saving %s complete\n", se->idstr); -> -+ check_host_md5(); -> -+ -> -trace_savevm_section_end(se->idstr, se->section_id, ret); -> -save_section_footer(f, se); -> -if (ret < 0) { -> -@@ -1791,6 +1818,11 @@ static int qemu_loadvm_state_main(QEMUFile *f, -> -MigrationIncomingState *mis) -> -section_id, le->se->idstr); -> -return ret; -> -} -> -+ if (section_type == QEMU_VM_SECTION_END) { -> -+ error_report("after loading state section id %d(%s)", -> -+ section_id, le->se->idstr); -> -+ check_host_md5(); -> -+ } -> -if (!check_section_footer(f, le)) { -> -return -EINVAL; -> -} -> -@@ -1901,6 +1933,8 @@ int qemu_loadvm_state(QEMUFile *f) -> -} -> -> -cpu_synchronize_all_post_init(); -> -+ error_report("%s: after cpu_synchronize_all_post_init\n", __func__); -> -+ check_host_md5(); -> -> -return ret; -> -} -> -> -> --- -Dr. David Alan Gilbert / address@hidden / Manchester, UK - -On 2015/12/3 17:24, Dr. David Alan Gilbert wrote: -* Li Zhijian (address@hidden) wrote: -Hi all, - -Does anyboday remember the similar issue post by hailiang months ago -http://patchwork.ozlabs.org/patch/454322/ -At least tow bugs about migration had been fixed since that. -Yes, I wondered what happened to that. -And now we found the same issue at the tcg vm(kvm is fine), after migration, -the content VM's memory is inconsistent. 
-Hmm, TCG only - I don't know much about that; but I guess something must -be accessing memory without using the proper macros/functions so -it doesn't mark it as dirty. -we add a patch to check memory content, you can find it from affix - -steps to reporduce: -1) apply the patch and re-build qemu -2) prepare the ubuntu guest and run memtest in grub. -soruce side: -x86_64-softmmu/qemu-system-x86_64 -netdev tap,id=hn0 -device -e1000,id=net-pci0,netdev=hn0,mac=52:54:00:12:34:65 -boot c -drive -if=none,file=/home/lizj/ubuntu.raw,id=drive-virtio-disk0 -device -virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 --vnc :7 -m 128 -smp 1 -device piix3-usb-uhci -device usb-tablet -qmp -tcp::4444,server,nowait -monitor stdio -cpu qemu64 -machine -pc-i440fx-2.3,accel=tcg,usb=off - -destination side: -x86_64-softmmu/qemu-system-x86_64 -netdev tap,id=hn0 -device -e1000,id=net-pci0,netdev=hn0,mac=52:54:00:12:34:65 -boot c -drive -if=none,file=/home/lizj/ubuntu.raw,id=drive-virtio-disk0 -device -virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 --vnc :7 -m 128 -smp 1 -device piix3-usb-uhci -device usb-tablet -qmp -tcp::4444,server,nowait -monitor stdio -cpu qemu64 -machine -pc-i440fx-2.3,accel=tcg,usb=off -incoming tcp:0:8881 - -3) start migration -with 1000M NIC, migration will finish within 3 min. - -at source: -(qemu) migrate tcp:192.168.2.66:8881 -after saving ram complete -e9e725df678d392b1a83b3a917f332bb -qemu-system-x86_64: end ram md5 -(qemu) - -at destination: -...skip... 
-Completed load of VM with exit code 0 seq iteration 1264
-Completed load of VM with exit code 0 seq iteration 1265
-Completed load of VM with exit code 0 seq iteration 1266
-qemu-system-x86_64: after loading state section id 2(ram)
-49c2dac7bde0e5e22db7280dcb3824f9
-qemu-system-x86_64: end ram md5
-qemu-system-x86_64: qemu_loadvm_state: after cpu_synchronize_all_post_init
-
-49c2dac7bde0e5e22db7280dcb3824f9
-qemu-system-x86_64: end ram md5
-
-This occurs occasionally, and only on a TCG machine. It seems that
-some pages dirtied on the source side don't get transferred to the destination.
-This problem can be reproduced even if we disable virtio.
-
-Is it OK for some pages not to be transferred to the destination during
-migration? Or is it a bug?
-I'm pretty sure that means it's a bug. Hard to find though, I guess
-at least memtest is smaller than a big OS. I think I'd dump the whole
-of memory on both sides, hexdump and diff them - I'd guess it would
-just be one byte/word different, maybe that would offer some idea what
-wrote it.
-Maybe one better way to do that is with the help of userfaultfd's write-protect
-capability. It is still in development by Andrea Arcangeli, but there
-is an RFC version available, please refer to
-http://www.spinics.net/lists/linux-mm/msg97422.html
-(I'm developing live memory snapshot, which is based on it; maybe this is
-another scene where we can use userfaultfd's WP ;) ).
-Dave
-Any idea...
- -=================md5 check patch============================= - -diff --git a/Makefile.target b/Makefile.target -index 962d004..e2cb8e9 100644 ---- a/Makefile.target -+++ b/Makefile.target -@@ -139,7 +139,7 @@ obj-y += memory.o cputlb.o - obj-y += memory_mapping.o - obj-y += dump.o - obj-y += migration/ram.o migration/savevm.o --LIBS := $(libs_softmmu) $(LIBS) -+LIBS := $(libs_softmmu) $(LIBS) -lplumb - - # xen support - obj-$(CONFIG_XEN) += xen-common.o -diff --git a/migration/ram.c b/migration/ram.c -index 1eb155a..3b7a09d 100644 ---- a/migration/ram.c -+++ b/migration/ram.c -@@ -2513,7 +2513,7 @@ static int ram_load(QEMUFile *f, void *opaque, int -version_id) - } - - rcu_read_unlock(); -- DPRINTF("Completed load of VM with exit code %d seq iteration " -+ fprintf(stderr, "Completed load of VM with exit code %d seq iteration " - "%" PRIu64 "\n", ret, seq_iter); - return ret; - } -diff --git a/migration/savevm.c b/migration/savevm.c -index 0ad1b93..3feaa61 100644 ---- a/migration/savevm.c -+++ b/migration/savevm.c -@@ -891,6 +891,29 @@ void qemu_savevm_state_header(QEMUFile *f) - - } - -+#include "exec/ram_addr.h" -+#include "qemu/rcu_queue.h" -+#include <clplumbing/md5.h> -+#ifndef MD5_DIGEST_LENGTH -+#define MD5_DIGEST_LENGTH 16 -+#endif -+ -+static void check_host_md5(void) -+{ -+ int i; -+ unsigned char md[MD5_DIGEST_LENGTH]; -+ rcu_read_lock(); -+ RAMBlock *block = QLIST_FIRST_RCU(&ram_list.blocks);/* Only check -'pc.ram' block */ -+ rcu_read_unlock(); -+ -+ MD5(block->host, block->used_length, md); -+ for(i = 0; i < MD5_DIGEST_LENGTH; i++) { -+ fprintf(stderr, "%02x", md[i]); -+ } -+ fprintf(stderr, "\n"); -+ error_report("end ram md5"); -+} -+ - void qemu_savevm_state_begin(QEMUFile *f, - const MigrationParams *params) - { -@@ -1056,6 +1079,10 @@ void qemu_savevm_state_complete_precopy(QEMUFile *f, -bool iterable_only) - save_section_header(f, se, QEMU_VM_SECTION_END); - - ret = se->ops->save_live_complete_precopy(f, se->opaque); -+ -+ fprintf(stderr, 
"after saving %s complete\n", se->idstr); -+ check_host_md5(); -+ - trace_savevm_section_end(se->idstr, se->section_id, ret); - save_section_footer(f, se); - if (ret < 0) { -@@ -1791,6 +1818,11 @@ static int qemu_loadvm_state_main(QEMUFile *f, -MigrationIncomingState *mis) - section_id, le->se->idstr); - return ret; - } -+ if (section_type == QEMU_VM_SECTION_END) { -+ error_report("after loading state section id %d(%s)", -+ section_id, le->se->idstr); -+ check_host_md5(); -+ } - if (!check_section_footer(f, le)) { - return -EINVAL; - } -@@ -1901,6 +1933,8 @@ int qemu_loadvm_state(QEMUFile *f) - } - - cpu_synchronize_all_post_init(); -+ error_report("%s: after cpu_synchronize_all_post_init\n", __func__); -+ check_host_md5(); - - return ret; - } --- -Dr. David Alan Gilbert / address@hidden / Manchester, UK - -. - -On 12/03/2015 05:37 PM, Hailiang Zhang wrote: -On 2015/12/3 17:24, Dr. David Alan Gilbert wrote: -* Li Zhijian (address@hidden) wrote: -Hi all, - -Does anyboday remember the similar issue post by hailiang months ago -http://patchwork.ozlabs.org/patch/454322/ -At least tow bugs about migration had been fixed since that. -Yes, I wondered what happened to that. -And now we found the same issue at the tcg vm(kvm is fine), after -migration, -the content VM's memory is inconsistent. -Hmm, TCG only - I don't know much about that; but I guess something must -be accessing memory without using the proper macros/functions so -it doesn't mark it as dirty. -we add a patch to check memory content, you can find it from affix - -steps to reporduce: -1) apply the patch and re-build qemu -2) prepare the ubuntu guest and run memtest in grub. 
-[...quoted reproduction steps, migration output and discussion elided;
-identical to the first copy above...]
-Maybe one better way to do that is with the help of userfaultfd's
-write-protect capability. It is still in development by Andrea Arcangeli,
-but there is an RFC version available, please refer to
-http://www.spinics.net/lists/linux-mm/msg97422.html
-(I'm developing live memory snapshot, which is based on it; maybe this is
-another scene where we can use userfaultfd's WP ;) ).
-sounds good.
-
-thanks
-Li
-[...md5 check patch and quoted signature elided...]
---
-Best regards.
-Li Zhijian (8555)
-
-On 12/03/2015 05:24 PM, Dr. David Alan Gilbert wrote:
-* Li Zhijian (address@hidden) wrote:
-Hi all,
-[...quoted bug report elided...]
-And now we found the same issue with a tcg vm (kvm is fine): after migration,
-the content of the VM's memory is inconsistent.
-Hmm, TCG only - I don't know much about that; but I guess something must
-be accessing memory without using the proper macros/functions so
-it doesn't mark it as dirty.
-[...quoted reproduction steps and migration output elided; identical to the
-first copy above...]
-I'm pretty sure that means it's a bug. Hard to find though, I guess
-at least memtest is smaller than a big OS. I think I'd dump the whole
-of memory on both sides, hexdump and diff them - I'd guess it would
-just be one byte/word different, maybe that would offer some idea what
-wrote it.
-I tried to dump and compare them; more than 10 pages are different.
-On the source side they hold random values, rather than always 'FF' 'FB' 'EF'
-'BF'... as on the destination,
-and not all of the differing pages are contiguous.
-
-thanks
-Li
-[...md5 check patch elided...]
---
-Best regards.
-Li Zhijian (8555)
-
-* Li Zhijian (address@hidden) wrote:
-[...quoted thread elided...]
-> -I tried to dump and compare them; more than 10 pages are different.
-> -On the source side they hold random values, rather than always 'FF' 'FB'
-> -'EF' 'BF'... as on the destination,
-> -and not all of the differing pages are contiguous.
-I wonder if it happens on all of memtest's different test patterns,
-perhaps it might be possible to narrow it down if you tell memtest
-to only run one test at a time.
-
-Dave
-
-[...remainder of the quoted mail, including the md5 check patch, elided...]
---
-Dr. David Alan Gilbert / address@hidden / Manchester, UK
-
-Li Zhijian <address@hidden> wrote:
-[...quoted bug report and reproduction steps elided...]
-Thanks for describing how to reproduce the bug.
-If some pages are not transferred to the destination then it is a bug, so we
-need to know what the problem is; notice that the problem can be that
-TCG is not marking some page dirty, that the migration code "forgets" about
-that page, or anything else altogether - that is what we need to find.
-
-There are more possibilities: I am not sure that memtest runs in 32-bit
-mode, and it is possible that we are missing some state when we
-are in real mode.
-
-Will try to take a look at this.
-
-Thanks, again.
-
-[...md5 check patch elided...]
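As a side note for anyone repeating the dump-and-compare experiment discussed above, the per-page statistics can be produced with a small helper. This is only a sketch: the function name, the 4 KiB page size, and the dump file names are made up for illustration, and the dumps themselves could be written with the HMP pmemsave command (e.g. `pmemsave 0 0x8000000 /tmp/src.ram` for the 128 MB guests in the command lines above).

```shell
# page_diff: compare two equal-sized raw RAM dumps and report, for each
# 4 KiB page that differs, how many bytes inside it differ.
# "cmp -l" prints one line per differing byte: <1-based offset> <octal a> <octal b>.
page_diff() {
    cmp -l "$1" "$2" | awk '
        { d[int(($1 - 1) / 4096)]++ }
        END {
            n = 0
            for (p in d) { printf "page %d: %d byte(s) differ\n", p, d[p]; n++ }
            printf "%d page(s) differ in total\n", n
        }'
}
```

Running `page_diff src.ram dst.ram` and sorting the output by page number would also show whether the missed pages are contiguous, which the observations earlier in the thread suggest they are not.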
-> -If some pages are not transferred to destination then it is a bug, so we need -> -to know what the problem is, notice that the problem can be that TCG is not -> -marking dirty some page, that Migration code "forgets" about that page, or -> -anything eles altogether, that is what we need to find. -> -> -There are more posibilities, I am not sure that memtest is on 32bit mode, and -> -it is inside posibility that we are missing some state when we are on real -> -mode. -> -> -Will try to take a look at this. -> -> -THanks, again. -> -Hi Juan & Amit - - Do you think we should add a mechanism to check the data integrity during LM -like Zhijian's patch did? it may be very helpful for developers. - Actually, I did the similar thing before in order to make sure that I did the -right thing we I change the code related to LM. - -Liang - -On (Fri) 04 Dec 2015 [01:43:07], Li, Liang Z wrote: -> -> -> -> Thanks for describing how to reproduce the bug. -> -> If some pages are not transferred to destination then it is a bug, so we -> -> need -> -> to know what the problem is, notice that the problem can be that TCG is not -> -> marking dirty some page, that Migration code "forgets" about that page, or -> -> anything eles altogether, that is what we need to find. -> -> -> -> There are more posibilities, I am not sure that memtest is on 32bit mode, -> -> and -> -> it is inside posibility that we are missing some state when we are on real -> -> mode. -> -> -> -> Will try to take a look at this. -> -> -> -> THanks, again. -> -> -> -> -Hi Juan & Amit -> -> -Do you think we should add a mechanism to check the data integrity during LM -> -like Zhijian's patch did? it may be very helpful for developers. -> -Actually, I did the similar thing before in order to make sure that I did -> -the right thing we I change the code related to LM. -If you mean for debugging, something that's not always on, then I'm -fine with it. 
A script that goes along that shows the result of the comparison of the
diff will be helpful too, something that shows how many pages are
different, how many bytes in a page on average, and so on.

		Amit

diff --git a/results/classifier/013/user-level/80615920 b/results/classifier/013/user-level/80615920
deleted file mode 100644
index 1e235c855..000000000
--- a/results/classifier/013/user-level/80615920
+++ /dev/null
@@ -1,376 +0,0 @@
user-level: 0.849
risc-v: 0.809
KVM: 0.803
mistranslation: 0.800
TCG: 0.785
x86: 0.779
operating system: 0.777
peripherals: 0.777
i386: 0.773
vnc: 0.768
ppc: 0.768
hypervisor: 0.764
VMM: 0.759
performance: 0.758
permissions: 0.758
register: 0.756
architecture: 0.755
files: 0.751
boot: 0.750
virtual: 0.749
device: 0.748
assembly: 0.747
system: 0.747
debug: 0.746
arm: 0.744
kernel: 0.738
semantic: 0.737
network: 0.732
socket: 0.732
graphic: 0.730
PID: 0.727
alpha: 0.726

[BUG] accel/tcg: cpu_exec_longjmp_cleanup: assertion failed: (cpu == current_cpu)

It seems there is a bug in SIGALRM handling when a 486 system emulates x86_64
code.
This code:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <signal.h>
#include <unistd.h>

pthread_t thread1, thread2;

// Signal handler for SIGALRM
void alarm_handler(int sig) {
    // Do nothing, just wake up the other thread
}

// Thread 1 function
void* thread1_func(void* arg) {
    // Set up the signal handler for SIGALRM
    signal(SIGALRM, alarm_handler);

    // Wait for 1 second
    sleep(1);

    // Send SIGALRM signal to thread 2
    pthread_kill(thread2, SIGALRM);

    return NULL;
}

// Thread 2 function
void* thread2_func(void* arg) {
    // Wait for the SIGALRM signal
    pause();

    printf("Thread 2 woke up!\n");

    return NULL;
}

int main() {
    // Create thread 1
    if (pthread_create(&thread1, NULL, thread1_func, NULL) != 0) {
        fprintf(stderr, "Failed to create thread 1\n");
        return 1;
    }

    // Create thread 2
    if (pthread_create(&thread2, NULL, thread2_func, NULL) != 0) {
        fprintf(stderr, "Failed to create thread 2\n");
        return 1;
    }

    // Wait for both threads to finish
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);

    return 0;
}

fails with this -strace log (there are also unsupported syscalls 334 and 435,
but it seems they don't affect the code much):

...
736 rt_sigaction(SIGALRM,0x000000001123ec20,0x000000001123ecc0) = 0
736 clock_nanosleep(CLOCK_REALTIME,0,{tv_sec = 1,tv_nsec = 0},{tv_sec = 1,tv_nsec = 0})
736 rt_sigprocmask(SIG_BLOCK,0x00000000109fad20,0x0000000010800b38,8) = 0
736 Unknown syscall 435
736 clone(CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|...
736 rt_sigprocmask(SIG_SETMASK,0x0000000010800b38,NULL,8)
736 set_robust_list(0x11a419a0,0) = -1 errno=38 (Function not implemented)
736 rt_sigprocmask(SIG_SETMASK,0x0000000011a41fb0,NULL,8) = 0
 = 0
736 pause(0,0,2,277186368,0,295966400)
736 futex(0x000000001123f990,FUTEX_CLOCK_REALTIME|FUTEX_WAIT_BITSET,738,NULL,NULL,0) = 0
736 rt_sigprocmask(SIG_BLOCK,0x00000000109fad20,0x000000001123ee88,8) = 0
736 getpid() = 736
736 tgkill(736,739,SIGALRM) = 0
 = -1 errno=4 (Interrupted system call)
--- SIGALRM {si_signo=SIGALRM, si_code=SI_TKILL, si_pid=736, si_uid=0} ---
0x48874a != 0x3c69e10
736 rt_sigprocmask(SIG_SETMASK,0x000000001123ee88,NULL,8) = 0
**
ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup: assertion failed: (cpu == current_cpu)
Bail out! ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup: assertion failed: (cpu == current_cpu)
0x48874a != 0x3c69e10
**
ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup: assertion failed: (cpu == current_cpu)
Bail out! ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup: assertion failed: (cpu == current_cpu)
#

The code fails either with or without -singlestep; the command line:

/usr/bin/qemu-x86_64 -L /opt/x86_64 -strace -singlestep /opt/x86_64/alarm.bin

The source code of QEMU 8.1.1 was modified with the patch "[PATCH] qemu/timer:
Don't use RDTSC on i486" [1], with a few ioctls added (not relevant), and
cpu_exec_longjmp_cleanup() now prints the current pointers of cpu and
current_cpu (the line "0x48874a != 0x3c69e10").
config.log (built as a part of buildroot, basically the minimal possible
configuration for running x86_64 on a 486):

# Configured with:
'/mnt/hd_8tb_p1/p1/home/crossgen/buildroot_486_2/output/build/qemu-8.1.1/configure'
'--prefix=/usr'
'--cross-prefix=/mnt/hd_8tb_p1/p1/home/crossgen/buildroot_486_2/output/host/bin/i486-buildroot-linux-gnu-'
'--audio-drv-list='
'--python=/mnt/hd_8tb_p1/p1/home/crossgen/buildroot_486_2/output/host/bin/python3'
'--ninja=/mnt/hd_8tb_p1/p1/home/crossgen/buildroot_486_2/output/host/bin/ninja'
'--disable-alsa' '--disable-bpf' '--disable-brlapi' '--disable-bsd-user'
'--disable-cap-ng' '--disable-capstone' '--disable-containers'
'--disable-coreaudio' '--disable-curl' '--disable-curses'
'--disable-dbus-display' '--disable-docs' '--disable-dsound' '--disable-hvf'
'--disable-jack' '--disable-libiscsi' '--disable-linux-aio'
'--disable-linux-io-uring' '--disable-malloc-trim' '--disable-membarrier'
'--disable-mpath' '--disable-netmap' '--disable-opengl' '--disable-oss'
'--disable-pa' '--disable-rbd' '--disable-sanitizers' '--disable-selinux'
'--disable-sparse' '--disable-strip' '--disable-vde' '--disable-vhost-crypto'
'--disable-vhost-user-blk-server' '--disable-virtfs' '--disable-whpx'
'--disable-xen' '--disable-attr' '--disable-kvm' '--disable-vhost-net'
'--disable-download' '--disable-hexagon-idef-parser' '--disable-system'
'--enable-linux-user' '--target-list=x86_64-linux-user' '--disable-vhost-user'
'--disable-slirp' '--disable-sdl' '--disable-fdt' '--enable-trace-backends=nop'
'--disable-tools' '--disable-guest-agent' '--disable-fuse'
'--disable-fuse-lseek' '--disable-seccomp' '--disable-libssh'
'--disable-libusb' '--disable-vnc' '--disable-nettle' '--disable-numa'
'--disable-pipewire' '--disable-spice' '--disable-usb-redir'
'--disable-install-blobs'

Emulation of the same x86_64 code with qemu 6.2.0 installed on another native
x86_64 machine works fine.
[1] https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg05387.html

Best regards,
Petr

On Sat, 25 Nov 2023 at 13:09, Petr Cvek <petrcvekcz@gmail.com> wrote:
>
> It seems there is a bug in SIGALRM handling when a 486 system emulates
> x86_64 code.

486 host is pretty well out of support currently. Can you reproduce
this on a less ancient host CPU type?

> ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup: assertion
> failed: (cpu == current_cpu)
> Bail out! ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup:
> assertion failed: (cpu == current_cpu)

What compiler version do you build QEMU with? That
assert is there because we have seen some buggy compilers
in the past which don't correctly preserve the variable
value as the setjmp/longjmp spec requires them to.

thanks
-- PMM

On 27. 11. 2023 at 10:37, Peter Maydell wrote:
> On Sat, 25 Nov 2023 at 13:09, Petr Cvek <petrcvekcz@gmail.com> wrote:
> >
> > It seems there is a bug in SIGALRM handling when a 486 system emulates
> > x86_64 code.
>
> 486 host is pretty well out of support currently. Can you reproduce
> this on a less ancient host CPU type?

It seems it only fails when the code is compiled for i486. QEMU built with the
same compiler with -march=i586 and above runs on the same physical hardware
without a problem. All -march= variants were executed on a Ryzen 3600.

> > ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup: assertion
> > failed: (cpu == current_cpu)
> > Bail out!
> > ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup: assertion
> > failed: (cpu == current_cpu)
> > 0x48874a != 0x3c69e10
> > **
> > ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup: assertion
> > failed: (cpu == current_cpu)
> > Bail out! ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup:
> > assertion failed: (cpu == current_cpu)
>
> What compiler version do you build QEMU with? That
> assert is there because we have seen some buggy compilers
> in the past which don't correctly preserve the variable
> value as the setjmp/longjmp spec requires them to.

The i486 and i586+ code variants were compiled with GCC 13.2.0 (more exactly,
the Slackware64 "current" multilib distribution).

The i486 binary which runs on the real 486 was also built with GCC 13.2.0,
installed as a part of the buildroot cross-compiler (about a two-week-old git
snapshot).

> thanks
> -- PMM

Best regards,
Petr

On 11/25/23 07:08, Petr Cvek wrote:
> ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup: assertion
> failed: (cpu == current_cpu)
> Bail out! ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup:
> assertion failed: (cpu == current_cpu)
> #
>
> The code fails either with or without -singlestep, the command line:
>
> /usr/bin/qemu-x86_64 -L /opt/x86_64 -strace -singlestep /opt/x86_64/alarm.bin
>
> Source code of QEMU 8.1.1 was modified with patch "[PATCH] qemu/timer:
> Don't use RDTSC on i486" [1], with a few ioctls added (not relevant) and
> cpu_exec_longjmp_cleanup() now prints the current pointers of cpu and
> current_cpu (line "0x48874a != 0x3c69e10").

If you try this again with 8.2-rc2, you should not see an assertion failure.
You should see instead

  QEMU internal SIGILL {code=ILLOPC, addr=0x12345678}

which I think more accurately summarizes the situation of attempting RDTSC on
hardware that does not support it.

r~

On 29. 11.
2023 at 15:25, Richard Henderson wrote:
> On 11/25/23 07:08, Petr Cvek wrote:
> > ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup: assertion
> > failed: (cpu == current_cpu)
> > Bail out! ERROR:../accel/tcg/cpu-exec.c:546:cpu_exec_longjmp_cleanup:
> > assertion failed: (cpu == current_cpu)
> > #
> >
> > The code fails either with or without -singlestep, the command line:
> >
> > /usr/bin/qemu-x86_64 -L /opt/x86_64 -strace -singlestep /opt/x86_64/alarm.bin
> >
> > Source code of QEMU 8.1.1 was modified with patch "[PATCH] qemu/timer:
> > Don't use RDTSC on i486" [1], with a few ioctls added (not relevant) and
> > cpu_exec_longjmp_cleanup() now prints the current pointers of cpu and
> > current_cpu (line "0x48874a != 0x3c69e10").
>
> If you try this again with 8.2-rc2, you should not see an assertion failure.
> You should see instead
>
>   QEMU internal SIGILL {code=ILLOPC, addr=0x12345678}
>
> which I think more accurately summarizes the situation of attempting RDTSC
> on hardware that does not support it.

Compilation of vanilla qemu v8.2.0-rc2 with -march=i486 by GCC 13.2.0 and
running the resulting binary on the Ryzen still leads to:

**
ERROR:../accel/tcg/cpu-exec.c:533:cpu_exec_longjmp_cleanup: assertion failed: (cpu == current_cpu)
Bail out! ERROR:../accel/tcg/cpu-exec.c:533:cpu_exec_longjmp_cleanup: assertion failed: (cpu == current_cpu)
Aborted

> r~

Petr