authorChristian Krinitsin <mail@krinitsin.com>2025-07-03 19:39:53 +0200
committerChristian Krinitsin <mail@krinitsin.com>2025-07-03 19:39:53 +0200
commitdee4dcba78baf712cab403d47d9db319ab7f95d6 (patch)
tree418478faf06786701a56268672f73d6b0b4eb239 /results/classifier/009/debug
parent4d9e26c0333abd39bdbd039dcdb30ed429c475ba (diff)
downloadqemu-analysis-dee4dcba78baf712cab403d47d9db319ab7f95d6.tar.gz
qemu-analysis-dee4dcba78baf712cab403d47d9db319ab7f95d6.zip
restructure results
Diffstat (limited to 'results/classifier/009/debug')
-rw-r--r--  results/classifier/009/debug/24190340  2066
-rw-r--r--  results/classifier/009/debug/36568044  4591
-rw-r--r--  results/classifier/009/debug/53568181    88
-rw-r--r--  results/classifier/009/debug/64571620   795
4 files changed, 0 insertions, 7540 deletions
diff --git a/results/classifier/009/debug/24190340 b/results/classifier/009/debug/24190340
deleted file mode 100644
index 712d8fc51..000000000
--- a/results/classifier/009/debug/24190340
+++ /dev/null
@@ -1,2066 +0,0 @@
-debug: 0.876
-permissions: 0.851
-device: 0.832
-performance: 0.832
-other: 0.811
-vnc: 0.808
-boot: 0.803
-semantic: 0.793
-KVM: 0.776
-graphic: 0.775
-files: 0.770
-PID: 0.762
-network: 0.723
-socket: 0.715
-
-[BUG, RFC] Block graph deadlock on job-dismiss
-
-Hi all,
-
-There's a bug in the block layer which leads to a block graph deadlock.
-Notably, it takes place when blockdev IO is processed within a separate
-iothread.
-
-This was initially caught by our tests, and I was able to reduce it to a
-relatively simple reproducer.  Such deadlocks are supposed to be covered
-by iotests/graph-changes-while-io, but this particular one isn't.
-
-Basically, the reproducer launches QEMU with a drive that has the
-'iothread' option set, creates a chain of 2 snapshots, launches a
-block-commit job for a snapshot and then dismisses the job, starting
-from the lower snapshot.  If the guest is issuing IO at the same time,
-there's a race in acquiring the block graph lock and a potential deadlock.
-
-Here's how it can be reproduced:
-
-1. Run QEMU:
-
-> SRCDIR=/path/to/srcdir
->
-> $SRCDIR/build/qemu-system-x86_64 -enable-kvm \
->     -machine q35 -cpu Nehalem \
->     -name guest=alma8-vm,debug-threads=on \
->     -m 2g -smp 2 \
->     -nographic -nodefaults \
->     -qmp unix:/var/run/alma8-qmp.sock,server=on,wait=off \
->     -serial unix:/var/run/alma8-serial.sock,server=on,wait=off \
->     -object iothread,id=iothread0 \
->     -blockdev node-name=disk,driver=qcow2,file.driver=file,file.filename=/path/to/img/alma8.qcow2 \
->     -device virtio-blk-pci,drive=disk,iothread=iothread0
-
-2. Launch IO (random reads) from within the guest:
-
-> nc -U /var/run/alma8-serial.sock
-> ...
-> [root@alma8-vm ~]# fio --name=randread --ioengine=libaio --direct=1 --bs=4k
-> --size=1G --numjobs=1 --time_based=1 --runtime=300 --group_reporting
-> --rw=randread --iodepth=1 --filename=/testfile
-
-3. Run snapshot creation & removal of the lower snapshot in a loop
-(script attached; a rough sketch of the kind of QMP sequence involved is
-given below):
-
-> while /bin/true ; do ./remove_lower_snap.sh ; done
-
-And then it occasionally hangs.
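-
-For illustration, here is a minimal sketch (Python, standard library only)
-of the kind of QMP sequence such a script might perform.  This is a
-hypothetical reconstruction based on the iotest posted later in this
-thread, not the attached remove_lower_snap.sh itself; the node names,
-image paths and socket path are assumptions taken from the command line
-above:
-
-import json
-import socket
-
-class QMP:
-    """Very small synchronous QMP client (sketch, not production code)."""
-    def __init__(self, path):
-        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
-        self.sock.connect(path)
-        self.f = self.sock.makefile('rw', encoding='utf-8')
-        json.loads(self.f.readline())          # server greeting
-        self.cmd('qmp_capabilities')
-
-    def _recv(self):
-        # QEMU terminates each JSON object with \r\n, so readline() is enough here.
-        return json.loads(self.f.readline())
-
-    def cmd(self, name, args=None):
-        msg = {'execute': name}
-        if args:
-            msg['arguments'] = args
-        self.f.write(json.dumps(msg) + '\n')
-        self.f.flush()
-        while True:
-            resp = self._recv()
-            if 'return' in resp or 'error' in resp:
-                return resp                     # skip interleaved events
-
-    def wait_job_status(self, status):
-        # Wait for a JOB_STATUS_CHANGE event announcing the given status.
-        while True:
-            msg = self._recv()
-            if msg.get('event') == 'JOB_STATUS_CHANGE' and \
-               msg['data']['status'] == status:
-                return
-
-qmp = QMP('/var/run/alma8-qmp.sock')
-
-# snap1.qcow2 / snap2.qcow2 must exist beforehand
-# (e.g. qemu-img create -f qcow2 /path/to/img/snapN.qcow2 1G).
-qmp.cmd('blockdev-add', {'driver': 'qcow2', 'node-name': 'snap1',
-                         'file': {'driver': 'file',
-                                  'filename': '/path/to/img/snap1.qcow2'}})
-qmp.cmd('blockdev-snapshot', {'node': 'disk', 'overlay': 'snap1'})
-qmp.cmd('blockdev-add', {'driver': 'qcow2', 'node-name': 'snap2',
-                         'file': {'driver': 'file',
-                                  'filename': '/path/to/img/snap2.qcow2'}})
-qmp.cmd('blockdev-snapshot', {'node': 'snap1', 'overlay': 'snap2'})
-
-# Commit the lower snapshot and dismiss the job while guest IO is running.
-qmp.cmd('block-commit', {'job-id': 'commit-snap1', 'device': 'snap2',
-                         'top-node': 'snap1', 'base-node': 'disk',
-                         'auto-finalize': True, 'auto-dismiss': False})
-qmp.wait_job_status('concluded')
-qmp.cmd('job-dismiss', {'id': 'commit-snap1'})  # the hang is triggered here
-
-The hang shows up at the final job-dismiss, which matches the backtrace
-below.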
-
-Note: I've tried bisecting this, and it looks like the deadlock occurs
-starting from the following commit:
-
-(BAD)  5bdbaebcce virtio: Re-enable notifications after drain
-(GOOD) c42c3833e0 virtio-scsi: Attach event vq notifier with no_poll
-
-On the latest v10.0.0 it does hang as well.
-
-
-Here's the backtrace of the main thread:
-
-> #0  0x00007fc547d427ce in __ppoll (fds=0x557eb79657b0, nfds=1, timeout=<optimized out>, sigmask=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:43
-> #1  0x0000557eb47d955c in qemu_poll_ns (fds=0x557eb79657b0, nfds=1, timeout=-1) at ../util/qemu-timer.c:329
-> #2  0x0000557eb47b2204 in fdmon_poll_wait (ctx=0x557eb76c5f20, ready_list=0x7ffd94b4edd8, timeout=-1) at ../util/fdmon-poll.c:79
-> #3  0x0000557eb47b1c45 in aio_poll (ctx=0x557eb76c5f20, blocking=true) at ../util/aio-posix.c:730
-> #4  0x0000557eb4621edd in bdrv_do_drained_begin (bs=0x557eb795e950, parent=0x0, poll=true) at ../block/io.c:378
-> #5  0x0000557eb4621f7b in bdrv_drained_begin (bs=0x557eb795e950) at ../block/io.c:391
-> #6  0x0000557eb45ec125 in bdrv_change_aio_context (bs=0x557eb795e950, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:7682
-> #7  0x0000557eb45ebf2b in bdrv_child_change_aio_context (c=0x557eb7964250, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:7608
-> #8  0x0000557eb45ec0c4 in bdrv_change_aio_context (bs=0x557eb79575e0, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:7668
-> #9  0x0000557eb45ebf2b in bdrv_child_change_aio_context (c=0x557eb7e59110, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:7608
-> #10 0x0000557eb45ec0c4 in bdrv_change_aio_context (bs=0x557eb7e51960, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:7668
-> #11 0x0000557eb45ebf2b in bdrv_child_change_aio_context (c=0x557eb814ed80, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:7608
-> #12 0x0000557eb45ee8e4 in child_job_change_aio_ctx (c=0x557eb7c9d3f0, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../blockjob.c:157
-> #13 0x0000557eb45ebe2d in bdrv_parent_change_aio_context (c=0x557eb7c9d3f0, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:7592
-> #14 0x0000557eb45ec06b in bdrv_change_aio_context (bs=0x557eb7d74310, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:7661
-> #15 0x0000557eb45dcd7e in bdrv_child_cb_change_aio_ctx (child=0x557eb8565af0, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:1234
-> #16 0x0000557eb45ebe2d in bdrv_parent_change_aio_context (c=0x557eb8565af0, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:7592
-> #17 0x0000557eb45ec06b in bdrv_change_aio_context (bs=0x557eb79575e0, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:7661
-> #18 0x0000557eb45ec1f3 in bdrv_try_change_aio_context (bs=0x557eb79575e0, ctx=0x557eb76c5f20, ignore_child=0x0, errp=0x0) at ../block.c:7715
-> #19 0x0000557eb45e1b15 in bdrv_root_unref_child (child=0x557eb7966f30) at ../block.c:3317
-> #20 0x0000557eb45eeaa8 in block_job_remove_all_bdrv (job=0x557eb7952800) at ../blockjob.c:209
-> #21 0x0000557eb45ee641 in block_job_free (job=0x557eb7952800) at ../blockjob.c:82
-> #22 0x0000557eb45f17af in job_unref_locked (job=0x557eb7952800) at ../job.c:474
-> #23 0x0000557eb45f257d in job_do_dismiss_locked (job=0x557eb7952800) at ../job.c:771
-> #24 0x0000557eb45f25fe in job_dismiss_locked (jobptr=0x7ffd94b4f400, errp=0x7ffd94b4f488) at ../job.c:783
-> #25 0x0000557eb45d8e84 in qmp_job_dismiss (id=0x557eb7aa42b0 "commit-snap1", errp=0x7ffd94b4f488) at ../job-qmp.c:138
-> #26 0x0000557eb472f6a3 in qmp_marshal_job_dismiss (args=0x7fc52c00a3b0, ret=0x7fc53c880da8, errp=0x7fc53c880da0) at qapi/qapi-commands-job.c:221
-> #27 0x0000557eb47a35f3 in do_qmp_dispatch_bh (opaque=0x7fc53c880e40) at ../qapi/qmp-dispatch.c:128
-> #28 0x0000557eb47d1cd2 in aio_bh_call (bh=0x557eb79568f0) at ../util/async.c:172
-> #29 0x0000557eb47d1df5 in aio_bh_poll (ctx=0x557eb76c0200) at ../util/async.c:219
-> #30 0x0000557eb47b12f3 in aio_dispatch (ctx=0x557eb76c0200) at ../util/aio-posix.c:436
-> #31 0x0000557eb47d2266 in aio_ctx_dispatch (source=0x557eb76c0200, callback=0x0, user_data=0x0) at ../util/async.c:361
-> #32 0x00007fc549232f4f in g_main_dispatch (context=0x557eb76c6430) at ../glib/gmain.c:3364
-> #33 g_main_context_dispatch (context=0x557eb76c6430) at ../glib/gmain.c:4079
-> #34 0x0000557eb47d3ab1 in glib_pollfds_poll () at ../util/main-loop.c:287
-> #35 0x0000557eb47d3b38 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:310
-> #36 0x0000557eb47d3c58 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:589
-> #37 0x0000557eb4218b01 in qemu_main_loop () at ../system/runstate.c:835
-> #38 0x0000557eb46df166 in qemu_default_main (opaque=0x0) at ../system/main.c:50
-> #39 0x0000557eb46df215 in main (argc=24, argv=0x7ffd94b4f8d8) at ../system/main.c:80
-And here's the coroutine trying to acquire the read lock:
-
-> (gdb) qemu coroutine reader_queue->entries.sqh_first
-> #0  0x0000557eb47d7068 in qemu_coroutine_switch (from_=0x557eb7aa48b0, to_=0x7fc537fff508, action=COROUTINE_YIELD) at ../util/coroutine-ucontext.c:321
-> #1  0x0000557eb47d4d4a in qemu_coroutine_yield () at ../util/qemu-coroutine.c:339
-> #2  0x0000557eb47d56c8 in qemu_co_queue_wait_impl (queue=0x557eb59954c0 <reader_queue>, lock=0x7fc53c57de50, flags=0) at ../util/qemu-coroutine-lock.c:60
-> #3  0x0000557eb461fea7 in bdrv_graph_co_rdlock () at ../block/graph-lock.c:231
-> #4  0x0000557eb460c81a in graph_lockable_auto_lock (x=0x7fc53c57dee3) at /home/root/src/qemu/master/include/block/graph-lock.h:213
-> #5  0x0000557eb460fa41 in blk_co_do_preadv_part (blk=0x557eb84c0810, offset=6890553344, bytes=4096, qiov=0x7fc530006988, qiov_offset=0, flags=BDRV_REQ_REGISTERED_BUF) at ../block/block-backend.c:1339
-> #6  0x0000557eb46104d7 in blk_aio_read_entry (opaque=0x7fc530003240) at ../block/block-backend.c:1619
-> #7  0x0000557eb47d6c40 in coroutine_trampoline (i0=-1213577040, i1=21886) at ../util/coroutine-ucontext.c:175
-> #8  0x00007fc547c2a360 in __start_context () at ../sysdeps/unix/sysv/linux/x86_64/__start_context.S:91
-> #9  0x00007ffd94b4ea40 in  ()
-> #10 0x0000000000000000 in  ()
-So it looks like the main thread is processing the job-dismiss request
-and is holding the write lock taken in block_job_remove_all_bdrv()
-(frame #20 above).  At the same time the iothread spawns a coroutine
-which performs an IO request.  Before the coroutine is spawned,
-blk_aio_prwv() increases the 'in_flight' counter of the BlockBackend.
-Then blk_co_do_preadv_part() (frame #5) tries to acquire the read
-lock.  But the main thread never releases the lock, because
-blk_root_drained_poll() keeps returning true while blk->in_flight > 0.
-Hence the deadlock.
-
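-To make that interleaving easier to follow, here is a small abstract model
-of the described pattern using plain Python threads.  This only
-illustrates the locking/drain interaction sketched above; it is not QEMU
-code, and all names in it are made up:
-
-import threading
-import time
-
-class GraphLock:
-    """Toy reader/writer lock standing in for the block graph lock."""
-    def __init__(self):
-        self.cond = threading.Condition()
-        self.readers = 0
-        self.writer = False
-
-    def rdlock(self):
-        with self.cond:
-            while self.writer:              # readers wait while a writer holds the lock
-                self.cond.wait()
-            self.readers += 1
-
-    def rdunlock(self):
-        with self.cond:
-            self.readers -= 1
-            self.cond.notify_all()
-
-    def wrlock(self):
-        with self.cond:
-            while self.writer or self.readers:
-                self.cond.wait()
-            self.writer = True
-
-    def wrunlock(self):
-        with self.cond:
-            self.writer = False
-            self.cond.notify_all()
-
-graph_lock = GraphLock()
-in_flight = 0                               # models blk->in_flight
-counter = threading.Lock()
-
-def iothread_request():
-    """Models the IO path: in_flight is bumped before the rdlock is taken."""
-    global in_flight
-    with counter:
-        in_flight += 1
-    graph_lock.rdlock()                     # blocks: the main thread holds the wrlock
-    # ... the actual read would happen here ...
-    graph_lock.rdunlock()
-    with counter:
-        in_flight -= 1
-
-# Main thread: models job-dismiss -> block_job_remove_all_bdrv() -> drain.
-graph_lock.wrlock()                         # write lock is taken first
-threading.Thread(target=iothread_request, daemon=True).start()
-time.sleep(0.1)                             # let the request bump in_flight and block
-
-deadline = time.time() + 2
-while True:                                 # models the drained_begin poll loop
-    with counter:
-        if in_flight == 0:
-            break
-    if time.time() > deadline:
-        print("deadlock: drain waits for in_flight == 0, "
-              "reader waits for the write lock to be released")
-        break
-    time.sleep(0.01)
-
-Running this prints the "deadlock" message: the poll can only finish once
-the reader drops in_flight, and the reader can only proceed once the write
-lock is released, which is exactly the cycle described above.
-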
-Any comments and suggestions on the subject are welcomed.  Thanks!
-
-Andrey
-Attachment: remove_lower_snap.sh (application/shellscript)
-
-On 4/24/25 8:32 PM, Andrey Drobyshev wrote:
-> Hi all,
->
-> There's a bug in the block layer which leads to a block graph deadlock.
-> Notably, it takes place when blockdev IO is processed within a separate
-> iothread.
->
-> [...]
->
-> Here's how it can be reproduced:
->
-> [...]
-I took a closer look at iotests/graph-changes-while-io, and have managed
-to reproduce the same deadlock in a much simpler setup, without a guest.
-
-1. Run QSD:
-
-> ./build/storage-daemon/qemu-storage-daemon --object iothread,id=iothread0 \
->     --blockdev null-co,node-name=node0,read-zeroes=true \
->     --nbd-server addr.type=unix,addr.path=/var/run/qsd_nbd.sock \
->     --export nbd,id=exp0,node-name=node0,iothread=iothread0,fixed-iothread=true,writable=true \
->     --chardev socket,id=qmp-sock,path=/var/run/qsd_qmp.sock,server=on,wait=off \
->     --monitor chardev=qmp-sock
-
-2. Launch IO:
-
-> qemu-img bench -f raw -c 2000000 'nbd+unix:///node0?socket=/var/run/qsd_nbd.sock'
-
-3. Add 2 snapshots and remove the lower one in a loop (script attached):
-
-> while /bin/true ; do ./rls_qsd.sh ; done
-
-And then it hangs.
-
-I'll also send a patch adding a corresponding test case directly to the
-iotests.
-
-This reproducer seems to start hanging with Fiona's commit
-67446e605dc ("blockjob: drop AioContext lock before calling
-bdrv_graph_wrlock()").  AioContext locks were dropped entirely later on
-in Stefan's commit b49f4755c7 ("block: remove AioContext locking"), but
-the problem remains.
-
-Andrey
-Attachment: rls_qsd.sh (application/shellscript)
-
-From: Andrey Drobyshev <andrey.drobyshev@virtuozzo.com>
-
-This case catches a potential deadlock which takes place when job-dismiss
-is issued while I/O requests are being processed in a separate iothread.
-
-See
-https://mail.gnu.org/archive/html/qemu-devel/2025-04/msg04421.html
-Signed-off-by: Andrey Drobyshev <andrey.drobyshev@virtuozzo.com>
----
- .../qemu-iotests/tests/graph-changes-while-io | 101 ++++++++++++++++--
- .../tests/graph-changes-while-io.out          |   4 +-
- 2 files changed, 96 insertions(+), 9 deletions(-)
-
-diff --git a/tests/qemu-iotests/tests/graph-changes-while-io b/tests/qemu-iotests/tests/graph-changes-while-io
-index 194fda500e..e30f823da4 100755
---- a/tests/qemu-iotests/tests/graph-changes-while-io
-+++ b/tests/qemu-iotests/tests/graph-changes-while-io
-@@ -27,6 +27,8 @@ from iotests import imgfmt, qemu_img, qemu_img_create, qemu_io, \
- 
- 
- top = os.path.join(iotests.test_dir, 'top.img')
-+snap1 = os.path.join(iotests.test_dir, 'snap1.img')
-+snap2 = os.path.join(iotests.test_dir, 'snap2.img')
- nbd_sock = os.path.join(iotests.sock_dir, 'nbd.sock')
- 
- 
-@@ -58,6 +60,15 @@ class TestGraphChangesWhileIO(QMPTestCase):
-     def tearDown(self) -> None:
-         self.qsd.stop()
- 
-+    def _wait_for_blockjob(self, status) -> None:
-+        done = False
-+        while not done:
-+            for event in self.qsd.get_qmp().get_events(wait=10.0):
-+                if event['event'] != 'JOB_STATUS_CHANGE':
-+                    continue
-+                if event['data']['status'] == status:
-+                    done = True
-+
-     def test_blockdev_add_while_io(self) -> None:
-         # Run qemu-img bench in the background
-         bench_thr = Thread(target=do_qemu_img_bench)
-@@ -116,13 +127,89 @@ class TestGraphChangesWhileIO(QMPTestCase):
-                 'device': 'job0',
-             })
- 
--            cancelled = False
--            while not cancelled:
--                for event in self.qsd.get_qmp().get_events(wait=10.0):
--                    if event['event'] != 'JOB_STATUS_CHANGE':
--                        continue
--                    if event['data']['status'] == 'null':
--                        cancelled = True
-+            self._wait_for_blockjob('null')
-+
-+        bench_thr.join()
-+
-+    def test_remove_lower_snapshot_while_io(self) -> None:
-+        # Run qemu-img bench in the background
-+        bench_thr = Thread(target=do_qemu_img_bench, args=(100000, ))
-+        bench_thr.start()
-+
-+        # While I/O is performed on 'node0' node, consequently add 2 snapshots
-+        # on top of it, then remove (commit) them starting from lower one.
-+        while bench_thr.is_alive():
-+            # Recreate snapshot images on every iteration
-+            qemu_img_create('-f', imgfmt, snap1, '1G')
-+            qemu_img_create('-f', imgfmt, snap2, '1G')
-+
-+            self.qsd.cmd('blockdev-add', {
-+                'driver': imgfmt,
-+                'node-name': 'snap1',
-+                'file': {
-+                    'driver': 'file',
-+                    'filename': snap1
-+                }
-+            })
-+
-+            self.qsd.cmd('blockdev-snapshot', {
-+                'node': 'node0',
-+                'overlay': 'snap1',
-+            })
-+
-+            self.qsd.cmd('blockdev-add', {
-+                'driver': imgfmt,
-+                'node-name': 'snap2',
-+                'file': {
-+                    'driver': 'file',
-+                    'filename': snap2
-+                }
-+            })
-+
-+            self.qsd.cmd('blockdev-snapshot', {
-+                'node': 'snap1',
-+                'overlay': 'snap2',
-+            })
-+
-+            self.qsd.cmd('block-commit', {
-+                'job-id': 'commit-snap1',
-+                'device': 'snap2',
-+                'top-node': 'snap1',
-+                'base-node': 'node0',
-+                'auto-finalize': True,
-+                'auto-dismiss': False,
-+            })
-+
-+            self._wait_for_blockjob('concluded')
-+            self.qsd.cmd('job-dismiss', {
-+                'id': 'commit-snap1',
-+            })
-+
-+            self.qsd.cmd('block-commit', {
-+                'job-id': 'commit-snap2',
-+                'device': 'snap2',
-+                'top-node': 'snap2',
-+                'base-node': 'node0',
-+                'auto-finalize': True,
-+                'auto-dismiss': False,
-+            })
-+
-+            self._wait_for_blockjob('ready')
-+            self.qsd.cmd('job-complete', {
-+                'id': 'commit-snap2',
-+            })
-+
-+            self._wait_for_blockjob('concluded')
-+            self.qsd.cmd('job-dismiss', {
-+                'id': 'commit-snap2',
-+            })
-+
-+            self.qsd.cmd('blockdev-del', {
-+                'node-name': 'snap1'
-+            })
-+            self.qsd.cmd('blockdev-del', {
-+                'node-name': 'snap2'
-+            })
- 
-         bench_thr.join()
- 
-diff --git a/tests/qemu-iotests/tests/graph-changes-while-io.out b/tests/qemu-iotests/tests/graph-changes-while-io.out
-index fbc63e62f8..8d7e996700 100644
---- a/tests/qemu-iotests/tests/graph-changes-while-io.out
-+++ b/tests/qemu-iotests/tests/graph-changes-while-io.out
-@@ -1,5 +1,5 @@
--..
-+...
- ----------------------------------------------------------------------
--Ran 2 tests
-+Ran 3 tests
- 
- OK
--- 
-2.43.5
-
-On 24.04.25 at 19:32, Andrey Drobyshev wrote:
-> So it looks like the main thread is processing the job-dismiss request
-> and is holding the write lock taken in block_job_remove_all_bdrv()
-> (frame #20 above).  At the same time the iothread spawns a coroutine
-> which performs an IO request.  Before the coroutine is spawned,
-> blk_aio_prwv() increases the 'in_flight' counter of the BlockBackend.
-> Then blk_co_do_preadv_part() (frame #5) tries to acquire the read
-> lock.  But the main thread never releases the lock, because
-> blk_root_drained_poll() keeps returning true while blk->in_flight > 0.
-> Hence the deadlock.
-And for the IO test you provided, it's client->nb_requests that behaves
-similarly to blk->in_flight here.
-
-The issue also reproduces easily when issuing the following QMP command
-in a loop while doing IO on a device:
-
-> void qmp_block_locked_drain(const char *node_name, Error **errp)
-> {
->     BlockDriverState *bs;
->
->     bs = bdrv_find_node(node_name);
->     if (!bs) {
->         error_setg(errp, "node not found");
->         return;
->     }
->
->     bdrv_graph_wrlock();
->     bdrv_drained_begin(bs);
->     bdrv_drained_end(bs);
->     bdrv_graph_wrunlock();
-> }
-It seems like either it would be necessary to require:
-1. not draining inside an exclusively locked section
-or
-2. making sure that variables used by drained_poll routines are only set
-while holding the reader lock
-?
-
-Those seem to require rather involved changes, so a third option might
-be to make draining inside an exclusively locked section possible, by
-embedding such locked sections in a drained section:
-
-> diff --git a/blockjob.c b/blockjob.c
-> index 32007f31a9..9b2f3b3ea9 100644
-> --- a/blockjob.c
-> +++ b/blockjob.c
-> @@ -198,6 +198,7 @@ void block_job_remove_all_bdrv(BlockJob *job)
->       * one to make sure that such a concurrent access does not attempt
->       * to process an already freed BdrvChild.
->       */
-> +    bdrv_drain_all_begin();
->      bdrv_graph_wrlock();
->      while (job->nodes) {
->          GSList *l = job->nodes;
-> @@ -211,6 +212,7 @@ void block_job_remove_all_bdrv(BlockJob *job)
->          g_slist_free_1(l);
->      }
->      bdrv_graph_wrunlock();
-> +    bdrv_drain_all_end();
->  }
->
->  bool block_job_has_bdrv(BlockJob *job, BlockDriverState *bs)
-This seems to fix the issue at hand. I can send a patch if this is
-considered an acceptable approach.
-
-Best Regards,
-Fiona
-
-On 4/30/25 11:47 AM, Fiona Ebner wrote:
-> [...]
->
-> Those seem to require rather involved changes, so a third option might
-> be to make draining inside an exclusively locked section possible, by
-> embedding such locked sections in a drained section:
->
-> [...]
->
-> This seems to fix the issue at hand. I can send a patch if this is
-> considered an acceptable approach.
->
-> Best Regards,
-> Fiona
-
-Hello Fiona,
-
-Thanks for looking into it.  I've tried your 3rd option above and can
-confirm it does fix the deadlock; at least I can't reproduce it anymore.
-Other iotests don't seem to break either.  So I personally am fine with
-that patch.  It would be nice to hear a word from the maintainers,
-though, on whether there are any caveats with such an approach.
-
-Andrey
-
-On Wed, Apr 30, 2025 at 10:11 AM Andrey Drobyshev
-<andrey.drobyshev@virtuozzo.com> wrote:
-> [...]
->
-> It would be nice to hear a word from the maintainers, though, on
-> whether there are any caveats with such an approach.
-Kevin is aware of this thread but it's a public holiday tomorrow so it
-may be a little longer.
-
-Stefan
-
-On 24.04.2025 at 19:32, Andrey Drobyshev wrote:
-> [...]
->
-> So it looks like the main thread is processing the job-dismiss request
-> and is holding the write lock taken in block_job_remove_all_bdrv()
-> (frame #20 above).  At the same time the iothread spawns a coroutine
-> which performs an IO request.  Before the coroutine is spawned,
-> blk_aio_prwv() increases the 'in_flight' counter of the BlockBackend.
-> Then blk_co_do_preadv_part() (frame #5) tries to acquire the read
-> lock.  But the main thread never releases the lock, because
-> blk_root_drained_poll() keeps returning true while blk->in_flight > 0.
-> Hence the deadlock.
->
-> Any comments and suggestions on the subject are welcomed.  Thanks!
-I think this is what the blk_wait_while_drained() call was supposed to
-address in blk_co_do_preadv_part(). However, with the use of multiple
-I/O threads, this is racy.
-
-Do you think that in your case we hit the small race window between the
-checks in blk_wait_while_drained() and GRAPH_RDLOCK_GUARD()? Or is there
-another reason why blk_wait_while_drained() didn't do its job?
-
-Kevin
-
-On 5/2/25 19:34, Kevin Wolf wrote:
-> On 24.04.2025 at 19:32, Andrey Drobyshev wrote:
-> > [...]
->
-> I think this is what the blk_wait_while_drained() call was supposed to
-> address in blk_co_do_preadv_part(). However, with the use of multiple
-> I/O threads, this is racy.
->
-> Do you think that in your case we hit the small race window between the
-> checks in blk_wait_while_drained() and GRAPH_RDLOCK_GUARD()? Or is there
-> another reason why blk_wait_while_drained() didn't do its job?
->
-> Kevin
-In my opinion there is a rather big race window.  The main thread has
-taken the graph write lock.  After that, another coroutine is stalled
-within GRAPH_RDLOCK_GUARD() since there is no drain at that moment, and
-only after that does the main thread start the drain.  That is why
-Fiona's idea looks workable.  Though this would mean that normally we
-should always do that at the moment we acquire the write lock, maybe
-even inside this function.
-
-Den
-
-On 02.05.2025 at 19:52, Denis V. Lunev wrote:
->
-On 5/2/25 19:34, Kevin Wolf wrote:
->
-> Am 24.04.2025 um 19:32 hat Andrey Drobyshev geschrieben:
->
-> > Hi all,
->
-> >
->
-> > There's a bug in block layer which leads to block graph deadlock.
->
-> > Notably, it takes place when blockdev IO is processed within a separate
->
-> > iothread.
->
-> >
->
-> > This was initially caught by our tests, and I was able to reduce it to a
->
-> > relatively simple reproducer.  Such deadlocks are probably supposed to
->
-> > be covered in iotests/graph-changes-while-io, but this deadlock isn't.
->
-> >
->
-> > Basically what the reproducer does is launches QEMU with a drive having
->
-> > 'iothread' option set, creates a chain of 2 snapshots, launches
->
-> > block-commit job for a snapshot and then dismisses the job, starting
->
-> > from the lower snapshot.  If the guest is issuing IO at the same time,
->
-> > there's a race in acquiring block graph lock and a potential deadlock.
->
-> >
->
-> > Here's how it can be reproduced:
->
-> >
->
-> > 1. Run QEMU:
->
-> > > SRCDIR=/path/to/srcdir
->
-> > > $SRCDIR/build/qemu-system-x86_64 -enable-kvm \
->
-> > >    -machine q35 -cpu Nehalem \
->
-> > >    -name guest=alma8-vm,debug-threads=on \
->
-> > >    -m 2g -smp 2 \
->
-> > >    -nographic -nodefaults \
->
-> > >    -qmp unix:/var/run/alma8-qmp.sock,server=on,wait=off \
->
-> > >    -serial unix:/var/run/alma8-serial.sock,server=on,wait=off \
->
-> > >    -object iothread,id=iothread0 \
->
-> > >    -blockdev
->
-> > > node-name=disk,driver=qcow2,file.driver=file,file.filename=/path/to/img/alma8.qcow2
->
-> > >  \
->
-> > >    -device virtio-blk-pci,drive=disk,iothread=iothread0
->
-> > 2. Launch IO (random reads) from within the guest:
->
-> > > nc -U /var/run/alma8-serial.sock
->
-> > > ...
->
-> > > [root@alma8-vm ~]# fio --name=randread --ioengine=libaio --direct=1
->
-> > > --bs=4k --size=1G --numjobs=1 --time_based=1 --runtime=300
->
-> > > --group_reporting --rw=randread --iodepth=1 --filename=/testfile
->
-> > 3. Run snapshots creation & removal of lower snapshot operation in a
->
-> > loop (script attached):
->
-> > > while /bin/true ; do ./remove_lower_snap.sh ; done
->
-> > And then it occasionally hangs.
->
-> >
->
-> > Note: I've tried bisecting this, and looks like deadlock occurs starting
->
-> > from the following commit:
->
-> >
->
-> > (BAD)  5bdbaebcce virtio: Re-enable notifications after drain
->
-> > (GOOD) c42c3833e0 virtio-scsi: Attach event vq notifier with no_poll
->
-> >
->
-> > On the latest v10.0.0 it does hang as well.
->
-> >
->
-> >
->
-> > Here's backtrace of the main thread:
->
-> >
->
-> > > #0  0x00007fc547d427ce in __ppoll (fds=0x557eb79657b0, nfds=1,
->
-> > > timeout=<optimized out>, sigmask=0x0) at
->
-> > > ../sysdeps/unix/sysv/linux/ppoll.c:43
->
-> > > #1  0x0000557eb47d955c in qemu_poll_ns (fds=0x557eb79657b0, nfds=1,
->
-> > > timeout=-1) at ../util/qemu-timer.c:329
->
-> > > #2  0x0000557eb47b2204 in fdmon_poll_wait (ctx=0x557eb76c5f20,
->
-> > > ready_list=0x7ffd94b4edd8, timeout=-1) at ../util/fdmon-poll.c:79
->
-> > > #3  0x0000557eb47b1c45 in aio_poll (ctx=0x557eb76c5f20, blocking=true)
->
-> > > at ../util/aio-posix.c:730
->
-> > > #4  0x0000557eb4621edd in bdrv_do_drained_begin (bs=0x557eb795e950,
->
-> > > parent=0x0, poll=true) at ../block/io.c:378
->
-> > > #5  0x0000557eb4621f7b in bdrv_drained_begin (bs=0x557eb795e950) at
->
-> > > ../block/io.c:391
->
-> > > #6  0x0000557eb45ec125 in bdrv_change_aio_context (bs=0x557eb795e950,
->
-> > > ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...},
->
-> > > tran=0x557eb7a87160, errp=0x0)
->
-> > >      at ../block.c:7682
->
-> > > #7  0x0000557eb45ebf2b in bdrv_child_change_aio_context
->
-> > > (c=0x557eb7964250, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...},
->
-> > > tran=0x557eb7a87160, errp=0x0)
->
-> > >      at ../block.c:7608
->
-> > > #8  0x0000557eb45ec0c4 in bdrv_change_aio_context (bs=0x557eb79575e0,
->
-> > > ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...},
->
-> > > tran=0x557eb7a87160, errp=0x0)
->
-> > >      at ../block.c:7668
->
-> > > #9  0x0000557eb45ebf2b in bdrv_child_change_aio_context
->
-> > > (c=0x557eb7e59110, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...},
->
-> > > tran=0x557eb7a87160, errp=0x0)
->
-> > >      at ../block.c:7608
->
-> > > #10 0x0000557eb45ec0c4 in bdrv_change_aio_context (bs=0x557eb7e51960,
->
-> > > ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...},
->
-> > > tran=0x557eb7a87160, errp=0x0)
->
-> > >      at ../block.c:7668
->
-> > > #11 0x0000557eb45ebf2b in bdrv_child_change_aio_context
->
-> > > (c=0x557eb814ed80, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...},
->
-> > > tran=0x557eb7a87160, errp=0x0)
->
-> > >      at ../block.c:7608
->
-> > > #12 0x0000557eb45ee8e4 in child_job_change_aio_ctx (c=0x557eb7c9d3f0,
->
-> > > ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...},
->
-> > > tran=0x557eb7a87160, errp=0x0)
->
-> > >      at ../blockjob.c:157
->
-> > > #13 0x0000557eb45ebe2d in bdrv_parent_change_aio_context
->
-> > > (c=0x557eb7c9d3f0, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...},
->
-> > > tran=0x557eb7a87160, errp=0x0)
->
-> > >      at ../block.c:7592
->
-> > > #14 0x0000557eb45ec06b in bdrv_change_aio_context (bs=0x557eb7d74310,
->
-> > > ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...},
->
-> > > tran=0x557eb7a87160, errp=0x0)
->
-> > >      at ../block.c:7661
->
-> > > #15 0x0000557eb45dcd7e in bdrv_child_cb_change_aio_ctx
->
-> > >      (child=0x557eb8565af0, ctx=0x557eb76c5f20, visited=0x557eb7e06b60
->
-> > > = {...}, tran=0x557eb7a87160, errp=0x0) at ../block.c:1234
->
-> > > #16 0x0000557eb45ebe2d in bdrv_parent_change_aio_context
->
-> > > (c=0x557eb8565af0, ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...},
->
-> > > tran=0x557eb7a87160, errp=0x0)
->
-> > >      at ../block.c:7592
->
-> > > #17 0x0000557eb45ec06b in bdrv_change_aio_context (bs=0x557eb79575e0,
->
-> > > ctx=0x557eb76c5f20, visited=0x557eb7e06b60 = {...},
->
-> > > tran=0x557eb7a87160, errp=0x0)
->
-> > >      at ../block.c:7661
->
-> > > #18 0x0000557eb45ec1f3 in bdrv_try_change_aio_context
->
-> > > (bs=0x557eb79575e0, ctx=0x557eb76c5f20, ignore_child=0x0, errp=0x0) at ../block.c:7715
-> > > #19 0x0000557eb45e1b15 in bdrv_root_unref_child (child=0x557eb7966f30) at ../block.c:3317
-> > > #20 0x0000557eb45eeaa8 in block_job_remove_all_bdrv (job=0x557eb7952800) at ../blockjob.c:209
-> > > #21 0x0000557eb45ee641 in block_job_free (job=0x557eb7952800) at ../blockjob.c:82
-> > > #22 0x0000557eb45f17af in job_unref_locked (job=0x557eb7952800) at ../job.c:474
-> > > #23 0x0000557eb45f257d in job_do_dismiss_locked (job=0x557eb7952800) at ../job.c:771
-> > > #24 0x0000557eb45f25fe in job_dismiss_locked (jobptr=0x7ffd94b4f400, errp=0x7ffd94b4f488) at ../job.c:783
-> > > #25 0x0000557eb45d8e84 in qmp_job_dismiss (id=0x557eb7aa42b0 "commit-snap1", errp=0x7ffd94b4f488) at ../job-qmp.c:138
-> > > #26 0x0000557eb472f6a3 in qmp_marshal_job_dismiss (args=0x7fc52c00a3b0, ret=0x7fc53c880da8, errp=0x7fc53c880da0) at qapi/qapi-commands-job.c:221
-> > > #27 0x0000557eb47a35f3 in do_qmp_dispatch_bh (opaque=0x7fc53c880e40) at ../qapi/qmp-dispatch.c:128
-> > > #28 0x0000557eb47d1cd2 in aio_bh_call (bh=0x557eb79568f0) at ../util/async.c:172
-> > > #29 0x0000557eb47d1df5 in aio_bh_poll (ctx=0x557eb76c0200) at ../util/async.c:219
-> > > #30 0x0000557eb47b12f3 in aio_dispatch (ctx=0x557eb76c0200) at ../util/aio-posix.c:436
-> > > #31 0x0000557eb47d2266 in aio_ctx_dispatch (source=0x557eb76c0200, callback=0x0, user_data=0x0) at ../util/async.c:361
-> > > #32 0x00007fc549232f4f in g_main_dispatch (context=0x557eb76c6430) at ../glib/gmain.c:3364
-> > > #33 g_main_context_dispatch (context=0x557eb76c6430) at ../glib/gmain.c:4079
-> > > #34 0x0000557eb47d3ab1 in glib_pollfds_poll () at ../util/main-loop.c:287
-> > > #35 0x0000557eb47d3b38 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:310
-> > > #36 0x0000557eb47d3c58 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:589
-> > > #37 0x0000557eb4218b01 in qemu_main_loop () at ../system/runstate.c:835
-> > > #38 0x0000557eb46df166 in qemu_default_main (opaque=0x0) at ../system/main.c:50
-> > > #39 0x0000557eb46df215 in main (argc=24, argv=0x7ffd94b4f8d8) at ../system/main.c:80
-> >
-> > And here's coroutine trying to acquire read lock:
-> >
-> > > (gdb) qemu coroutine reader_queue->entries.sqh_first
-> > > #0  0x0000557eb47d7068 in qemu_coroutine_switch (from_=0x557eb7aa48b0, to_=0x7fc537fff508, action=COROUTINE_YIELD) at ../util/coroutine-ucontext.c:321
-> > > #1  0x0000557eb47d4d4a in qemu_coroutine_yield () at ../util/qemu-coroutine.c:339
-> > > #2  0x0000557eb47d56c8 in qemu_co_queue_wait_impl (queue=0x557eb59954c0 <reader_queue>, lock=0x7fc53c57de50, flags=0) at ../util/qemu-coroutine-lock.c:60
-> > > #3  0x0000557eb461fea7 in bdrv_graph_co_rdlock () at ../block/graph-lock.c:231
-> > > #4  0x0000557eb460c81a in graph_lockable_auto_lock (x=0x7fc53c57dee3) at /home/root/src/qemu/master/include/block/graph-lock.h:213
-> > > #5  0x0000557eb460fa41 in blk_co_do_preadv_part (blk=0x557eb84c0810, offset=6890553344, bytes=4096, qiov=0x7fc530006988, qiov_offset=0, flags=BDRV_REQ_REGISTERED_BUF) at ../block/block-backend.c:1339
-> > > #6  0x0000557eb46104d7 in blk_aio_read_entry (opaque=0x7fc530003240) at ../block/block-backend.c:1619
-> > > #7  0x0000557eb47d6c40 in coroutine_trampoline (i0=-1213577040, i1=21886) at ../util/coroutine-ucontext.c:175
-> > > #8  0x00007fc547c2a360 in __start_context () at ../sysdeps/unix/sysv/linux/x86_64/__start_context.S:91
-> > > #9  0x00007ffd94b4ea40 in  ()
-> > > #10 0x0000000000000000 in  ()
-> >
-> > So it looks like main thread is processing job-dismiss request and is
-> > holding write lock taken in block_job_remove_all_bdrv() (frame #20
-> > above).  At the same time iothread spawns a coroutine which performs IO
-> > request.  Before the coroutine is spawned, blk_aio_prwv() increases
-> > 'in_flight' counter for Blk.  Then blk_co_do_preadv_part() (frame #5) is
-> > trying to acquire the read lock.  But main thread isn't releasing the
-> > lock as blk_root_drained_poll() returns true since blk->in_flight > 0.
-> > Here's the deadlock.
-> >
-> > Any comments and suggestions on the subject are welcomed.  Thanks!
->
-> I think this is what the blk_wait_while_drained() call was supposed to
-> address in blk_co_do_preadv_part(). However, with the use of multiple
-> I/O threads, this is racy.
->
-> Do you think that in your case we hit the small race window between the
-> checks in blk_wait_while_drained() and GRAPH_RDLOCK_GUARD()? Or is there
-> another reason why blk_wait_while_drained() didn't do its job?
->
-> In my opinion there is a very big race window.  The main thread has
-> taken the graph write lock.  After that, another coroutine is stalled
-> within GRAPH_RDLOCK_GUARD() as there is no drain at that moment, and
-> only after that does the main thread start the drain.
-You're right, I confused taking the write lock with draining there.
-
-> That is why Fiona's idea looks workable.  Though this would mean
-> that normally we should always do that at the moment when we acquire
-> the write lock.  Maybe even inside this function.
-I actually see now that not all of my graph locking patches were merged.
-At least I did have the thought that bdrv_drained_begin() must be marked
-GRAPH_UNLOCKED because it polls. That means that calling it from inside
-bdrv_try_change_aio_context() is actually forbidden (and that's the part
-I didn't see back then because it doesn't have TSA annotations).
-
-If you refactor the code to move the drain out to before the lock is
-taken, I think you end up with Fiona's patch, except you'll remove the
-forbidden inner drain and add more annotations for some functions and
-clarify the rules around them. I don't know, but I wouldn't be surprised
-if along the process we find other bugs, too.
-
-So Fiona's drain looks right to me, but we should probably approach it
-more systematically.
-
-Kevin
-
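-For reference, a minimal sketch of the ordering discussed above (drain first,
-then take the graph write lock), assuming the current no-argument
-bdrv_graph_wrlock()/bdrv_graph_wrunlock() API and with bs standing for the
-BlockDriverState being modified; this only illustrates the locking order and
-is not Fiona's actual patch:
-
-    /* Quiesce I/O while no lock is held, so that no new read request can
-     * end up waiting on the graph rdlock with blk->in_flight already
-     * elevated. */
-    bdrv_drained_begin(bs);
-    bdrv_graph_wrlock();
-    /* ... detach children / modify graph edges here ... */
-    bdrv_graph_wrunlock();
-    bdrv_drained_end(bs);
-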
diff --git a/results/classifier/009/debug/36568044 b/results/classifier/009/debug/36568044
deleted file mode 100644
index cdc1d6312..000000000
--- a/results/classifier/009/debug/36568044
+++ /dev/null
@@ -1,4591 +0,0 @@
-debug: 0.939
-device: 0.931
-graphic: 0.931
-other: 0.930
-permissions: 0.927
-PID: 0.926
-semantic: 0.923
-performance: 0.920
-KVM: 0.914
-socket: 0.907
-vnc: 0.905
-network: 0.904
-boot: 0.895
-files: 0.884
-
-[BUG, RFC] cpr-transfer: qxl guest driver crashes after migration
-
-Hi all,
-
-We've been experimenting with cpr-transfer migration mode recently and
-have discovered the following issue with the guest QXL driver:
-
-Run migration source:
-> EMULATOR=/path/to/emulator
-> ROOTFS=/path/to/image
-> QMPSOCK=/var/run/alma8qmp-src.sock
->
-> $EMULATOR -enable-kvm \
->     -machine q35 \
->     -cpu host -smp 2 -m 2G \
->     -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/ram0,share=on \
->     -machine memory-backend=ram0 \
->     -machine aux-ram-share=on \
->     -drive file=$ROOTFS,media=disk,if=virtio \
->     -qmp unix:$QMPSOCK,server=on,wait=off \
->     -nographic \
->     -device qxl-vga
-Run migration target:
-> EMULATOR=/path/to/emulator
-> ROOTFS=/path/to/image
-> QMPSOCK=/var/run/alma8qmp-dst.sock
->
-> $EMULATOR -enable-kvm \
->     -machine q35 \
->     -cpu host -smp 2 -m 2G \
->     -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/ram0,share=on \
->     -machine memory-backend=ram0 \
->     -machine aux-ram-share=on \
->     -drive file=$ROOTFS,media=disk,if=virtio \
->     -qmp unix:$QMPSOCK,server=on,wait=off \
->     -nographic \
->     -device qxl-vga \
->     -incoming tcp:0:44444 \
->     -incoming '{"channel-type": "cpr", "addr": { "transport": "socket", "type": "unix", "path": "/var/run/alma8cpr-dst.sock"}}'
-Launch the migration:
-> QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
-> QMPSOCK=/var/run/alma8qmp-src.sock
->
-> $QMPSHELL -p $QMPSOCK <<EOF
->     migrate-set-parameters mode=cpr-transfer
->     migrate channels=[{"channel-type":"main","addr":{"transport":"socket","type":"inet","host":"0","port":"44444"}},{"channel-type":"cpr","addr":{"transport":"socket","type":"unix","path":"/var/run/alma8cpr-dst.sock"}}]
-> EOF
-Then, after a while, the QXL guest driver on the target crashes, spewing the
-following messages:
-> [   73.962002] [TTM] Buffer eviction failed
-> [   73.962072] qxl 0000:00:02.0: object_init failed for (3149824, 0x00000001)
-> [   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to allocate VRAM BO
-That seems to be a known kernel QXL driver bug:
-https://lore.kernel.org/all/20220907094423.93581-1-min_halo@163.com/T/
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
-(the latter discussion contains a reproducer script which speeds up
-the crash in the guest):
-> #!/bin/bash
->
-> chvt 3
->
-> for j in $(seq 80); do
->     echo "$(date) starting round $j"
->     if [ "$(journalctl --boot | grep "failed to allocate VRAM BO")" != "" ]; then
->         echo "bug was reproduced after $j tries"
->         exit 1
->     fi
->     for i in $(seq 100); do
->         dmesg > /dev/tty3
->     done
-> done
->
-> echo "bug could not be reproduced"
-> exit 0
-The bug itself seems to remain unfixed, as I was able to reproduce it
-with a Fedora 41 guest as well as an AlmaLinux 8 guest.  However, our
-cpr-transfer code also seems to be buggy, as it triggers the crash:
-without the cpr-transfer migration the above reproducer doesn't lead to
-a crash on the source VM.
-
-I suspect that, as cpr-transfer doesn't migrate the guest memory, but
-rather passes it through the memory backend object, our code might
-somehow corrupt the VRAM.  However, I wasn't able to trace the
-corruption so far.
-
-Could somebody help the investigation and take a look into this?  Any
-suggestions would be appreciated.  Thanks!
-
-Andrey
-
-On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
-Hi all,
-
-We've been experimenting with cpr-transfer migration mode recently and
-have discovered the following issue with the guest QXL driver:
-
-Run migration source:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$EMULATOR -enable-kvm \
-     -machine q35 \
-     -cpu host -smp 2 -m 2G \
-     -object 
-memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/ram0,share=on\
-     -machine memory-backend=ram0 \
-     -machine aux-ram-share=on \
-     -drive file=$ROOTFS,media=disk,if=virtio \
-     -qmp unix:$QMPSOCK,server=on,wait=off \
-     -nographic \
-     -device qxl-vga
-Run migration target:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-dst.sock
-$EMULATOR -enable-kvm \
--machine q35 \
-     -cpu host -smp 2 -m 2G \
-     -object 
-memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/ram0,share=on\
-     -machine memory-backend=ram0 \
-     -machine aux-ram-share=on \
-     -drive file=$ROOTFS,media=disk,if=virtio \
-     -qmp unix:$QMPSOCK,server=on,wait=off \
-     -nographic \
-     -device qxl-vga \
-     -incoming tcp:0:44444 \
-     -incoming '{"channel-type": "cpr", "addr": { "transport": "socket", "type": "unix", 
-"path": "/var/run/alma8cpr-dst.sock"}}'
-Launch the migration:
-QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$QMPSHELL -p $QMPSOCK <<EOF
-     migrate-set-parameters mode=cpr-transfer
-     migrate 
-channels=[{"channel-type":"main","addr":{"transport":"socket","type":"inet","host":"0","port":"44444"}},{"channel-type":"cpr","addr":{"transport":"socket","type":"unix","path":"/var/run/alma8cpr-dst.sock"}}]
-EOF
-Then, after a while, QXL guest driver on target crashes spewing the
-following messages:
-[   73.962002] [TTM] Buffer eviction failed
-[   73.962072] qxl 0000:00:02.0: object_init failed for (3149824, 0x00000001)
-[   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to allocate 
-VRAM BO
-That seems to be a known kernel QXL driver bug:
-https://lore.kernel.org/all/20220907094423.93581-1-min_halo@163.com/T/
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
-(the latter discussion contains that reproduce script which speeds up
-the crash in the guest):
-#!/bin/bash
-
-chvt 3
-
-for j in $(seq 80); do
-         echo "$(date) starting round $j"
-         if [ "$(journalctl --boot | grep "failed to allocate VRAM BO")" != "" 
-]; then
-                 echo "bug was reproduced after $j tries"
-                 exit 1
-         fi
-         for i in $(seq 100); do
-                 dmesg > /dev/tty3
-         done
-done
-
-echo "bug could not be reproduced"
-exit 0
-The bug itself seems to remain unfixed, as I was able to reproduce that
-with Fedora 41 guest, as well as AlmaLinux 8 guest. However our
-cpr-transfer code also seems to be buggy as it triggers the crash -
-without the cpr-transfer migration the above reproduce doesn't lead to
-crash on the source VM.
-
-I suspect that, as cpr-transfer doesn't migrate the guest memory, but
-rather passes it through the memory backend object, our code might
-somehow corrupt the VRAM.  However, I wasn't able to trace the
-corruption so far.
-
-Could somebody help the investigation and take a look into this?  Any
-suggestions would be appreciated.  Thanks!
-Possibly some memory region created by qxl is not being preserved.
-Try adding these traces to see what is preserved:
-
--trace enable='*cpr*'
--trace enable='*ram_alloc*'
-
-- Steve
-
-On 2/28/2025 1:13 PM, Steven Sistare wrote:
-On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
-Hi all,
-
-We've been experimenting with cpr-transfer migration mode recently and
-have discovered the following issue with the guest QXL driver:
-
-Run migration source:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$EMULATOR -enable-kvm \
-     -machine q35 \
-     -cpu host -smp 2 -m 2G \
-     -object 
-memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/ram0,share=on\
-     -machine memory-backend=ram0 \
-     -machine aux-ram-share=on \
-     -drive file=$ROOTFS,media=disk,if=virtio \
-     -qmp unix:$QMPSOCK,server=on,wait=off \
-     -nographic \
-     -device qxl-vga
-Run migration target:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-dst.sock
-$EMULATOR -enable-kvm \
-     -machine q35 \
-     -cpu host -smp 2 -m 2G \
-     -object 
-memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/ram0,share=on\
-     -machine memory-backend=ram0 \
-     -machine aux-ram-share=on \
-     -drive file=$ROOTFS,media=disk,if=virtio \
-     -qmp unix:$QMPSOCK,server=on,wait=off \
-     -nographic \
-     -device qxl-vga \
-     -incoming tcp:0:44444 \
-     -incoming '{"channel-type": "cpr", "addr": { "transport": "socket", "type": "unix", 
-"path": "/var/run/alma8cpr-dst.sock"}}'
-Launch the migration:
-QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$QMPSHELL -p $QMPSOCK <<EOF
-     migrate-set-parameters mode=cpr-transfer
-     migrate 
-channels=[{"channel-type":"main","addr":{"transport":"socket","type":"inet","host":"0","port":"44444"}},{"channel-type":"cpr","addr":{"transport":"socket","type":"unix","path":"/var/run/alma8cpr-dst.sock"}}]
-EOF
-Then, after a while, QXL guest driver on target crashes spewing the
-following messages:
-[   73.962002] [TTM] Buffer eviction failed
-[   73.962072] qxl 0000:00:02.0: object_init failed for (3149824, 0x00000001)
-[   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to allocate 
-VRAM BO
-That seems to be a known kernel QXL driver bug:
-https://lore.kernel.org/all/20220907094423.93581-1-min_halo@163.com/T/
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
-(the latter discussion contains that reproduce script which speeds up
-the crash in the guest):
-#!/bin/bash
-
-chvt 3
-
-for j in $(seq 80); do
-         echo "$(date) starting round $j"
-         if [ "$(journalctl --boot | grep "failed to allocate VRAM BO")" != "" 
-]; then
-                 echo "bug was reproduced after $j tries"
-                 exit 1
-         fi
-         for i in $(seq 100); do
-                 dmesg > /dev/tty3
-         done
-done
-
-echo "bug could not be reproduced"
-exit 0
-The bug itself seems to remain unfixed, as I was able to reproduce that
-with Fedora 41 guest, as well as AlmaLinux 8 guest. However our
-cpr-transfer code also seems to be buggy as it triggers the crash -
-without the cpr-transfer migration the above reproduce doesn't lead to
-crash on the source VM.
-
-I suspect that, as cpr-transfer doesn't migrate the guest memory, but
-rather passes it through the memory backend object, our code might
-somehow corrupt the VRAM.  However, I wasn't able to trace the
-corruption so far.
-
-Could somebody help the investigation and take a look into this?  Any
-suggestions would be appreciated.  Thanks!
-Possibly some memory region created by qxl is not being preserved.
-Try adding these traces to see what is preserved:
-
--trace enable='*cpr*'
--trace enable='*ram_alloc*'
-Also try adding this patch to see if it flags any ram blocks as not
-compatible with cpr.  A message is printed at migration start time.
-https://lore.kernel.org/qemu-devel/1740667681-257312-1-git-send-email-steven.sistare@oracle.com/
-- Steve
-
-On 2/28/25 8:20 PM, Steven Sistare wrote:
->
-On 2/28/2025 1:13 PM, Steven Sistare wrote:
->
-> On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
->
->> Hi all,
->
->>
->
->> We've been experimenting with cpr-transfer migration mode recently and
->
->> have discovered the following issue with the guest QXL driver:
->
->>
->
->> Run migration source:
->
->>> EMULATOR=/path/to/emulator
->
->>> ROOTFS=/path/to/image
->
->>> QMPSOCK=/var/run/alma8qmp-src.sock
->
->>>
->
->>> $EMULATOR -enable-kvm \
->
->>>      -machine q35 \
->
->>>      -cpu host -smp 2 -m 2G \
->
->>>      -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
->
->>> ram0,share=on\
->
->>>      -machine memory-backend=ram0 \
->
->>>      -machine aux-ram-share=on \
->
->>>      -drive file=$ROOTFS,media=disk,if=virtio \
->
->>>      -qmp unix:$QMPSOCK,server=on,wait=off \
->
->>>      -nographic \
->
->>>      -device qxl-vga
->
->>
->
->> Run migration target:
->
->>> EMULATOR=/path/to/emulator
->
->>> ROOTFS=/path/to/image
->
->>> QMPSOCK=/var/run/alma8qmp-dst.sock
->
->>> $EMULATOR -enable-kvm \
->
->>>      -machine q35 \
->
->>>      -cpu host -smp 2 -m 2G \
->
->>>      -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
->
->>> ram0,share=on\
->
->>>      -machine memory-backend=ram0 \
->
->>>      -machine aux-ram-share=on \
->
->>>      -drive file=$ROOTFS,media=disk,if=virtio \
->
->>>      -qmp unix:$QMPSOCK,server=on,wait=off \
->
->>>      -nographic \
->
->>>      -device qxl-vga \
->
->>>      -incoming tcp:0:44444 \
->
->>>      -incoming '{"channel-type": "cpr", "addr": { "transport":
->
->>> "socket", "type": "unix", "path": "/var/run/alma8cpr-dst.sock"}}'
->
->>
->
->>
->
->> Launch the migration:
->
->>> QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
->
->>> QMPSOCK=/var/run/alma8qmp-src.sock
->
->>>
->
->>> $QMPSHELL -p $QMPSOCK <<EOF
->
->>>      migrate-set-parameters mode=cpr-transfer
->
->>>      migrate channels=[{"channel-type":"main","addr":
->
->>> {"transport":"socket","type":"inet","host":"0","port":"44444"}},
->
->>> {"channel-type":"cpr","addr":
->
->>> {"transport":"socket","type":"unix","path":"/var/run/alma8cpr-
->
->>> dst.sock"}}]
->
->>> EOF
->
->>
->
->> Then, after a while, QXL guest driver on target crashes spewing the
->
->> following messages:
->
->>> [   73.962002] [TTM] Buffer eviction failed
->
->>> [   73.962072] qxl 0000:00:02.0: object_init failed for (3149824,
->
->>> 0x00000001)
->
->>> [   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to
->
->>> allocate VRAM BO
->
->>
->
->> That seems to be a known kernel QXL driver bug:
->
->>
->
->>
-https://lore.kernel.org/all/20220907094423.93581-1-min_halo@163.com/T/
->
->>
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
->
->>
->
->> (the latter discussion contains that reproduce script which speeds up
->
->> the crash in the guest):
->
->>> #!/bin/bash
->
->>>
->
->>> chvt 3
->
->>>
->
->>> for j in $(seq 80); do
->
->>>          echo "$(date) starting round $j"
->
->>>          if [ "$(journalctl --boot | grep "failed to allocate VRAM
->
->>> BO")" != "" ]; then
->
->>>                  echo "bug was reproduced after $j tries"
->
->>>                  exit 1
->
->>>          fi
->
->>>          for i in $(seq 100); do
->
->>>                  dmesg > /dev/tty3
->
->>>          done
->
->>> done
->
->>>
->
->>> echo "bug could not be reproduced"
->
->>> exit 0
->
->>
->
->> The bug itself seems to remain unfixed, as I was able to reproduce that
->
->> with Fedora 41 guest, as well as AlmaLinux 8 guest. However our
->
->> cpr-transfer code also seems to be buggy as it triggers the crash -
->
->> without the cpr-transfer migration the above reproduce doesn't lead to
->
->> crash on the source VM.
->
->>
->
->> I suspect that, as cpr-transfer doesn't migrate the guest memory, but
->
->> rather passes it through the memory backend object, our code might
->
->> somehow corrupt the VRAM.  However, I wasn't able to trace the
->
->> corruption so far.
->
->>
->
->> Could somebody help the investigation and take a look into this?  Any
->
->> suggestions would be appreciated.  Thanks!
->
->
->
-> Possibly some memory region created by qxl is not being preserved.
->
-> Try adding these traces to see what is preserved:
->
->
->
-> -trace enable='*cpr*'
->
-> -trace enable='*ram_alloc*'
->
->
-Also try adding this patch to see if it flags any ram blocks as not
->
-compatible with cpr.  A message is printed at migration start time.
->

-https://lore.kernel.org/qemu-devel/1740667681-257312-1-git-send-email-steven.sistare@oracle.com/
->
->
-- Steve
->
-With the traces enabled + the "migration: ram block cpr blockers" patch
-applied:
-
-Source:
->
-cpr_find_fd pc.bios, id 0 returns -1
->
-cpr_save_fd pc.bios, id 0, fd 22
->
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 22 host
->
-0x7fec18e00000
->
-cpr_find_fd pc.rom, id 0 returns -1
->
-cpr_save_fd pc.rom, id 0, fd 23
->
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 23 host
->
-0x7fec18c00000
->
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns -1
->
-cpr_save_fd 0000:00:01.0/e1000e.rom, id 0, fd 24
->
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size 262144 fd
->
-24 host 0x7fec18a00000
->
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns -1
->
-cpr_save_fd 0000:00:02.0/vga.vram, id 0, fd 25
->
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size 67108864
->
-fd 25 host 0x7feb77e00000
->
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns -1
->
-cpr_save_fd 0000:00:02.0/qxl.vrom, id 0, fd 27
->
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192 fd 27
->
-host 0x7fec18800000
->
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns -1
->
-cpr_save_fd 0000:00:02.0/qxl.vram, id 0, fd 28
->
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size 67108864
->
-fd 28 host 0x7feb73c00000
->
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns -1
->
-cpr_save_fd 0000:00:02.0/qxl.rom, id 0, fd 34
->
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536 fd 34
->
-host 0x7fec18600000
->
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns -1
->
-cpr_save_fd /rom@etc/acpi/tables, id 0, fd 35
->
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size 2097152 fd 35
->
-host 0x7fec18200000
->
-cpr_find_fd /rom@etc/table-loader, id 0 returns -1
->
-cpr_save_fd /rom@etc/table-loader, id 0, fd 36
->
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536 fd 36
->
-host 0x7feb8b600000
->
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns -1
->
-cpr_save_fd /rom@etc/acpi/rsdp, id 0, fd 37
->
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd 37 host
->
-0x7feb8b400000
->
->
-cpr_state_save cpr-transfer mode
->
-cpr_transfer_output /var/run/alma8cpr-dst.sock
-Target:
->
-cpr_transfer_input /var/run/alma8cpr-dst.sock
->
-cpr_state_load cpr-transfer mode
->
-cpr_find_fd pc.bios, id 0 returns 20
->
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 20 host
->
-0x7fcdc9800000
->
-cpr_find_fd pc.rom, id 0 returns 19
->
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 19 host
->
-0x7fcdc9600000
->
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns 18
->
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size 262144 fd
->
-18 host 0x7fcdc9400000
->
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns 17
->
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size 67108864
->
-fd 17 host 0x7fcd27e00000
->
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns 16
->
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192 fd 16
->
-host 0x7fcdc9200000
->
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns 15
->
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size 67108864
->
-fd 15 host 0x7fcd23c00000
->
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns 14
->
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536 fd 14
->
-host 0x7fcdc8800000
->
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns 13
->
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size 2097152 fd 13
->
-host 0x7fcdc8400000
->
-cpr_find_fd /rom@etc/table-loader, id 0 returns 11
->
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536 fd 11
->
-host 0x7fcdc8200000
->
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns 10
->
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd 10 host
->
-0x7fcd3be00000
-Looks like both vga.vram and qxl.vram are being preserved (with the same
-addresses), and no incompatible ram blocks are found during migration.
-
-Andrey
-
-On 2/28/25 8:35 PM, Andrey Drobyshev wrote:
->
-On 2/28/25 8:20 PM, Steven Sistare wrote:
->
-> On 2/28/2025 1:13 PM, Steven Sistare wrote:
->
->> On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
->
->>> Hi all,
->
->>>
->
->>> We've been experimenting with cpr-transfer migration mode recently and
->
->>> have discovered the following issue with the guest QXL driver:
->
->>>
->
->>> Run migration source:
->
->>>> EMULATOR=/path/to/emulator
->
->>>> ROOTFS=/path/to/image
->
->>>> QMPSOCK=/var/run/alma8qmp-src.sock
->
->>>>
->
->>>> $EMULATOR -enable-kvm \
->
->>>>      -machine q35 \
->
->>>>      -cpu host -smp 2 -m 2G \
->
->>>>      -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
->
->>>> ram0,share=on\
->
->>>>      -machine memory-backend=ram0 \
->
->>>>      -machine aux-ram-share=on \
->
->>>>      -drive file=$ROOTFS,media=disk,if=virtio \
->
->>>>      -qmp unix:$QMPSOCK,server=on,wait=off \
->
->>>>      -nographic \
->
->>>>      -device qxl-vga
->
->>>
->
->>> Run migration target:
->
->>>> EMULATOR=/path/to/emulator
->
->>>> ROOTFS=/path/to/image
->
->>>> QMPSOCK=/var/run/alma8qmp-dst.sock
->
->>>> $EMULATOR -enable-kvm \
->
->>>>      -machine q35 \
->
->>>>      -cpu host -smp 2 -m 2G \
->
->>>>      -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
->
->>>> ram0,share=on\
->
->>>>      -machine memory-backend=ram0 \
->
->>>>      -machine aux-ram-share=on \
->
->>>>      -drive file=$ROOTFS,media=disk,if=virtio \
->
->>>>      -qmp unix:$QMPSOCK,server=on,wait=off \
->
->>>>      -nographic \
->
->>>>      -device qxl-vga \
->
->>>>      -incoming tcp:0:44444 \
->
->>>>      -incoming '{"channel-type": "cpr", "addr": { "transport":
->
->>>> "socket", "type": "unix", "path": "/var/run/alma8cpr-dst.sock"}}'
->
->>>
->
->>>
->
->>> Launch the migration:
->
->>>> QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
->
->>>> QMPSOCK=/var/run/alma8qmp-src.sock
->
->>>>
->
->>>> $QMPSHELL -p $QMPSOCK <<EOF
->
->>>>      migrate-set-parameters mode=cpr-transfer
->
->>>>      migrate channels=[{"channel-type":"main","addr":
->
->>>> {"transport":"socket","type":"inet","host":"0","port":"44444"}},
->
->>>> {"channel-type":"cpr","addr":
->
->>>> {"transport":"socket","type":"unix","path":"/var/run/alma8cpr-
->
->>>> dst.sock"}}]
->
->>>> EOF
->
->>>
->
->>> Then, after a while, QXL guest driver on target crashes spewing the
->
->>> following messages:
->
->>>> [   73.962002] [TTM] Buffer eviction failed
->
->>>> [   73.962072] qxl 0000:00:02.0: object_init failed for (3149824,
->
->>>> 0x00000001)
->
->>>> [   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to
->
->>>> allocate VRAM BO
->
->>>
->
->>> That seems to be a known kernel QXL driver bug:
->
->>>
->
->>>
-https://lore.kernel.org/all/20220907094423.93581-1-min_halo@163.com/T/
->
->>>
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
->
->>>
->
->>> (the latter discussion contains that reproduce script which speeds up
->
->>> the crash in the guest):
->
->>>> #!/bin/bash
->
->>>>
->
->>>> chvt 3
->
->>>>
->
->>>> for j in $(seq 80); do
->
->>>>          echo "$(date) starting round $j"
->
->>>>          if [ "$(journalctl --boot | grep "failed to allocate VRAM
->
->>>> BO")" != "" ]; then
->
->>>>                  echo "bug was reproduced after $j tries"
->
->>>>                  exit 1
->
->>>>          fi
->
->>>>          for i in $(seq 100); do
->
->>>>                  dmesg > /dev/tty3
->
->>>>          done
->
->>>> done
->
->>>>
->
->>>> echo "bug could not be reproduced"
->
->>>> exit 0
->
->>>
->
->>> The bug itself seems to remain unfixed, as I was able to reproduce that
->
->>> with Fedora 41 guest, as well as AlmaLinux 8 guest. However our
->
->>> cpr-transfer code also seems to be buggy as it triggers the crash -
->
->>> without the cpr-transfer migration the above reproduce doesn't lead to
->
->>> crash on the source VM.
->
->>>
->
->>> I suspect that, as cpr-transfer doesn't migrate the guest memory, but
->
->>> rather passes it through the memory backend object, our code might
->
->>> somehow corrupt the VRAM.  However, I wasn't able to trace the
->
->>> corruption so far.
->
->>>
->
->>> Could somebody help the investigation and take a look into this?  Any
->
->>> suggestions would be appreciated.  Thanks!
->
->>
->
->> Possibly some memory region created by qxl is not being preserved.
->
->> Try adding these traces to see what is preserved:
->
->>
->
->> -trace enable='*cpr*'
->
->> -trace enable='*ram_alloc*'
->
->
->
-> Also try adding this patch to see if it flags any ram blocks as not
->
-> compatible with cpr.  A message is printed at migration start time.
->
-> https://lore.kernel.org/qemu-devel/1740667681-257312-1-git-send-email-steven.sistare@oracle.com/
->
->
->
-> - Steve
->
->
->
->
-With the traces enabled + the "migration: ram block cpr blockers" patch
->
-applied:
->
->
-Source:
->
-> cpr_find_fd pc.bios, id 0 returns -1
->
-> cpr_save_fd pc.bios, id 0, fd 22
->
-> qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 22 host
->
-> 0x7fec18e00000
->
-> cpr_find_fd pc.rom, id 0 returns -1
->
-> cpr_save_fd pc.rom, id 0, fd 23
->
-> qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 23 host
->
-> 0x7fec18c00000
->
-> cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns -1
->
-> cpr_save_fd 0000:00:01.0/e1000e.rom, id 0, fd 24
->
-> qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size 262144 fd
->
-> 24 host 0x7fec18a00000
->
-> cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns -1
->
-> cpr_save_fd 0000:00:02.0/vga.vram, id 0, fd 25
->
-> qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size 67108864
->
-> fd 25 host 0x7feb77e00000
->
-> cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns -1
->
-> cpr_save_fd 0000:00:02.0/qxl.vrom, id 0, fd 27
->
-> qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192 fd 27
->
-> host 0x7fec18800000
->
-> cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns -1
->
-> cpr_save_fd 0000:00:02.0/qxl.vram, id 0, fd 28
->
-> qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size 67108864
->
-> fd 28 host 0x7feb73c00000
->
-> cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns -1
->
-> cpr_save_fd 0000:00:02.0/qxl.rom, id 0, fd 34
->
-> qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536 fd 34
->
-> host 0x7fec18600000
->
-> cpr_find_fd /rom@etc/acpi/tables, id 0 returns -1
->
-> cpr_save_fd /rom@etc/acpi/tables, id 0, fd 35
->
-> qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size 2097152 fd
->
-> 35 host 0x7fec18200000
->
-> cpr_find_fd /rom@etc/table-loader, id 0 returns -1
->
-> cpr_save_fd /rom@etc/table-loader, id 0, fd 36
->
-> qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536 fd 36
->
-> host 0x7feb8b600000
->
-> cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns -1
->
-> cpr_save_fd /rom@etc/acpi/rsdp, id 0, fd 37
->
-> qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd 37 host
->
-> 0x7feb8b400000
->
->
->
-> cpr_state_save cpr-transfer mode
->
-> cpr_transfer_output /var/run/alma8cpr-dst.sock
->
->
-Target:
->
-> cpr_transfer_input /var/run/alma8cpr-dst.sock
->
-> cpr_state_load cpr-transfer mode
->
-> cpr_find_fd pc.bios, id 0 returns 20
->
-> qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 20 host
->
-> 0x7fcdc9800000
->
-> cpr_find_fd pc.rom, id 0 returns 19
->
-> qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 19 host
->
-> 0x7fcdc9600000
->
-> cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns 18
->
-> qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size 262144 fd
->
-> 18 host 0x7fcdc9400000
->
-> cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns 17
->
-> qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size 67108864
->
-> fd 17 host 0x7fcd27e00000
->
-> cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns 16
->
-> qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192 fd 16
->
-> host 0x7fcdc9200000
->
-> cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns 15
->
-> qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size 67108864
->
-> fd 15 host 0x7fcd23c00000
->
-> cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns 14
->
-> qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536 fd 14
->
-> host 0x7fcdc8800000
->
-> cpr_find_fd /rom@etc/acpi/tables, id 0 returns 13
->
-> qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size 2097152 fd
->
-> 13 host 0x7fcdc8400000
->
-> cpr_find_fd /rom@etc/table-loader, id 0 returns 11
->
-> qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536 fd 11
->
-> host 0x7fcdc8200000
->
-> cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns 10
->
-> qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd 10 host
->
-> 0x7fcd3be00000
->
-> Looks like both vga.vram and qxl.vram are being preserved (with the same
-> addresses), and no incompatible ram blocks are found during migration.
-
-Sorry, addresses are not the same, of course.  However the corresponding ram
-blocks do seem to be preserved and initialized.
-
-On 2/28/2025 1:37 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:35 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:20 PM, Steven Sistare wrote:
-On 2/28/2025 1:13 PM, Steven Sistare wrote:
-On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
-Hi all,
-
-We've been experimenting with cpr-transfer migration mode recently and
-have discovered the following issue with the guest QXL driver:
-
-Run migration source:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$EMULATOR -enable-kvm \
-      -machine q35 \
-      -cpu host -smp 2 -m 2G \
-      -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
-ram0,share=on\
-      -machine memory-backend=ram0 \
-      -machine aux-ram-share=on \
-      -drive file=$ROOTFS,media=disk,if=virtio \
-      -qmp unix:$QMPSOCK,server=on,wait=off \
-      -nographic \
-      -device qxl-vga
-Run migration target:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-dst.sock
-$EMULATOR -enable-kvm \
-      -machine q35 \
-      -cpu host -smp 2 -m 2G \
-      -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
-ram0,share=on\
-      -machine memory-backend=ram0 \
-      -machine aux-ram-share=on \
-      -drive file=$ROOTFS,media=disk,if=virtio \
-      -qmp unix:$QMPSOCK,server=on,wait=off \
-      -nographic \
-      -device qxl-vga \
-      -incoming tcp:0:44444 \
-      -incoming '{"channel-type": "cpr", "addr": { "transport":
-"socket", "type": "unix", "path": "/var/run/alma8cpr-dst.sock"}}'
-Launch the migration:
-QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$QMPSHELL -p $QMPSOCK <<EOF
-      migrate-set-parameters mode=cpr-transfer
-      migrate channels=[{"channel-type":"main","addr":
-{"transport":"socket","type":"inet","host":"0","port":"44444"}},
-{"channel-type":"cpr","addr":
-{"transport":"socket","type":"unix","path":"/var/run/alma8cpr-
-dst.sock"}}]
-EOF
-Then, after a while, QXL guest driver on target crashes spewing the
-following messages:
-[   73.962002] [TTM] Buffer eviction failed
-[   73.962072] qxl 0000:00:02.0: object_init failed for (3149824,
-0x00000001)
-[   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to
-allocate VRAM BO
-That seems to be a known kernel QXL driver bug:
-https://lore.kernel.org/all/20220907094423.93581-1-min_halo@163.com/T/
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
-(the latter discussion contains that reproduce script which speeds up
-the crash in the guest):
-#!/bin/bash
-
-chvt 3
-
-for j in $(seq 80); do
-          echo "$(date) starting round $j"
-          if [ "$(journalctl --boot | grep "failed to allocate VRAM
-BO")" != "" ]; then
-                  echo "bug was reproduced after $j tries"
-                  exit 1
-          fi
-          for i in $(seq 100); do
-                  dmesg > /dev/tty3
-          done
-done
-
-echo "bug could not be reproduced"
-exit 0
-The bug itself seems to remain unfixed, as I was able to reproduce that
-with Fedora 41 guest, as well as AlmaLinux 8 guest. However our
-cpr-transfer code also seems to be buggy as it triggers the crash -
-without the cpr-transfer migration the above reproduce doesn't lead to
-crash on the source VM.
-
-I suspect that, as cpr-transfer doesn't migrate the guest memory, but
-rather passes it through the memory backend object, our code might
-somehow corrupt the VRAM.  However, I wasn't able to trace the
-corruption so far.
-
-Could somebody help the investigation and take a look into this?  Any
-suggestions would be appreciated.  Thanks!
-Possibly some memory region created by qxl is not being preserved.
-Try adding these traces to see what is preserved:
-
--trace enable='*cpr*'
--trace enable='*ram_alloc*'
-Also try adding this patch to see if it flags any ram blocks as not
-compatible with cpr.  A message is printed at migration start time.
-https://lore.kernel.org/qemu-devel/1740667681-257312-1-git-send-email-steven.sistare@oracle.com/
-
-- Steve
-With the traces enabled + the "migration: ram block cpr blockers" patch
-applied:
-
-Source:
-cpr_find_fd pc.bios, id 0 returns -1
-cpr_save_fd pc.bios, id 0, fd 22
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 22 host 
-0x7fec18e00000
-cpr_find_fd pc.rom, id 0 returns -1
-cpr_save_fd pc.rom, id 0, fd 23
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 23 host 
-0x7fec18c00000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns -1
-cpr_save_fd 0000:00:01.0/e1000e.rom, id 0, fd 24
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size 262144 fd 24 
-host 0x7fec18a00000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/vga.vram, id 0, fd 25
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size 67108864 fd 
-25 host 0x7feb77e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vrom, id 0, fd 27
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192 fd 27 host 
-0x7fec18800000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vram, id 0, fd 28
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size 67108864 fd 
-28 host 0x7feb73c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.rom, id 0, fd 34
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536 fd 34 host 
-0x7fec18600000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/tables, id 0, fd 35
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size 2097152 fd 35 
-host 0x7fec18200000
-cpr_find_fd /rom@etc/table-loader, id 0 returns -1
-cpr_save_fd /rom@etc/table-loader, id 0, fd 36
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536 fd 36 host 
-0x7feb8b600000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/rsdp, id 0, fd 37
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd 37 host 
-0x7feb8b400000
-
-cpr_state_save cpr-transfer mode
-cpr_transfer_output /var/run/alma8cpr-dst.sock
-Target:
-cpr_transfer_input /var/run/alma8cpr-dst.sock
-cpr_state_load cpr-transfer mode
-cpr_find_fd pc.bios, id 0 returns 20
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 20 host 
-0x7fcdc9800000
-cpr_find_fd pc.rom, id 0 returns 19
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 19 host 
-0x7fcdc9600000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns 18
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size 262144 fd 18 
-host 0x7fcdc9400000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns 17
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size 67108864 fd 
-17 host 0x7fcd27e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns 16
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192 fd 16 host 
-0x7fcdc9200000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns 15
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size 67108864 fd 
-15 host 0x7fcd23c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns 14
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536 fd 14 host 
-0x7fcdc8800000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns 13
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size 2097152 fd 13 
-host 0x7fcdc8400000
-cpr_find_fd /rom@etc/table-loader, id 0 returns 11
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536 fd 11 host 
-0x7fcdc8200000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns 10
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd 10 host 
-0x7fcd3be00000
-Looks like both vga.vram and qxl.vram are being preserved (with the same
-addresses), and no incompatible ram blocks are found during migration.
-Sorry, addresses are not the same, of course.  However the corresponding ram
-blocks do seem to be preserved and initialized.
-So far, I have not reproduced the guest driver failure.
-
-However, I have isolated places where new QEMU improperly writes to
-the qxl memory regions prior to starting the guest, by mmap'ing them
-readonly after cpr:
-
-  qemu_ram_alloc_internal()
-    if (reused && (strstr(name, "qxl") || strstr(name, "vga")))
-        ram_flags |= RAM_READONLY;
-    new_block = qemu_ram_alloc_from_fd(...)
-
-I have attached a draft fix; try it and let me know.
-My console window looks fine before and after cpr, using
--vnc $hostip:0 -vga qxl
-
-- Steve
-0001-hw-qxl-cpr-support-preliminary.patch
-Description:
-Text document
-
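-The write-protection trick Steve describes above can also be demonstrated in
-isolation with plain POSIX calls (this is only an illustration, not QEMU
-code): once a shared mapping is flipped to read-only, any stray write faults
-immediately instead of silently corrupting the preserved region.
-
-    #include <stdio.h>
-    #include <string.h>
-    #include <sys/mman.h>
-
-    int main(void)
-    {
-        size_t len = 4096;
-        /* Stand-in for a preserved RAM block: a shared anonymous mapping. */
-        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
-                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
-        if (buf == MAP_FAILED) {
-            perror("mmap");
-            return 1;
-        }
-        memset(buf, 0xab, len);         /* legitimate write while still RW */
-        mprotect(buf, len, PROT_READ);  /* "freeze" it, like RAM_READONLY */
-        buf[0] = 0;                     /* stray write: SIGSEGV right here */
-        return 0;
-    }
-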
-On 3/4/25 9:05 PM, Steven Sistare wrote:
->
-On 2/28/2025 1:37 PM, Andrey Drobyshev wrote:
->
-> On 2/28/25 8:35 PM, Andrey Drobyshev wrote:
->
->> On 2/28/25 8:20 PM, Steven Sistare wrote:
->
->>> On 2/28/2025 1:13 PM, Steven Sistare wrote:
->
->>>> On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
->
->>>>> Hi all,
->
->>>>>
->
->>>>> We've been experimenting with cpr-transfer migration mode recently
->
->>>>> and
->
->>>>> have discovered the following issue with the guest QXL driver:
->
->>>>>
->
->>>>> Run migration source:
->
->>>>>> EMULATOR=/path/to/emulator
->
->>>>>> ROOTFS=/path/to/image
->
->>>>>> QMPSOCK=/var/run/alma8qmp-src.sock
->
->>>>>>
->
->>>>>> $EMULATOR -enable-kvm \
->
->>>>>>       -machine q35 \
->
->>>>>>       -cpu host -smp 2 -m 2G \
->
->>>>>>       -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
->
->>>>>> ram0,share=on\
->
->>>>>>       -machine memory-backend=ram0 \
->
->>>>>>       -machine aux-ram-share=on \
->
->>>>>>       -drive file=$ROOTFS,media=disk,if=virtio \
->
->>>>>>       -qmp unix:$QMPSOCK,server=on,wait=off \
->
->>>>>>       -nographic \
->
->>>>>>       -device qxl-vga
->
->>>>>
->
->>>>> Run migration target:
->
->>>>>> EMULATOR=/path/to/emulator
->
->>>>>> ROOTFS=/path/to/image
->
->>>>>> QMPSOCK=/var/run/alma8qmp-dst.sock
->
->>>>>> $EMULATOR -enable-kvm \
->
->>>>>>       -machine q35 \
->
->>>>>>       -cpu host -smp 2 -m 2G \
->
->>>>>>       -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
->
->>>>>> ram0,share=on\
->
->>>>>>       -machine memory-backend=ram0 \
->
->>>>>>       -machine aux-ram-share=on \
->
->>>>>>       -drive file=$ROOTFS,media=disk,if=virtio \
->
->>>>>>       -qmp unix:$QMPSOCK,server=on,wait=off \
->
->>>>>>       -nographic \
->
->>>>>>       -device qxl-vga \
->
->>>>>>       -incoming tcp:0:44444 \
->
->>>>>>       -incoming '{"channel-type": "cpr", "addr": { "transport":
->
->>>>>> "socket", "type": "unix", "path": "/var/run/alma8cpr-dst.sock"}}'
->
->>>>>
->
->>>>>
->
->>>>> Launch the migration:
->
->>>>>> QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
->
->>>>>> QMPSOCK=/var/run/alma8qmp-src.sock
->
->>>>>>
->
->>>>>> $QMPSHELL -p $QMPSOCK <<EOF
->
->>>>>>       migrate-set-parameters mode=cpr-transfer
->
->>>>>>       migrate channels=[{"channel-type":"main","addr":
->
->>>>>> {"transport":"socket","type":"inet","host":"0","port":"44444"}},
->
->>>>>> {"channel-type":"cpr","addr":
->
->>>>>> {"transport":"socket","type":"unix","path":"/var/run/alma8cpr-
->
->>>>>> dst.sock"}}]
->
->>>>>> EOF
->
->>>>>
->
->>>>> Then, after a while, QXL guest driver on target crashes spewing the
->
->>>>> following messages:
->
->>>>>> [   73.962002] [TTM] Buffer eviction failed
->
->>>>>> [   73.962072] qxl 0000:00:02.0: object_init failed for (3149824,
->
->>>>>> 0x00000001)
->
->>>>>> [   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to
->
->>>>>> allocate VRAM BO
->
->>>>>
->
->>>>> That seems to be a known kernel QXL driver bug:
->
->>>>>
->
->>>>>
-https://lore.kernel.org/all/20220907094423.93581-1-
->
->>>>> min_halo@163.com/T/
->
->>>>>
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
->
->>>>>
->
->>>>> (the latter discussion contains that reproduce script which speeds up
->
->>>>> the crash in the guest):
->
->>>>>> #!/bin/bash
->
->>>>>>
->
->>>>>> chvt 3
->
->>>>>>
->
->>>>>> for j in $(seq 80); do
->
->>>>>>           echo "$(date) starting round $j"
->
->>>>>>           if [ "$(journalctl --boot | grep "failed to allocate VRAM
->
->>>>>> BO")" != "" ]; then
->
->>>>>>                   echo "bug was reproduced after $j tries"
->
->>>>>>                   exit 1
->
->>>>>>           fi
->
->>>>>>           for i in $(seq 100); do
->
->>>>>>                   dmesg > /dev/tty3
->
->>>>>>           done
->
->>>>>> done
->
->>>>>>
->
->>>>>> echo "bug could not be reproduced"
->
->>>>>> exit 0
->
->>>>>
->
->>>>> The bug itself seems to remain unfixed, as I was able to reproduce
->
->>>>> that
->
->>>>> with Fedora 41 guest, as well as AlmaLinux 8 guest. However our
->
->>>>> cpr-transfer code also seems to be buggy as it triggers the crash -
->
->>>>> without the cpr-transfer migration the above reproduce doesn't
->
->>>>> lead to
->
->>>>> crash on the source VM.
->
->>>>>
->
->>>>> I suspect that, as cpr-transfer doesn't migrate the guest memory, but
->
->>>>> rather passes it through the memory backend object, our code might
->
->>>>> somehow corrupt the VRAM.  However, I wasn't able to trace the
->
->>>>> corruption so far.
->
->>>>>
->
->>>>> Could somebody help the investigation and take a look into this?  Any
->
->>>>> suggestions would be appreciated.  Thanks!
->
->>>>
->
->>>> Possibly some memory region created by qxl is not being preserved.
->
->>>> Try adding these traces to see what is preserved:
->
->>>>
->
->>>> -trace enable='*cpr*'
->
->>>> -trace enable='*ram_alloc*'
->
->>>
->
->>> Also try adding this patch to see if it flags any ram blocks as not
->
->>> compatible with cpr.  A message is printed at migration start time.
->
->>> https://lore.kernel.org/qemu-devel/1740667681-257312-1-git-send-email-steven.sistare@oracle.com/
->
->>>
->
->>> - Steve
->
->>>
->
->>
->
->> With the traces enabled + the "migration: ram block cpr blockers" patch
->
->> applied:
->
->>
->
->> Source:
->
->>> cpr_find_fd pc.bios, id 0 returns -1
->
->>> cpr_save_fd pc.bios, id 0, fd 22
->
->>> qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 22 host
->
->>> 0x7fec18e00000
->
->>> cpr_find_fd pc.rom, id 0 returns -1
->
->>> cpr_save_fd pc.rom, id 0, fd 23
->
->>> qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 23 host
->
->>> 0x7fec18c00000
->
->>> cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns -1
->
->>> cpr_save_fd 0000:00:01.0/e1000e.rom, id 0, fd 24
->
->>> qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
->
->>> 262144 fd 24 host 0x7fec18a00000
->
->>> cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns -1
->
->>> cpr_save_fd 0000:00:02.0/vga.vram, id 0, fd 25
->
->>> qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
->
->>> 67108864 fd 25 host 0x7feb77e00000
->
->>> cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns -1
->
->>> cpr_save_fd 0000:00:02.0/qxl.vrom, id 0, fd 27
->
->>> qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
->
->>> fd 27 host 0x7fec18800000
->
->>> cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns -1
->
->>> cpr_save_fd 0000:00:02.0/qxl.vram, id 0, fd 28
->
->>> qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
->
->>> 67108864 fd 28 host 0x7feb73c00000
->
->>> cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns -1
->
->>> cpr_save_fd 0000:00:02.0/qxl.rom, id 0, fd 34
->
->>> qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
->
->>> fd 34 host 0x7fec18600000
->
->>> cpr_find_fd /rom@etc/acpi/tables, id 0 returns -1
->
->>> cpr_save_fd /rom@etc/acpi/tables, id 0, fd 35
->
->>> qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
->
->>> 2097152 fd 35 host 0x7fec18200000
->
->>> cpr_find_fd /rom@etc/table-loader, id 0 returns -1
->
->>> cpr_save_fd /rom@etc/table-loader, id 0, fd 36
->
->>> qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
->
->>> fd 36 host 0x7feb8b600000
->
->>> cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns -1
->
->>> cpr_save_fd /rom@etc/acpi/rsdp, id 0, fd 37
->
->>> qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
->
->>> 37 host 0x7feb8b400000
->
->>>
->
->>> cpr_state_save cpr-transfer mode
->
->>> cpr_transfer_output /var/run/alma8cpr-dst.sock
->
->>
->
->> Target:
->
->>> cpr_transfer_input /var/run/alma8cpr-dst.sock
->
->>> cpr_state_load cpr-transfer mode
->
->>> cpr_find_fd pc.bios, id 0 returns 20
->
->>> qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 20 host
->
->>> 0x7fcdc9800000
->
->>> cpr_find_fd pc.rom, id 0 returns 19
->
->>> qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 19 host
->
->>> 0x7fcdc9600000
->
->>> cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns 18
->
->>> qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
->
->>> 262144 fd 18 host 0x7fcdc9400000
->
->>> cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns 17
->
->>> qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
->
->>> 67108864 fd 17 host 0x7fcd27e00000
->
->>> cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns 16
->
->>> qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
->
->>> fd 16 host 0x7fcdc9200000
->
->>> cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns 15
->
->>> qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
->
->>> 67108864 fd 15 host 0x7fcd23c00000
->
->>> cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns 14
->
->>> qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
->
->>> fd 14 host 0x7fcdc8800000
->
->>> cpr_find_fd /rom@etc/acpi/tables, id 0 returns 13
->
->>> qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
->
->>> 2097152 fd 13 host 0x7fcdc8400000
->
->>> cpr_find_fd /rom@etc/table-loader, id 0 returns 11
->
->>> qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
->
->>> fd 11 host 0x7fcdc8200000
->
->>> cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns 10
->
->>> qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
->
->>> 10 host 0x7fcd3be00000
->
->>
->
->> Looks like both vga.vram and qxl.vram are being preserved (with the same
->
->> addresses), and no incompatible ram blocks are found during migration.
->
->
->
-> Sorry, addressed are not the same, of course.  However corresponding ram
->
-> blocks do seem to be preserved and initialized.
->
-> So far, I have not reproduced the guest driver failure.
->
-> However, I have isolated places where new QEMU improperly writes to
-> the qxl memory regions prior to starting the guest, by mmap'ing them
-> readonly after cpr:
->
->   qemu_ram_alloc_internal()
->     if (reused && (strstr(name, "qxl") || strstr(name, "vga")))
->         ram_flags |= RAM_READONLY;
->     new_block = qemu_ram_alloc_from_fd(...)
->
-> I have attached a draft fix; try it and let me know.
-> My console window looks fine before and after cpr, using
-> -vnc $hostip:0 -vga qxl
->
-> - Steve
-Regarding the reproducer: when I launch the buggy version with the same
-options as you, i.e. "-vnc 0.0.0.0:$port -vga qxl", and do cpr-transfer,
-my VNC client silently hangs on the target after a while.  Does that
-happen on your setup as well?  Could you try launching the VM with
-"-nographic -device qxl-vga"?  That way the VM's serial console is available
-directly in the shell, so when the qxl driver crashes you're still able to
-inspect the kernel messages.
-
-As for your patch, I can report that it doesn't resolve the issue as it
-is.  But I was able to track down another possible memory corruption
-using your approach with readonly mmap'ing:
-
->
-> Program terminated with signal SIGSEGV, Segmentation fault.
-> #0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-> 412         d->ram->magic       = cpu_to_le32(QXL_RAM_MAGIC);
-> [Current thread is 1 (Thread 0x7f1a4f83b480 (LWP 229798))]
-> (gdb) bt
-> #0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-> #1  0x0000563896e7f467 in qxl_realize_common (qxl=0x5638996e0e70, errp=0x7ffd3c2b8170) at ../hw/display/qxl.c:2142
-> #2  0x0000563896e7fda1 in qxl_realize_primary (dev=0x5638996e0e70, errp=0x7ffd3c2b81d0) at ../hw/display/qxl.c:2257
-> #3  0x0000563896c7e8f2 in pci_qdev_realize (qdev=0x5638996e0e70, errp=0x7ffd3c2b8250) at ../hw/pci/pci.c:2174
-> #4  0x00005638970eb54b in device_set_realized (obj=0x5638996e0e70, value=true, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:494
-> #5  0x00005638970f5e14 in property_set_bool (obj=0x5638996e0e70, v=0x5638996f3770, name=0x56389759b141 "realized", opaque=0x5638987893d0, errp=0x7ffd3c2b84e0) at ../qom/object.c:2374
-> #6  0x00005638970f39f8 in object_property_set (obj=0x5638996e0e70, name=0x56389759b141 "realized", v=0x5638996f3770, errp=0x7ffd3c2b84e0) at ../qom/object.c:1449
-> #7  0x00005638970f8586 in object_property_set_qobject (obj=0x5638996e0e70, name=0x56389759b141 "realized", value=0x5638996df900, errp=0x7ffd3c2b84e0) at ../qom/qom-qobject.c:28
-> #8  0x00005638970f3d8d in object_property_set_bool (obj=0x5638996e0e70, name=0x56389759b141 "realized", value=true, errp=0x7ffd3c2b84e0) at ../qom/object.c:1519
-> #9  0x00005638970eacb0 in qdev_realize (dev=0x5638996e0e70, bus=0x563898cf3c20, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:276
-> #10 0x0000563896dba675 in qdev_device_add_from_qdict (opts=0x5638996dfe50, from_json=false, errp=0x7ffd3c2b84e0) at ../system/qdev-monitor.c:714
-> #11 0x0000563896dba721 in qdev_device_add (opts=0x563898786150, errp=0x56389855dc40 <error_fatal>) at ../system/qdev-monitor.c:733
-> #12 0x0000563896dc48f1 in device_init_func (opaque=0x0, opts=0x563898786150, errp=0x56389855dc40 <error_fatal>) at ../system/vl.c:1207
-> #13 0x000056389737a6cc in qemu_opts_foreach (list=0x563898427b60 <qemu_device_opts>, func=0x563896dc48ca <device_init_func>, opaque=0x0, errp=0x56389855dc40 <error_fatal>) at ../util/qemu-option.c:1135
-> #14 0x0000563896dc89b5 in qemu_create_cli_devices () at ../system/vl.c:2745
-> #15 0x0000563896dc8c00 in qmp_x_exit_preconfig (errp=0x56389855dc40 <error_fatal>) at ../system/vl.c:2806
-> #16 0x0000563896dcb5de in qemu_init (argc=33, argv=0x7ffd3c2b8948) at ../system/vl.c:3838
-> #17 0x0000563897297323 in main (argc=33, argv=0x7ffd3c2b8948) at ../system/main.c:72
-So the attached adjusted version of your patch does seem to help.  At
-least I can't reproduce the crash on my setup.
-
-I'm wondering, could it be useful to explicitly mark all the reused
-memory regions readonly upon cpr-transfer, and then make them writable
-again after the migration is done?  That way we would segfault early on
-instead of debugging tricky memory corruptions (a rough sketch of that
-idea follows below).
-
-Andrey
-0001-hw-qxl-cpr-support-preliminary.patch
-Description:
-Text Data
-
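-A rough sketch of the suggestion above: walk the preserved RAM blocks and
-mprotect() them read-only when the cpr state is loaded, then restore write
-access once migration completes.  This is purely hypothetical (no such helper
-exists in the tree); it assumes qemu_ram_foreach_block() and the
-qemu_ram_get_*() accessors from include/exec/cpu-common.h:
-
-    /* Hypothetical debug helper, not part of any posted patch. */
-    static int cpr_protect_one(RAMBlock *rb, void *opaque)
-    {
-        bool writable = *(bool *)opaque;
-
-        /* A real version would skip blocks not preserved via cpr. */
-        mprotect(qemu_ram_get_host_addr(rb), qemu_ram_get_used_length(rb),
-                 writable ? PROT_READ | PROT_WRITE : PROT_READ);
-        return 0;
-    }
-
-    static void cpr_set_preserved_ram_writable(bool writable)
-    {
-        qemu_ram_foreach_block(cpr_protect_one, &writable);
-    }
-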
-On 3/5/2025 11:50 AM, Andrey Drobyshev wrote:
-On 3/4/25 9:05 PM, Steven Sistare wrote:
-On 2/28/2025 1:37 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:35 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:20 PM, Steven Sistare wrote:
-On 2/28/2025 1:13 PM, Steven Sistare wrote:
-On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
-Hi all,
-
-We've been experimenting with cpr-transfer migration mode recently
-and
-have discovered the following issue with the guest QXL driver:
-
-Run migration source:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$EMULATOR -enable-kvm \
-       -machine q35 \
-       -cpu host -smp 2 -m 2G \
-       -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
-ram0,share=on\
-       -machine memory-backend=ram0 \
-       -machine aux-ram-share=on \
-       -drive file=$ROOTFS,media=disk,if=virtio \
-       -qmp unix:$QMPSOCK,server=on,wait=off \
-       -nographic \
-       -device qxl-vga
-Run migration target:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-dst.sock
-$EMULATOR -enable-kvm \
-       -machine q35 \
-       -cpu host -smp 2 -m 2G \
-       -object memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
-ram0,share=on\
-       -machine memory-backend=ram0 \
-       -machine aux-ram-share=on \
-       -drive file=$ROOTFS,media=disk,if=virtio \
-       -qmp unix:$QMPSOCK,server=on,wait=off \
-       -nographic \
-       -device qxl-vga \
-       -incoming tcp:0:44444 \
-       -incoming '{"channel-type": "cpr", "addr": { "transport":
-"socket", "type": "unix", "path": "/var/run/alma8cpr-dst.sock"}}'
-Launch the migration:
-QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$QMPSHELL -p $QMPSOCK <<EOF
-       migrate-set-parameters mode=cpr-transfer
-       migrate channels=[{"channel-type":"main","addr":
-{"transport":"socket","type":"inet","host":"0","port":"44444"}},
-{"channel-type":"cpr","addr":
-{"transport":"socket","type":"unix","path":"/var/run/alma8cpr-
-dst.sock"}}]
-EOF
-Then, after a while, QXL guest driver on target crashes spewing the
-following messages:
-[   73.962002] [TTM] Buffer eviction failed
-[   73.962072] qxl 0000:00:02.0: object_init failed for (3149824,
-0x00000001)
-[   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to
-allocate VRAM BO
-That seems to be a known kernel QXL driver bug:
-https://lore.kernel.org/all/20220907094423.93581-1-
-min_halo@163.com/T/
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
-(the latter discussion contains a reproducer script which speeds up
-the crash in the guest):
-#!/bin/bash
-
-chvt 3
-
-for j in $(seq 80); do
-           echo "$(date) starting round $j"
-           if [ "$(journalctl --boot | grep "failed to allocate VRAM
-BO")" != "" ]; then
-                   echo "bug was reproduced after $j tries"
-                   exit 1
-           fi
-           for i in $(seq 100); do
-                   dmesg > /dev/tty3
-           done
-done
-
-echo "bug could not be reproduced"
-exit 0
-The bug itself seems to remain unfixed, as I was able to reproduce it
-with a Fedora 41 guest as well as an AlmaLinux 8 guest.  However, our
-cpr-transfer code also seems to be buggy, as it triggers the crash:
-without the cpr-transfer migration the above reproducer doesn't lead to
-a crash on the source VM.
-
-I suspect that, as cpr-transfer doesn't migrate the guest memory, but
-rather passes it through the memory backend object, our code might
-somehow corrupt the VRAM.  However, I wasn't able to trace the
-corruption so far.
-
-Could somebody help the investigation and take a look into this?  Any
-suggestions would be appreciated.  Thanks!
-Possibly some memory region created by qxl is not being preserved.
-Try adding these traces to see what is preserved:
-
--trace enable='*cpr*'
--trace enable='*ram_alloc*'
-Also try adding this patch to see if it flags any ram blocks as not
-compatible with cpr.  A message is printed at migration start time.
-https://lore.kernel.org/qemu-devel/1740667681-257312-1-git-send-
-email-
-steven.sistare@oracle.com/
-
-- Steve
-With the traces enabled + the "migration: ram block cpr blockers" patch
-applied:
-
-Source:
-cpr_find_fd pc.bios, id 0 returns -1
-cpr_save_fd pc.bios, id 0, fd 22
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 22 host
-0x7fec18e00000
-cpr_find_fd pc.rom, id 0 returns -1
-cpr_save_fd pc.rom, id 0, fd 23
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 23 host
-0x7fec18c00000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns -1
-cpr_save_fd 0000:00:01.0/e1000e.rom, id 0, fd 24
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
-262144 fd 24 host 0x7fec18a00000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/vga.vram, id 0, fd 25
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
-67108864 fd 25 host 0x7feb77e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vrom, id 0, fd 27
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
-fd 27 host 0x7fec18800000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vram, id 0, fd 28
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
-67108864 fd 28 host 0x7feb73c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.rom, id 0, fd 34
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
-fd 34 host 0x7fec18600000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/tables, id 0, fd 35
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
-2097152 fd 35 host 0x7fec18200000
-cpr_find_fd /rom@etc/table-loader, id 0 returns -1
-cpr_save_fd /rom@etc/table-loader, id 0, fd 36
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
-fd 36 host 0x7feb8b600000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/rsdp, id 0, fd 37
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
-37 host 0x7feb8b400000
-
-cpr_state_save cpr-transfer mode
-cpr_transfer_output /var/run/alma8cpr-dst.sock
-Target:
-cpr_transfer_input /var/run/alma8cpr-dst.sock
-cpr_state_load cpr-transfer mode
-cpr_find_fd pc.bios, id 0 returns 20
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 20 host
-0x7fcdc9800000
-cpr_find_fd pc.rom, id 0 returns 19
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 19 host
-0x7fcdc9600000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns 18
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
-262144 fd 18 host 0x7fcdc9400000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns 17
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
-67108864 fd 17 host 0x7fcd27e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns 16
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
-fd 16 host 0x7fcdc9200000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns 15
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
-67108864 fd 15 host 0x7fcd23c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns 14
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
-fd 14 host 0x7fcdc8800000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns 13
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
-2097152 fd 13 host 0x7fcdc8400000
-cpr_find_fd /rom@etc/table-loader, id 0 returns 11
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
-fd 11 host 0x7fcdc8200000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns 10
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
-10 host 0x7fcd3be00000
-Looks like both vga.vram and qxl.vram are being preserved (with the same
-addresses), and no incompatible ram blocks are found during migration.
-Sorry, the addresses are not the same, of course.  However, the corresponding
-ram blocks do seem to be preserved and initialized.
-So far, I have not reproduced the guest driver failure.
-
-However, I have isolated places where new QEMU improperly writes to
-the qxl memory regions prior to starting the guest, by mmap'ing them
-readonly after cpr:
-
-   qemu_ram_alloc_internal()
-     if (reused && (strstr(name, "qxl") || strstr(name, "vga")))
-         ram_flags |= RAM_READONLY;
-     new_block = qemu_ram_alloc_from_fd(...)
-
-I have attached a draft fix; try it and let me know.
-My console window looks fine before and after cpr, using
--vnc $hostip:0 -vga qxl
-
-- Steve
-Regarding the reproducer: when I launch the buggy version with the same
-options as you, i.e. "-vnc 0.0.0.0:$port -vga qxl", and do cpr-transfer,
-my VNC client silently hangs on the target after a while.  Could it
-happen on your stand as well?
-cpr does not preserve the vnc connection and session.  To test, I specify
-port 0 for the source VM and port 1 for the dest.  When the src vnc goes
-dormant the dest vnc becomes active.
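-
-(Illustration only: e.g. keeping the launch command lines quoted earlier in
-the thread, but replacing "-nographic" with a per-VM VNC display so both
-servers can run on one host:
-
-    src:  $EMULATOR ... -vga qxl -vnc 0.0.0.0:0
-    dst:  $EMULATOR ... -vga qxl -vnc 0.0.0.0:1 -incoming ...
-
-i.e. the servers listen on ports 5900 and 5901 respectively.)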
-Could you try launching the VM with
-"-nographic -device qxl-vga"?  That way the VM's serial console is given to
-you directly in the shell, so when the qxl driver crashes you're still able
-to inspect the kernel messages.
-I have been running like that, but have not reproduced the qxl driver crash,
-and I suspect my guest image+kernel is too old.  However, once I realized the
-issue was post-cpr modification of qxl memory, I switched my attention to the
-fix.
-As for your patch, I can report that it doesn't resolve the issue as it
-is.  But I was able to track down another possible memory corruption
-using your approach with readonly mmap'ing:
-Program terminated with signal SIGSEGV, Segmentation fault.
-#0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-412         d->ram->magic       = cpu_to_le32(QXL_RAM_MAGIC);
-[Current thread is 1 (Thread 0x7f1a4f83b480 (LWP 229798))]
-(gdb) bt
-#0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-#1  0x0000563896e7f467 in qxl_realize_common (qxl=0x5638996e0e70, 
-errp=0x7ffd3c2b8170) at ../hw/display/qxl.c:2142
-#2  0x0000563896e7fda1 in qxl_realize_primary (dev=0x5638996e0e70, 
-errp=0x7ffd3c2b81d0) at ../hw/display/qxl.c:2257
-#3  0x0000563896c7e8f2 in pci_qdev_realize (qdev=0x5638996e0e70, 
-errp=0x7ffd3c2b8250) at ../hw/pci/pci.c:2174
-#4  0x00005638970eb54b in device_set_realized (obj=0x5638996e0e70, value=true, 
-errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:494
-#5  0x00005638970f5e14 in property_set_bool (obj=0x5638996e0e70, v=0x5638996f3770, 
-name=0x56389759b141 "realized", opaque=0x5638987893d0, errp=0x7ffd3c2b84e0)
-     at ../qom/object.c:2374
-#6  0x00005638970f39f8 in object_property_set (obj=0x5638996e0e70, name=0x56389759b141 
-"realized", v=0x5638996f3770, errp=0x7ffd3c2b84e0)
-     at ../qom/object.c:1449
-#7  0x00005638970f8586 in object_property_set_qobject (obj=0x5638996e0e70, 
-name=0x56389759b141 "realized", value=0x5638996df900, errp=0x7ffd3c2b84e0)
-     at ../qom/qom-qobject.c:28
-#8  0x00005638970f3d8d in object_property_set_bool (obj=0x5638996e0e70, 
-name=0x56389759b141 "realized", value=true, errp=0x7ffd3c2b84e0)
-     at ../qom/object.c:1519
-#9  0x00005638970eacb0 in qdev_realize (dev=0x5638996e0e70, bus=0x563898cf3c20, 
-errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:276
-#10 0x0000563896dba675 in qdev_device_add_from_qdict (opts=0x5638996dfe50, 
-from_json=false, errp=0x7ffd3c2b84e0) at ../system/qdev-monitor.c:714
-#11 0x0000563896dba721 in qdev_device_add (opts=0x563898786150, errp=0x56389855dc40 
-<error_fatal>) at ../system/qdev-monitor.c:733
-#12 0x0000563896dc48f1 in device_init_func (opaque=0x0, opts=0x563898786150, 
-errp=0x56389855dc40 <error_fatal>) at ../system/vl.c:1207
-#13 0x000056389737a6cc in qemu_opts_foreach
-     (list=0x563898427b60 <qemu_device_opts>, func=0x563896dc48ca <device_init_func>, 
-opaque=0x0, errp=0x56389855dc40 <error_fatal>)
-     at ../util/qemu-option.c:1135
-#14 0x0000563896dc89b5 in qemu_create_cli_devices () at ../system/vl.c:2745
-#15 0x0000563896dc8c00 in qmp_x_exit_preconfig (errp=0x56389855dc40 
-<error_fatal>) at ../system/vl.c:2806
-#16 0x0000563896dcb5de in qemu_init (argc=33, argv=0x7ffd3c2b8948) at 
-../system/vl.c:3838
-#17 0x0000563897297323 in main (argc=33, argv=0x7ffd3c2b8948) at 
-../system/main.c:72
-So the attached adjusted version of your patch does seem to help.  At
-least I can't reproduce the crash on my stand.
-Thanks for the stack trace; the calls to SPICE_RING_INIT in init_qxl_ram are
-definitely harmful.  Try V2 of the patch, attached, which skips the lines
-of init_qxl_ram that modify guest memory.
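-
-(For illustration only: a guard of roughly this shape is what "skipping the
-lines that modify guest memory" could look like; the "reused" flag and how
-it gets plumbed in are assumptions here, not necessarily what the V2 patch
-does.)
-
-    /* inside init_qxl_ram(): only a fresh, non-cpr start may write to
-     * guest-visible memory; a cpr-reused block keeps the old contents. */
-    if (!reused) {
-        d->ram->magic = cpu_to_le32(QXL_RAM_MAGIC);
-        /* ... the remaining guest-visible initialisation ... */
-        SPICE_RING_INIT(&d->ram->cmd_ring);
-        SPICE_RING_INIT(&d->ram->cursor_ring);
-        SPICE_RING_INIT(&d->ram->release_ring);
-    }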
-I'm wondering, could it be useful to explicitly mark all the reused
-memory regions readonly upon cpr-transfer, and then make them writable
-back again after the migration is done?  That way we will be segfaulting
-early on instead of debugging tricky memory corruptions.
-It's a useful debugging technique, but changing protection on a large memory 
-region
-can be too expensive for production due to TLB shootdowns.
-
-Also, there are cases where writes are performed but the value is guaranteed to
-be the same:
-  qxl_post_load()
-    qxl_set_mode()
-      d->rom->mode = cpu_to_le32(modenr);
-The value is the same because mode and shadow_rom.mode were passed in vmstate
-from old qemu.
-
-- Steve
-0001-hw-qxl-cpr-support-preliminary-V2.patch
-Description:
-Text document
-
-On 3/5/25 22:19, Steven Sistare wrote:
-On 3/5/2025 11:50 AM, Andrey Drobyshev wrote:
-On 3/4/25 9:05 PM, Steven Sistare wrote:
-On 2/28/2025 1:37 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:35 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:20 PM, Steven Sistare wrote:
-On 2/28/2025 1:13 PM, Steven Sistare wrote:
-On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
-Hi all,
-
-We've been experimenting with cpr-transfer migration mode recently
-and
-have discovered the following issue with the guest QXL driver:
-
-Run migration source:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$EMULATOR -enable-kvm \
-       -machine q35 \
-       -cpu host -smp 2 -m 2G \
-       -object
-memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
-ram0,share=on\
-       -machine memory-backend=ram0 \
-       -machine aux-ram-share=on \
-       -drive file=$ROOTFS,media=disk,if=virtio \
-       -qmp unix:$QMPSOCK,server=on,wait=off \
-       -nographic \
-       -device qxl-vga
-Run migration target:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-dst.sock
-$EMULATOR -enable-kvm \
-       -machine q35 \
-       -cpu host -smp 2 -m 2G \
-       -object
-memory-backend-file,id=ram0,size=2G,mem-path=/dev/shm/
-ram0,share=on\
-       -machine memory-backend=ram0 \
-       -machine aux-ram-share=on \
-       -drive file=$ROOTFS,media=disk,if=virtio \
-       -qmp unix:$QMPSOCK,server=on,wait=off \
-       -nographic \
-       -device qxl-vga \
-       -incoming tcp:0:44444 \
-       -incoming '{"channel-type": "cpr", "addr": { "transport":
-"socket", "type": "unix", "path": "/var/run/alma8cpr-dst.sock"}}'
-Launch the migration:
-QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$QMPSHELL -p $QMPSOCK <<EOF
-       migrate-set-parameters mode=cpr-transfer
-       migrate channels=[{"channel-type":"main","addr":
-{"transport":"socket","type":"inet","host":"0","port":"44444"}},
-{"channel-type":"cpr","addr":
-{"transport":"socket","type":"unix","path":"/var/run/alma8cpr-
-dst.sock"}}]
-EOF
-Then, after a while, QXL guest driver on target crashes spewing
-the
-following messages:
-[   73.962002] [TTM] Buffer eviction failed
-[   73.962072] qxl 0000:00:02.0: object_init failed for (3149824,
-0x00000001)
-[   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR*
-failed to
-allocate VRAM BO
-That seems to be a known kernel QXL driver bug:
-https://lore.kernel.org/all/20220907094423.93581-1-
-min_halo@163.com/T/
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
-(the latter discussion contains a reproducer script which speeds up
-the crash in the guest):
-#!/bin/bash
-
-chvt 3
-
-for j in $(seq 80); do
-           echo "$(date) starting round $j"
-           if [ "$(journalctl --boot | grep "failed to
-allocate VRAM
-BO")" != "" ]; then
-                   echo "bug was reproduced after $j tries"
-                   exit 1
-           fi
-           for i in $(seq 100); do
-                   dmesg > /dev/tty3
-           done
-done
-
-echo "bug could not be reproduced"
-exit 0
-The bug itself seems to remain unfixed, as I was able to reproduce it
-with a Fedora 41 guest as well as an AlmaLinux 8 guest.  However, our
-cpr-transfer code also seems to be buggy, as it triggers the crash:
-without the cpr-transfer migration the above reproducer doesn't lead to
-a crash on the source VM.
-I suspect that, as cpr-transfer doesn't migrate the guest
-memory, but
-rather passes it through the memory backend object, our code might
-somehow corrupt the VRAM.  However, I wasn't able to trace the
-corruption so far.
-Could somebody help the investigation and take a look into
-this?  Any
-suggestions would be appreciated.  Thanks!
-Possibly some memory region created by qxl is not being preserved.
-Try adding these traces to see what is preserved:
-
--trace enable='*cpr*'
--trace enable='*ram_alloc*'
-Also try adding this patch to see if it flags any ram blocks as not
-compatible with cpr.  A message is printed at migration start time.
-https://lore.kernel.org/qemu-devel/1740667681-257312-1-git-send-
-email-
-steven.sistare@oracle.com/
-
-- Steve
-With the traces enabled + the "migration: ram block cpr blockers"
-patch
-applied:
-
-Source:
-cpr_find_fd pc.bios, id 0 returns -1
-cpr_save_fd pc.bios, id 0, fd 22
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 22 host
-0x7fec18e00000
-cpr_find_fd pc.rom, id 0 returns -1
-cpr_save_fd pc.rom, id 0, fd 23
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 23 host
-0x7fec18c00000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns -1
-cpr_save_fd 0000:00:01.0/e1000e.rom, id 0, fd 24
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
-262144 fd 24 host 0x7fec18a00000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/vga.vram, id 0, fd 25
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
-67108864 fd 25 host 0x7feb77e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vrom, id 0, fd 27
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
-fd 27 host 0x7fec18800000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vram, id 0, fd 28
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
-67108864 fd 28 host 0x7feb73c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.rom, id 0, fd 34
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
-fd 34 host 0x7fec18600000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/tables, id 0, fd 35
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
-2097152 fd 35 host 0x7fec18200000
-cpr_find_fd /rom@etc/table-loader, id 0 returns -1
-cpr_save_fd /rom@etc/table-loader, id 0, fd 36
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
-fd 36 host 0x7feb8b600000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/rsdp, id 0, fd 37
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
-37 host 0x7feb8b400000
-
-cpr_state_save cpr-transfer mode
-cpr_transfer_output /var/run/alma8cpr-dst.sock
-Target:
-cpr_transfer_input /var/run/alma8cpr-dst.sock
-cpr_state_load cpr-transfer mode
-cpr_find_fd pc.bios, id 0 returns 20
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 20 host
-0x7fcdc9800000
-cpr_find_fd pc.rom, id 0 returns 19
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 19 host
-0x7fcdc9600000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns 18
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
-262144 fd 18 host 0x7fcdc9400000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns 17
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
-67108864 fd 17 host 0x7fcd27e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns 16
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
-fd 16 host 0x7fcdc9200000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns 15
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
-67108864 fd 15 host 0x7fcd23c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns 14
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
-fd 14 host 0x7fcdc8800000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns 13
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
-2097152 fd 13 host 0x7fcdc8400000
-cpr_find_fd /rom@etc/table-loader, id 0 returns 11
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
-fd 11 host 0x7fcdc8200000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns 10
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
-10 host 0x7fcd3be00000
-Looks like both vga.vram and qxl.vram are being preserved (with
-the same
-addresses), and no incompatible ram blocks are found during
-migration.
-Sorry, the addresses are not the same, of course.  However, the corresponding
-ram blocks do seem to be preserved and initialized.
-So far, I have not reproduced the guest driver failure.
-
-However, I have isolated places where new QEMU improperly writes to
-the qxl memory regions prior to starting the guest, by mmap'ing them
-readonly after cpr:
-
-   qemu_ram_alloc_internal()
-     if (reused && (strstr(name, "qxl") || strstr(name, "vga")))
-         ram_flags |= RAM_READONLY;
-     new_block = qemu_ram_alloc_from_fd(...)
-
-I have attached a draft fix; try it and let me know.
-My console window looks fine before and after cpr, using
--vnc $hostip:0 -vga qxl
-
-- Steve
-Regarding the reproducer: when I launch the buggy version with the same
-options as you, i.e. "-vnc 0.0.0.0:$port -vga qxl", and do cpr-transfer,
-my VNC client silently hangs on the target after a while.  Could it
-happen on your stand as well?
-cpr does not preserve the vnc connection and session.  To test, I specify
-port 0 for the source VM and port 1 for the dest.  When the src vnc goes
-dormant the dest vnc becomes active.
-Could you try launching VM with
-"-nographic -device qxl-vga"?  That way VM's serial console is given you
-directly in the shell, so when qxl driver crashes you're still able to
-inspect the kernel messages.
-I have been running like that, but have not reproduced the qxl driver
-crash,
-and I suspect my guest image+kernel is too old.  However, once I
-realized the
-issue was post-cpr modification of qxl memory, I switched my attention
-to the
-fix.
-As for your patch, I can report that it doesn't resolve the issue as it
-is.  But I was able to track down another possible memory corruption
-using your approach with readonly mmap'ing:
-Program terminated with signal SIGSEGV, Segmentation fault.
-#0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-412         d->ram->magic       = cpu_to_le32(QXL_RAM_MAGIC);
-[Current thread is 1 (Thread 0x7f1a4f83b480 (LWP 229798))]
-(gdb) bt
-#0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-#1  0x0000563896e7f467 in qxl_realize_common (qxl=0x5638996e0e70,
-errp=0x7ffd3c2b8170) at ../hw/display/qxl.c:2142
-#2  0x0000563896e7fda1 in qxl_realize_primary (dev=0x5638996e0e70,
-errp=0x7ffd3c2b81d0) at ../hw/display/qxl.c:2257
-#3  0x0000563896c7e8f2 in pci_qdev_realize (qdev=0x5638996e0e70,
-errp=0x7ffd3c2b8250) at ../hw/pci/pci.c:2174
-#4  0x00005638970eb54b in device_set_realized (obj=0x5638996e0e70,
-value=true, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:494
-#5  0x00005638970f5e14 in property_set_bool (obj=0x5638996e0e70,
-v=0x5638996f3770, name=0x56389759b141 "realized",
-opaque=0x5638987893d0, errp=0x7ffd3c2b84e0)
-     at ../qom/object.c:2374
-#6  0x00005638970f39f8 in object_property_set (obj=0x5638996e0e70,
-name=0x56389759b141 "realized", v=0x5638996f3770, errp=0x7ffd3c2b84e0)
-     at ../qom/object.c:1449
-#7  0x00005638970f8586 in object_property_set_qobject
-(obj=0x5638996e0e70, name=0x56389759b141 "realized",
-value=0x5638996df900, errp=0x7ffd3c2b84e0)
-     at ../qom/qom-qobject.c:28
-#8  0x00005638970f3d8d in object_property_set_bool
-(obj=0x5638996e0e70, name=0x56389759b141 "realized", value=true,
-errp=0x7ffd3c2b84e0)
-     at ../qom/object.c:1519
-#9  0x00005638970eacb0 in qdev_realize (dev=0x5638996e0e70,
-bus=0x563898cf3c20, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:276
-#10 0x0000563896dba675 in qdev_device_add_from_qdict
-(opts=0x5638996dfe50, from_json=false, errp=0x7ffd3c2b84e0) at
-../system/qdev-monitor.c:714
-#11 0x0000563896dba721 in qdev_device_add (opts=0x563898786150,
-errp=0x56389855dc40 <error_fatal>) at ../system/qdev-monitor.c:733
-#12 0x0000563896dc48f1 in device_init_func (opaque=0x0,
-opts=0x563898786150, errp=0x56389855dc40 <error_fatal>) at
-../system/vl.c:1207
-#13 0x000056389737a6cc in qemu_opts_foreach
-     (list=0x563898427b60 <qemu_device_opts>, func=0x563896dc48ca
-<device_init_func>, opaque=0x0, errp=0x56389855dc40 <error_fatal>)
-     at ../util/qemu-option.c:1135
-#14 0x0000563896dc89b5 in qemu_create_cli_devices () at
-../system/vl.c:2745
-#15 0x0000563896dc8c00 in qmp_x_exit_preconfig (errp=0x56389855dc40
-<error_fatal>) at ../system/vl.c:2806
-#16 0x0000563896dcb5de in qemu_init (argc=33, argv=0x7ffd3c2b8948)
-at ../system/vl.c:3838
-#17 0x0000563897297323 in main (argc=33, argv=0x7ffd3c2b8948) at
-../system/main.c:72
-So the attached adjusted version of your patch does seem to help.  At
-least I can't reproduce the crash on my stand.
-Thanks for the stack trace; the calls to SPICE_RING_INIT in
-init_qxl_ram are
-definitely harmful.  Try V2 of the patch, attached, which skips the lines
-of init_qxl_ram that modify guest memory.
-I'm wondering, could it be useful to explicitly mark all the reused
-memory regions readonly upon cpr-transfer, and then make them writable
-back again after the migration is done?  That way we will be segfaulting
-early on instead of debugging tricky memory corruptions.
-It's a useful debugging technique, but changing protection on a large
-memory region
-can be too expensive for production due to TLB shootdowns.
-Good point.  Though we could move this code under a non-default option to
-avoid re-writing.
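-
-For illustration, gating the read-only mapping behind such an option could be
-as small as this (the migrate_cpr_debug_ro() switch is purely hypothetical;
-nothing like it exists upstream):
-
-    /* qemu_ram_alloc_internal(), intended for debug builds/testing only */
-    if (reused && migrate_cpr_debug_ro()) {
-        ram_flags |= RAM_READONLY;
-    }
-    new_block = qemu_ram_alloc_from_fd(...);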
-
-Den
-
-On 3/5/25 11:19 PM, Steven Sistare wrote:
->
-On 3/5/2025 11:50 AM, Andrey Drobyshev wrote:
->
-> On 3/4/25 9:05 PM, Steven Sistare wrote:
->
->> On 2/28/2025 1:37 PM, Andrey Drobyshev wrote:
->
->>> On 2/28/25 8:35 PM, Andrey Drobyshev wrote:
->
->>>> On 2/28/25 8:20 PM, Steven Sistare wrote:
->
->>>>> On 2/28/2025 1:13 PM, Steven Sistare wrote:
->
->>>>>> On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
->
->>>>>>> Hi all,
->
->>>>>>>
->
->>>>>>> We've been experimenting with cpr-transfer migration mode recently
->
->>>>>>> and
->
->>>>>>> have discovered the following issue with the guest QXL driver:
->
->>>>>>>
->
->>>>>>> Run migration source:
->
->>>>>>>> EMULATOR=/path/to/emulator
->
->>>>>>>> ROOTFS=/path/to/image
->
->>>>>>>> QMPSOCK=/var/run/alma8qmp-src.sock
->
->>>>>>>>
->
->>>>>>>> $EMULATOR -enable-kvm \
->
->>>>>>>>        -machine q35 \
->
->>>>>>>>        -cpu host -smp 2 -m 2G \
->
->>>>>>>>        -object memory-backend-file,id=ram0,size=2G,mem-path=/
->
->>>>>>>> dev/shm/
->
->>>>>>>> ram0,share=on\
->
->>>>>>>>        -machine memory-backend=ram0 \
->
->>>>>>>>        -machine aux-ram-share=on \
->
->>>>>>>>        -drive file=$ROOTFS,media=disk,if=virtio \
->
->>>>>>>>        -qmp unix:$QMPSOCK,server=on,wait=off \
->
->>>>>>>>        -nographic \
->
->>>>>>>>        -device qxl-vga
->
->>>>>>>
->
->>>>>>> Run migration target:
->
->>>>>>>> EMULATOR=/path/to/emulator
->
->>>>>>>> ROOTFS=/path/to/image
->
->>>>>>>> QMPSOCK=/var/run/alma8qmp-dst.sock
->
->>>>>>>> $EMULATOR -enable-kvm \
->
->>>>>>>>        -machine q35 \
->
->>>>>>>>        -cpu host -smp 2 -m 2G \
->
->>>>>>>>        -object memory-backend-file,id=ram0,size=2G,mem-path=/
->
->>>>>>>> dev/shm/
->
->>>>>>>> ram0,share=on\
->
->>>>>>>>        -machine memory-backend=ram0 \
->
->>>>>>>>        -machine aux-ram-share=on \
->
->>>>>>>>        -drive file=$ROOTFS,media=disk,if=virtio \
->
->>>>>>>>        -qmp unix:$QMPSOCK,server=on,wait=off \
->
->>>>>>>>        -nographic \
->
->>>>>>>>        -device qxl-vga \
->
->>>>>>>>        -incoming tcp:0:44444 \
->
->>>>>>>>        -incoming '{"channel-type": "cpr", "addr": { "transport":
->
->>>>>>>> "socket", "type": "unix", "path": "/var/run/alma8cpr-dst.sock"}}'
->
->>>>>>>
->
->>>>>>>
->
->>>>>>> Launch the migration:
->
->>>>>>>> QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
->
->>>>>>>> QMPSOCK=/var/run/alma8qmp-src.sock
->
->>>>>>>>
->
->>>>>>>> $QMPSHELL -p $QMPSOCK <<EOF
->
->>>>>>>>        migrate-set-parameters mode=cpr-transfer
->
->>>>>>>>        migrate channels=[{"channel-type":"main","addr":
->
->>>>>>>> {"transport":"socket","type":"inet","host":"0","port":"44444"}},
->
->>>>>>>> {"channel-type":"cpr","addr":
->
->>>>>>>> {"transport":"socket","type":"unix","path":"/var/run/alma8cpr-
->
->>>>>>>> dst.sock"}}]
->
->>>>>>>> EOF
->
->>>>>>>
->
->>>>>>> Then, after a while, QXL guest driver on target crashes spewing the
->
->>>>>>> following messages:
->
->>>>>>>> [   73.962002] [TTM] Buffer eviction failed
->
->>>>>>>> [   73.962072] qxl 0000:00:02.0: object_init failed for (3149824,
->
->>>>>>>> 0x00000001)
->
->>>>>>>> [   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to
->
->>>>>>>> allocate VRAM BO
->
->>>>>>>
->
->>>>>>> That seems to be a known kernel QXL driver bug:
->
->>>>>>>
->
->>>>>>>
-https://lore.kernel.org/all/20220907094423.93581-1-
->
->>>>>>> min_halo@163.com/T/
->
->>>>>>>
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
->
->>>>>>>
->
->>>>>>> (the latter discussion contains that reproduce script which
->
->>>>>>> speeds up
->
->>>>>>> the crash in the guest):
->
->>>>>>>> #!/bin/bash
->
->>>>>>>>
->
->>>>>>>> chvt 3
->
->>>>>>>>
->
->>>>>>>> for j in $(seq 80); do
->
->>>>>>>>            echo "$(date) starting round $j"
->
->>>>>>>>            if [ "$(journalctl --boot | grep "failed to allocate
->
->>>>>>>> VRAM
->
->>>>>>>> BO")" != "" ]; then
->
->>>>>>>>                    echo "bug was reproduced after $j tries"
->
->>>>>>>>                    exit 1
->
->>>>>>>>            fi
->
->>>>>>>>            for i in $(seq 100); do
->
->>>>>>>>                    dmesg > /dev/tty3
->
->>>>>>>>            done
->
->>>>>>>> done
->
->>>>>>>>
->
->>>>>>>> echo "bug could not be reproduced"
->
->>>>>>>> exit 0
->
->>>>>>>
->
->>>>>>> The bug itself seems to remain unfixed, as I was able to reproduce
->
->>>>>>> that
->
->>>>>>> with Fedora 41 guest, as well as AlmaLinux 8 guest. However our
->
->>>>>>> cpr-transfer code also seems to be buggy as it triggers the crash -
->
->>>>>>> without the cpr-transfer migration the above reproduce doesn't
->
->>>>>>> lead to
->
->>>>>>> crash on the source VM.
->
->>>>>>>
->
->>>>>>> I suspect that, as cpr-transfer doesn't migrate the guest
->
->>>>>>> memory, but
->
->>>>>>> rather passes it through the memory backend object, our code might
->
->>>>>>> somehow corrupt the VRAM.  However, I wasn't able to trace the
->
->>>>>>> corruption so far.
->
->>>>>>>
->
->>>>>>> Could somebody help the investigation and take a look into
->
->>>>>>> this?  Any
->
->>>>>>> suggestions would be appreciated.  Thanks!
->
->>>>>>
->
->>>>>> Possibly some memory region created by qxl is not being preserved.
->
->>>>>> Try adding these traces to see what is preserved:
->
->>>>>>
->
->>>>>> -trace enable='*cpr*'
->
->>>>>> -trace enable='*ram_alloc*'
->
->>>>>
->
->>>>> Also try adding this patch to see if it flags any ram blocks as not
->
->>>>> compatible with cpr.  A message is printed at migration start time.
->
->>>>>
-https://lore.kernel.org/qemu-devel/1740667681-257312-1-git-send-
->
->>>>> email-
->
->>>>> steven.sistare@oracle.com/
->
->>>>>
->
->>>>> - Steve
->
->>>>>
->
->>>>
->
->>>> With the traces enabled + the "migration: ram block cpr blockers"
->
->>>> patch
->
->>>> applied:
->
->>>>
->
->>>> Source:
->
->>>>> cpr_find_fd pc.bios, id 0 returns -1
->
->>>>> cpr_save_fd pc.bios, id 0, fd 22
->
->>>>> qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 22 host
->
->>>>> 0x7fec18e00000
->
->>>>> cpr_find_fd pc.rom, id 0 returns -1
->
->>>>> cpr_save_fd pc.rom, id 0, fd 23
->
->>>>> qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 23 host
->
->>>>> 0x7fec18c00000
->
->>>>> cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns -1
->
->>>>> cpr_save_fd 0000:00:01.0/e1000e.rom, id 0, fd 24
->
->>>>> qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
->
->>>>> 262144 fd 24 host 0x7fec18a00000
->
->>>>> cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns -1
->
->>>>> cpr_save_fd 0000:00:02.0/vga.vram, id 0, fd 25
->
->>>>> qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
->
->>>>> 67108864 fd 25 host 0x7feb77e00000
->
->>>>> cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns -1
->
->>>>> cpr_save_fd 0000:00:02.0/qxl.vrom, id 0, fd 27
->
->>>>> qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
->
->>>>> fd 27 host 0x7fec18800000
->
->>>>> cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns -1
->
->>>>> cpr_save_fd 0000:00:02.0/qxl.vram, id 0, fd 28
->
->>>>> qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
->
->>>>> 67108864 fd 28 host 0x7feb73c00000
->
->>>>> cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns -1
->
->>>>> cpr_save_fd 0000:00:02.0/qxl.rom, id 0, fd 34
->
->>>>> qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
->
->>>>> fd 34 host 0x7fec18600000
->
->>>>> cpr_find_fd /rom@etc/acpi/tables, id 0 returns -1
->
->>>>> cpr_save_fd /rom@etc/acpi/tables, id 0, fd 35
->
->>>>> qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
->
->>>>> 2097152 fd 35 host 0x7fec18200000
->
->>>>> cpr_find_fd /rom@etc/table-loader, id 0 returns -1
->
->>>>> cpr_save_fd /rom@etc/table-loader, id 0, fd 36
->
->>>>> qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
->
->>>>> fd 36 host 0x7feb8b600000
->
->>>>> cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns -1
->
->>>>> cpr_save_fd /rom@etc/acpi/rsdp, id 0, fd 37
->
->>>>> qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
->
->>>>> 37 host 0x7feb8b400000
->
->>>>>
->
->>>>> cpr_state_save cpr-transfer mode
->
->>>>> cpr_transfer_output /var/run/alma8cpr-dst.sock
->
->>>>
->
->>>> Target:
->
->>>>> cpr_transfer_input /var/run/alma8cpr-dst.sock
->
->>>>> cpr_state_load cpr-transfer mode
->
->>>>> cpr_find_fd pc.bios, id 0 returns 20
->
->>>>> qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 20 host
->
->>>>> 0x7fcdc9800000
->
->>>>> cpr_find_fd pc.rom, id 0 returns 19
->
->>>>> qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 19 host
->
->>>>> 0x7fcdc9600000
->
->>>>> cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns 18
->
->>>>> qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
->
->>>>> 262144 fd 18 host 0x7fcdc9400000
->
->>>>> cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns 17
->
->>>>> qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
->
->>>>> 67108864 fd 17 host 0x7fcd27e00000
->
->>>>> cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns 16
->
->>>>> qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
->
->>>>> fd 16 host 0x7fcdc9200000
->
->>>>> cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns 15
->
->>>>> qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
->
->>>>> 67108864 fd 15 host 0x7fcd23c00000
->
->>>>> cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns 14
->
->>>>> qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
->
->>>>> fd 14 host 0x7fcdc8800000
->
->>>>> cpr_find_fd /rom@etc/acpi/tables, id 0 returns 13
->
->>>>> qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
->
->>>>> 2097152 fd 13 host 0x7fcdc8400000
->
->>>>> cpr_find_fd /rom@etc/table-loader, id 0 returns 11
->
->>>>> qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
->
->>>>> fd 11 host 0x7fcdc8200000
->
->>>>> cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns 10
->
->>>>> qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
->
->>>>> 10 host 0x7fcd3be00000
->
->>>>
->
->>>> Looks like both vga.vram and qxl.vram are being preserved (with the
->
->>>> same
->
->>>> addresses), and no incompatible ram blocks are found during migration.
->
->>>
->
->>> Sorry, addressed are not the same, of course.  However corresponding
->
->>> ram
->
->>> blocks do seem to be preserved and initialized.
->
->>
->
->> So far, I have not reproduced the guest driver failure.
->
->>
->
->> However, I have isolated places where new QEMU improperly writes to
->
->> the qxl memory regions prior to starting the guest, by mmap'ing them
->
->> readonly after cpr:
->
->>
->
->>    qemu_ram_alloc_internal()
->
->>      if (reused && (strstr(name, "qxl") || strstr(name, "vga")))
->
->>          ram_flags |= RAM_READONLY;
->
->>      new_block = qemu_ram_alloc_from_fd(...)
->
->>
->
->> I have attached a draft fix; try it and let me know.
->
->> My console window looks fine before and after cpr, using
->
->> -vnc $hostip:0 -vga qxl
->
->>
->
->> - Steve
->
->
->
->> Regarding the reproducer: when I launch the buggy version with the same
->> options as you, i.e. "-vnc 0.0.0.0:$port -vga qxl", and do cpr-transfer,
->> my VNC client silently hangs on the target after a while.  Could it
->> happen on your stand as well?
->
-> cpr does not preserve the vnc connection and session.  To test, I specify
-> port 0 for the source VM and port 1 for the dest.  When the src vnc goes
-> dormant the dest vnc becomes active.
-Sure, I meant that VNC on the dest (on port 1) works for a while
-after the migration and then hangs, apparently after the guest QXL
-driver crash.
-
->> Could you try launching VM with
->> "-nographic -device qxl-vga"?  That way VM's serial console is given you
->> directly in the shell, so when qxl driver crashes you're still able to
->> inspect the kernel messages.
->
-> I have been running like that, but have not reproduced the qxl driver
-> crash, and I suspect my guest image+kernel is too old.
-Yes, that's probably the case.  But the crash occurs on my Fedora 41
-guest with the 6.11.5-300.fc41.x86_64 kernel, so newer kernels seem to
-be buggy.
-
-> However, once I realized the issue was post-cpr modification of qxl memory,
-> I switched my attention to the fix.
->
->
-> As for your patch, I can report that it doesn't resolve the issue as it
->
-> is.  But I was able to track down another possible memory corruption
->
-> using your approach with readonly mmap'ing:
->
->
->
->> Program terminated with signal SIGSEGV, Segmentation fault.
->
->> #0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
->
->> 412         d->ram->magic       = cpu_to_le32(QXL_RAM_MAGIC);
->
->> [Current thread is 1 (Thread 0x7f1a4f83b480 (LWP 229798))]
->
->> (gdb) bt
->
->> #0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
->
->> #1  0x0000563896e7f467 in qxl_realize_common (qxl=0x5638996e0e70,
->
->> errp=0x7ffd3c2b8170) at ../hw/display/qxl.c:2142
->
->> #2  0x0000563896e7fda1 in qxl_realize_primary (dev=0x5638996e0e70,
->
->> errp=0x7ffd3c2b81d0) at ../hw/display/qxl.c:2257
->
->> #3  0x0000563896c7e8f2 in pci_qdev_realize (qdev=0x5638996e0e70,
->
->> errp=0x7ffd3c2b8250) at ../hw/pci/pci.c:2174
->
->> #4  0x00005638970eb54b in device_set_realized (obj=0x5638996e0e70,
->
->> value=true, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:494
->
->> #5  0x00005638970f5e14 in property_set_bool (obj=0x5638996e0e70,
->
->> v=0x5638996f3770, name=0x56389759b141 "realized",
->
->> opaque=0x5638987893d0, errp=0x7ffd3c2b84e0)
->
->>      at ../qom/object.c:2374
->
->> #6  0x00005638970f39f8 in object_property_set (obj=0x5638996e0e70,
->
->> name=0x56389759b141 "realized", v=0x5638996f3770, errp=0x7ffd3c2b84e0)
->
->>      at ../qom/object.c:1449
->
->> #7  0x00005638970f8586 in object_property_set_qobject
->
->> (obj=0x5638996e0e70, name=0x56389759b141 "realized",
->
->> value=0x5638996df900, errp=0x7ffd3c2b84e0)
->
->>      at ../qom/qom-qobject.c:28
->
->> #8  0x00005638970f3d8d in object_property_set_bool
->
->> (obj=0x5638996e0e70, name=0x56389759b141 "realized", value=true,
->
->> errp=0x7ffd3c2b84e0)
->
->>      at ../qom/object.c:1519
->
->> #9  0x00005638970eacb0 in qdev_realize (dev=0x5638996e0e70,
->
->> bus=0x563898cf3c20, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:276
->
->> #10 0x0000563896dba675 in qdev_device_add_from_qdict
->
->> (opts=0x5638996dfe50, from_json=false, errp=0x7ffd3c2b84e0) at ../
->
->> system/qdev-monitor.c:714
->
->> #11 0x0000563896dba721 in qdev_device_add (opts=0x563898786150,
->
->> errp=0x56389855dc40 <error_fatal>) at ../system/qdev-monitor.c:733
->
->> #12 0x0000563896dc48f1 in device_init_func (opaque=0x0,
->
->> opts=0x563898786150, errp=0x56389855dc40 <error_fatal>) at ../system/
->
->> vl.c:1207
->
->> #13 0x000056389737a6cc in qemu_opts_foreach
->
->>      (list=0x563898427b60 <qemu_device_opts>, func=0x563896dc48ca
->
->> <device_init_func>, opaque=0x0, errp=0x56389855dc40 <error_fatal>)
->
->>      at ../util/qemu-option.c:1135
->
->> #14 0x0000563896dc89b5 in qemu_create_cli_devices () at ../system/
->
->> vl.c:2745
->
->> #15 0x0000563896dc8c00 in qmp_x_exit_preconfig (errp=0x56389855dc40
->
->> <error_fatal>) at ../system/vl.c:2806
->
->> #16 0x0000563896dcb5de in qemu_init (argc=33, argv=0x7ffd3c2b8948)
->
->> at ../system/vl.c:3838
->
->> #17 0x0000563897297323 in main (argc=33, argv=0x7ffd3c2b8948) at ../
->
->> system/main.c:72
->
->>
->
->> So the attached adjusted version of your patch does seem to help.  At
->> least I can't reproduce the crash on my stand.
->
-> Thanks for the stack trace; the calls to SPICE_RING_INIT in init_qxl_ram are
-> definitely harmful.  Try V2 of the patch, attached, which skips the lines
-> of init_qxl_ram that modify guest memory.
-Thanks, your v2 patch does seem to prevent the crash.  Would you re-send
-it to the list as a proper fix?
-
->> I'm wondering, could it be useful to explicitly mark all the reused
->> memory regions readonly upon cpr-transfer, and then make them writable
->> back again after the migration is done?  That way we will be segfaulting
->> early on instead of debugging tricky memory corruptions.
->
-> It's a useful debugging technique, but changing protection on a large
-> memory region can be too expensive for production due to TLB shootdowns.
->
-> Also, there are cases where writes are performed but the value is
-> guaranteed to be the same:
->   qxl_post_load()
->     qxl_set_mode()
->       d->rom->mode = cpu_to_le32(modenr);
-> The value is the same because mode and shadow_rom.mode were passed in
-> vmstate from old qemu.
-There are also cases where a device's ROM might be re-initialized.  E.g.
-this segfault occurs upon further exploration of RO-mapped RAM blocks:
-
-> Program terminated with signal SIGSEGV, Segmentation fault.
-> #0  __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:664
-> 664             rep     movsb
-> [Current thread is 1 (Thread 0x7f6e7d08b480 (LWP 310379))]
-> (gdb) bt
-> #0  __memmove_avx_unaligned_erms () at ../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:664
-> #1  0x000055aa1d030ecd in rom_set_mr (rom=0x55aa200ba380, owner=0x55aa2019ac10, name=0x7fffb8272bc0 "/rom@etc/acpi/tables", ro=true) at ../hw/core/loader.c:1032
-> #2  0x000055aa1d031577 in rom_add_blob (name=0x55aa1da51f13 "etc/acpi/tables", blob=0x55aa208a1070, len=131072, max_len=2097152, addr=18446744073709551615, fw_file_name=0x55aa1da51f13 "etc/acpi/tables", fw_callback=0x55aa1d441f59 <acpi_build_update>, callback_opaque=0x55aa20ff0010, as=0x0, read_only=true) at ../hw/core/loader.c:1147
-> #3  0x000055aa1cfd788d in acpi_add_rom_blob (update=0x55aa1d441f59 <acpi_build_update>, opaque=0x55aa20ff0010, blob=0x55aa1fc9aa00, name=0x55aa1da51f13 "etc/acpi/tables") at ../hw/acpi/utils.c:46
-> #4  0x000055aa1d44213f in acpi_setup () at ../hw/i386/acpi-build.c:2720
-> #5  0x000055aa1d434199 in pc_machine_done (notifier=0x55aa1ff15050, data=0x0) at ../hw/i386/pc.c:638
-> #6  0x000055aa1d876845 in notifier_list_notify (list=0x55aa1ea25c10 <machine_init_done_notifiers>, data=0x0) at ../util/notify.c:39
-> #7  0x000055aa1d039ee5 in qdev_machine_creation_done () at ../hw/core/machine.c:1749
-> #8  0x000055aa1d2c7b3e in qemu_machine_creation_done (errp=0x55aa1ea5cc40 <error_fatal>) at ../system/vl.c:2779
-> #9  0x000055aa1d2c7c7d in qmp_x_exit_preconfig (errp=0x55aa1ea5cc40 <error_fatal>) at ../system/vl.c:2807
-> #10 0x000055aa1d2ca64f in qemu_init (argc=35, argv=0x7fffb82730e8) at ../system/vl.c:3838
-> #11 0x000055aa1d79638c in main (argc=35, argv=0x7fffb82730e8) at ../system/main.c:72
-I'm not sure whether the ACPI tables ROM in particular is rewritten with the
-same content, but there might be cases where a ROM is read from the file
-system upon initialization.  That is undesirable, as the guest kernel
-certainly won't be too happy about a sudden change of the device's ROM
-content.
-
-So the issue we're dealing with here is any unwanted memory-related
-device initialization upon cpr.
-
-For now, the only thing that comes to mind is to make a test where we
-put as many devices as we can into a VM, make ram blocks RO upon cpr
-(and remap them as RW later after migration is done, if needed), and
-catch any unwanted memory violations.  As Den suggested, we might
-consider adding that behaviour as a separate non-default option (or a
-"migrate" command flag specific to cpr-transfer) which would only be
-used in testing.
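-
-As an illustration of the "catch violations" part, a debug-only harness
-could install something like the handler below before enabling the RO
-mappings (sketch only; a real harness would translate si_addr back to the
-offending RAM block name):
-
-    #include <signal.h>
-    #include <stdio.h>
-    #include <stdlib.h>
-    #include <string.h>
-
-    /* Not async-signal-safe, but good enough for a test that is expected
-     * to abort on the first unwanted write to a cpr-protected region. */
-    static void ro_violation(int sig, siginfo_t *si, void *ctx)
-    {
-        (void)sig;
-        (void)ctx;
-        fprintf(stderr, "unexpected write to cpr-protected memory at %p\n",
-                si->si_addr);
-        abort();
-    }
-
-    static void install_ro_violation_handler(void)
-    {
-        struct sigaction sa;
-
-        memset(&sa, 0, sizeof(sa));
-        sa.sa_sigaction = ro_violation;
-        sa.sa_flags = SA_SIGINFO;
-        sigaction(SIGSEGV, &sa, NULL);
-    }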
-
-Andrey
-
-On 3/6/25 16:16, Andrey Drobyshev wrote:
-On 3/5/25 11:19 PM, Steven Sistare wrote:
-On 3/5/2025 11:50 AM, Andrey Drobyshev wrote:
-On 3/4/25 9:05 PM, Steven Sistare wrote:
-On 2/28/2025 1:37 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:35 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:20 PM, Steven Sistare wrote:
-On 2/28/2025 1:13 PM, Steven Sistare wrote:
-On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
-Hi all,
-
-We've been experimenting with cpr-transfer migration mode recently
-and
-have discovered the following issue with the guest QXL driver:
-
-Run migration source:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$EMULATOR -enable-kvm \
-        -machine q35 \
-        -cpu host -smp 2 -m 2G \
-        -object memory-backend-file,id=ram0,size=2G,mem-path=/
-dev/shm/
-ram0,share=on\
-        -machine memory-backend=ram0 \
-        -machine aux-ram-share=on \
-        -drive file=$ROOTFS,media=disk,if=virtio \
-        -qmp unix:$QMPSOCK,server=on,wait=off \
-        -nographic \
-        -device qxl-vga
-Run migration target:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-dst.sock
-$EMULATOR -enable-kvm \
-        -machine q35 \
-        -cpu host -smp 2 -m 2G \
-        -object memory-backend-file,id=ram0,size=2G,mem-path=/
-dev/shm/
-ram0,share=on\
-        -machine memory-backend=ram0 \
-        -machine aux-ram-share=on \
-        -drive file=$ROOTFS,media=disk,if=virtio \
-        -qmp unix:$QMPSOCK,server=on,wait=off \
-        -nographic \
-        -device qxl-vga \
-        -incoming tcp:0:44444 \
-        -incoming '{"channel-type": "cpr", "addr": { "transport":
-"socket", "type": "unix", "path": "/var/run/alma8cpr-dst.sock"}}'
-Launch the migration:
-QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$QMPSHELL -p $QMPSOCK <<EOF
-        migrate-set-parameters mode=cpr-transfer
-        migrate channels=[{"channel-type":"main","addr":
-{"transport":"socket","type":"inet","host":"0","port":"44444"}},
-{"channel-type":"cpr","addr":
-{"transport":"socket","type":"unix","path":"/var/run/alma8cpr-
-dst.sock"}}]
-EOF
-Then, after a while, QXL guest driver on target crashes spewing the
-following messages:
-[   73.962002] [TTM] Buffer eviction failed
-[   73.962072] qxl 0000:00:02.0: object_init failed for (3149824,
-0x00000001)
-[   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to
-allocate VRAM BO
-That seems to be a known kernel QXL driver bug:
-https://lore.kernel.org/all/20220907094423.93581-1-
-min_halo@163.com/T/
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
-(the latter discussion contains a reproducer script which speeds up
-the crash in the guest):
-#!/bin/bash
-
-chvt 3
-
-for j in $(seq 80); do
-            echo "$(date) starting round $j"
-            if [ "$(journalctl --boot | grep "failed to allocate
-VRAM
-BO")" != "" ]; then
-                    echo "bug was reproduced after $j tries"
-                    exit 1
-            fi
-            for i in $(seq 100); do
-                    dmesg > /dev/tty3
-            done
-done
-
-echo "bug could not be reproduced"
-exit 0
-The bug itself seems to remain unfixed, as I was able to reproduce it
-with a Fedora 41 guest as well as an AlmaLinux 8 guest.  However, our
-cpr-transfer code also seems to be buggy, as it triggers the crash:
-without the cpr-transfer migration the above reproducer doesn't lead to
-a crash on the source VM.
-
-I suspect that, as cpr-transfer doesn't migrate the guest
-memory, but
-rather passes it through the memory backend object, our code might
-somehow corrupt the VRAM.  However, I wasn't able to trace the
-corruption so far.
-
-Could somebody help the investigation and take a look into
-this?  Any
-suggestions would be appreciated.  Thanks!
-Possibly some memory region created by qxl is not being preserved.
-Try adding these traces to see what is preserved:
-
--trace enable='*cpr*'
--trace enable='*ram_alloc*'
-Also try adding this patch to see if it flags any ram blocks as not
-compatible with cpr.  A message is printed at migration start time.
-https://lore.kernel.org/qemu-devel/1740667681-257312-1-git-send-
-email-
-steven.sistare@oracle.com/
-
-- Steve
-With the traces enabled + the "migration: ram block cpr blockers"
-patch
-applied:
-
-Source:
-cpr_find_fd pc.bios, id 0 returns -1
-cpr_save_fd pc.bios, id 0, fd 22
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 22 host
-0x7fec18e00000
-cpr_find_fd pc.rom, id 0 returns -1
-cpr_save_fd pc.rom, id 0, fd 23
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 23 host
-0x7fec18c00000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns -1
-cpr_save_fd 0000:00:01.0/e1000e.rom, id 0, fd 24
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
-262144 fd 24 host 0x7fec18a00000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/vga.vram, id 0, fd 25
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
-67108864 fd 25 host 0x7feb77e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vrom, id 0, fd 27
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
-fd 27 host 0x7fec18800000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vram, id 0, fd 28
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
-67108864 fd 28 host 0x7feb73c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.rom, id 0, fd 34
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
-fd 34 host 0x7fec18600000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/tables, id 0, fd 35
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
-2097152 fd 35 host 0x7fec18200000
-cpr_find_fd /rom@etc/table-loader, id 0 returns -1
-cpr_save_fd /rom@etc/table-loader, id 0, fd 36
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
-fd 36 host 0x7feb8b600000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/rsdp, id 0, fd 37
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
-37 host 0x7feb8b400000
-
-cpr_state_save cpr-transfer mode
-cpr_transfer_output /var/run/alma8cpr-dst.sock
-Target:
-cpr_transfer_input /var/run/alma8cpr-dst.sock
-cpr_state_load cpr-transfer mode
-cpr_find_fd pc.bios, id 0 returns 20
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 20 host
-0x7fcdc9800000
-cpr_find_fd pc.rom, id 0 returns 19
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 19 host
-0x7fcdc9600000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns 18
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
-262144 fd 18 host 0x7fcdc9400000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns 17
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
-67108864 fd 17 host 0x7fcd27e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns 16
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
-fd 16 host 0x7fcdc9200000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns 15
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
-67108864 fd 15 host 0x7fcd23c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns 14
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
-fd 14 host 0x7fcdc8800000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns 13
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
-2097152 fd 13 host 0x7fcdc8400000
-cpr_find_fd /rom@etc/table-loader, id 0 returns 11
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
-fd 11 host 0x7fcdc8200000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns 10
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
-10 host 0x7fcd3be00000
-Looks like both vga.vram and qxl.vram are being preserved (with the
-same
-addresses), and no incompatible ram blocks are found during migration.
-Sorry, the addresses are not the same, of course.  However, the corresponding
-ram blocks do seem to be preserved and initialized.
-So far, I have not reproduced the guest driver failure.
-
-However, I have isolated places where new QEMU improperly writes to
-the qxl memory regions prior to starting the guest, by mmap'ing them
-readonly after cpr:
-
-    qemu_ram_alloc_internal()
-      if (reused && (strstr(name, "qxl") || strstr(name, "vga")))
-          ram_flags |= RAM_READONLY;
-      new_block = qemu_ram_alloc_from_fd(...)
-
-I have attached a draft fix; try it and let me know.
-My console window looks fine before and after cpr, using
--vnc $hostip:0 -vga qxl
-
-- Steve
-Regarding the reproducer: when I launch the buggy version with the same
-options as you, i.e. "-vnc 0.0.0.0:$port -vga qxl", and do cpr-transfer,
-my VNC client silently hangs on the target after a while.  Could it
-happen on your stand as well?
-cpr does not preserve the vnc connection and session.  To test, I specify
-port 0 for the source VM and port 1 for the dest.  When the src vnc goes
-dormant the dest vnc becomes active.
-Sure, I meant that VNC on the dest (on port 1) works for a while
-after the migration and then hangs, apparently after the guest QXL driver crash.
-Could you try launching VM with
-"-nographic -device qxl-vga"?  That way VM's serial console is given you
-directly in the shell, so when qxl driver crashes you're still able to
-inspect the kernel messages.
-I have been running like that, but have not reproduced the qxl driver
-crash,
-and I suspect my guest image+kernel is too old.
-Yes, that's probably the case.  But the crash occurs on my Fedora 41
-guest with the 6.11.5-300.fc41.x86_64 kernel, so newer kernels seem to
-be buggy.
-However, once I realized the
-issue was post-cpr modification of qxl memory, I switched my attention
-to the
-fix.
-As for your patch, I can report that it doesn't resolve the issue as it
-is.  But I was able to track down another possible memory corruption
-using your approach with readonly mmap'ing:
-Program terminated with signal SIGSEGV, Segmentation fault.
-#0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-412         d->ram->magic       = cpu_to_le32(QXL_RAM_MAGIC);
-[Current thread is 1 (Thread 0x7f1a4f83b480 (LWP 229798))]
-(gdb) bt
-#0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-#1  0x0000563896e7f467 in qxl_realize_common (qxl=0x5638996e0e70,
-errp=0x7ffd3c2b8170) at ../hw/display/qxl.c:2142
-#2  0x0000563896e7fda1 in qxl_realize_primary (dev=0x5638996e0e70,
-errp=0x7ffd3c2b81d0) at ../hw/display/qxl.c:2257
-#3  0x0000563896c7e8f2 in pci_qdev_realize (qdev=0x5638996e0e70,
-errp=0x7ffd3c2b8250) at ../hw/pci/pci.c:2174
-#4  0x00005638970eb54b in device_set_realized (obj=0x5638996e0e70,
-value=true, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:494
-#5  0x00005638970f5e14 in property_set_bool (obj=0x5638996e0e70,
-v=0x5638996f3770, name=0x56389759b141 "realized",
-opaque=0x5638987893d0, errp=0x7ffd3c2b84e0)
-      at ../qom/object.c:2374
-#6  0x00005638970f39f8 in object_property_set (obj=0x5638996e0e70,
-name=0x56389759b141 "realized", v=0x5638996f3770, errp=0x7ffd3c2b84e0)
-      at ../qom/object.c:1449
-#7  0x00005638970f8586 in object_property_set_qobject
-(obj=0x5638996e0e70, name=0x56389759b141 "realized",
-value=0x5638996df900, errp=0x7ffd3c2b84e0)
-      at ../qom/qom-qobject.c:28
-#8  0x00005638970f3d8d in object_property_set_bool
-(obj=0x5638996e0e70, name=0x56389759b141 "realized", value=true,
-errp=0x7ffd3c2b84e0)
-      at ../qom/object.c:1519
-#9  0x00005638970eacb0 in qdev_realize (dev=0x5638996e0e70,
-bus=0x563898cf3c20, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:276
-#10 0x0000563896dba675 in qdev_device_add_from_qdict
-(opts=0x5638996dfe50, from_json=false, errp=0x7ffd3c2b84e0) at ../
-system/qdev-monitor.c:714
-#11 0x0000563896dba721 in qdev_device_add (opts=0x563898786150,
-errp=0x56389855dc40 <error_fatal>) at ../system/qdev-monitor.c:733
-#12 0x0000563896dc48f1 in device_init_func (opaque=0x0,
-opts=0x563898786150, errp=0x56389855dc40 <error_fatal>) at ../system/
-vl.c:1207
-#13 0x000056389737a6cc in qemu_opts_foreach
-      (list=0x563898427b60 <qemu_device_opts>, func=0x563896dc48ca
-<device_init_func>, opaque=0x0, errp=0x56389855dc40 <error_fatal>)
-      at ../util/qemu-option.c:1135
-#14 0x0000563896dc89b5 in qemu_create_cli_devices () at ../system/
-vl.c:2745
-#15 0x0000563896dc8c00 in qmp_x_exit_preconfig (errp=0x56389855dc40
-<error_fatal>) at ../system/vl.c:2806
-#16 0x0000563896dcb5de in qemu_init (argc=33, argv=0x7ffd3c2b8948)
-at ../system/vl.c:3838
-#17 0x0000563897297323 in main (argc=33, argv=0x7ffd3c2b8948) at ../
-system/main.c:72
-So the attached adjusted version of your patch does seem to help.  At
-least I can't reproduce the crash on my stand.
-Thanks for the stack trace; the calls to SPICE_RING_INIT in init_qxl_ram
-are
-definitely harmful.  Try V2 of the patch, attached, which skips the lines
-of init_qxl_ram that modify guest memory.
-Thanks, your v2 patch does seem to prevent the crash.  Would you re-send
-it to the list as a proper fix?
-I'm wondering, could it be useful to explicitly mark all the reused
-memory regions readonly upon cpr-transfer, and then make them writable
-back again after the migration is done?  That way we will be segfaulting
-early on instead of debugging tricky memory corruptions.
-It's a useful debugging technique, but changing protection on a large
-memory region
-can be too expensive for production due to TLB shootdowns.
-
-Also, there are cases where writes are performed but the value is
-guaranteed to
-be the same:
-   qxl_post_load()
-     qxl_set_mode()
-       d->rom->mode = cpu_to_le32(modenr);
-The value is the same because mode and shadow_rom.mode were passed in
-vmstate
-from old qemu.
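-(For illustration, the kind of guard that would silence such a write; the
-cpr_incoming() name below is only a placeholder for "this mapping was
-inherited via cpr", not necessarily a real helper:)
-
-    /* hypothetical: skip rewriting guest-visible memory that the new QEMU
-       inherited from the old one -- the guest already sees the right value */
-    if (!cpr_incoming()) {
-        d->rom->mode = cpu_to_le32(modenr);
-    }
-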
-There're also cases where devices' ROM might be re-initialized.  E.g.
-this segfault occurs upon further exploration of RO-mapped RAM blocks:
-Program terminated with signal SIGSEGV, Segmentation fault.
-#0  __memmove_avx_unaligned_erms () at 
-../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:664
-664             rep     movsb
-[Current thread is 1 (Thread 0x7f6e7d08b480 (LWP 310379))]
-(gdb) bt
-#0  __memmove_avx_unaligned_erms () at 
-../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:664
-#1  0x000055aa1d030ecd in rom_set_mr (rom=0x55aa200ba380, owner=0x55aa2019ac10, 
-name=0x7fffb8272bc0 "/rom@etc/acpi/tables", ro=true)
-     at ../hw/core/loader.c:1032
-#2  0x000055aa1d031577 in rom_add_blob
-     (name=0x55aa1da51f13 "etc/acpi/tables", blob=0x55aa208a1070, len=131072, max_len=2097152, 
-addr=18446744073709551615, fw_file_name=0x55aa1da51f13 "etc/acpi/tables", 
-fw_callback=0x55aa1d441f59 <acpi_build_update>, callback_opaque=0x55aa20ff0010, as=0x0, 
-read_only=true) at ../hw/core/loader.c:1147
-#3  0x000055aa1cfd788d in acpi_add_rom_blob
-     (update=0x55aa1d441f59 <acpi_build_update>, opaque=0x55aa20ff0010, 
-blob=0x55aa1fc9aa00, name=0x55aa1da51f13 "etc/acpi/tables") at ../hw/acpi/utils.c:46
-#4  0x000055aa1d44213f in acpi_setup () at ../hw/i386/acpi-build.c:2720
-#5  0x000055aa1d434199 in pc_machine_done (notifier=0x55aa1ff15050, data=0x0) 
-at ../hw/i386/pc.c:638
-#6  0x000055aa1d876845 in notifier_list_notify (list=0x55aa1ea25c10 
-<machine_init_done_notifiers>, data=0x0) at ../util/notify.c:39
-#7  0x000055aa1d039ee5 in qdev_machine_creation_done () at 
-../hw/core/machine.c:1749
-#8  0x000055aa1d2c7b3e in qemu_machine_creation_done (errp=0x55aa1ea5cc40 
-<error_fatal>) at ../system/vl.c:2779
-#9  0x000055aa1d2c7c7d in qmp_x_exit_preconfig (errp=0x55aa1ea5cc40 
-<error_fatal>) at ../system/vl.c:2807
-#10 0x000055aa1d2ca64f in qemu_init (argc=35, argv=0x7fffb82730e8) at 
-../system/vl.c:3838
-#11 0x000055aa1d79638c in main (argc=35, argv=0x7fffb82730e8) at 
-../system/main.c:72
-I'm not sure whether ACPI tables ROM in particular is rewritten with the
-same content, but there might be cases where ROM can be read from file
-system upon initialization.  That is undesirable as guest kernel
-certainly won't be too happy about sudden change of the device's ROM
-content.
-
-So the issue we're dealing with here is any unwanted memory related
-device initialization upon cpr.
-
-For now the only thing that comes to my mind is to make a test where we
-put as many devices as we can into a VM, make ram blocks RO upon cpr
-(and remap them as RW later after migration is done, if needed), and
-catch any unwanted memory violations.  As Den suggested, we might
-consider adding that behaviour as a separate non-default option (or
-"migrate" command flag specific to cpr-transfer), which would only be
-used in the testing.
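-(Roughly, the debug-only toggle I have in mind is just an mprotect() round
-trip on each reused mapping; the helper below is an illustration, not
-existing QEMU code -- "host_ptr"/"len" stand for a preserved RAM block's
-host mapping:)
-
-    #include <sys/mman.h>
-    #include <stdbool.h>
-
-    /* Debug-only: trap stray writes into a preserved region after cpr load. */
-    static int cpr_debug_protect(void *host_ptr, size_t len, bool readonly)
-    {
-        /* host_ptr and len are page-aligned, same as the original mapping */
-        return mprotect(host_ptr, len,
-                        readonly ? PROT_READ : PROT_READ | PROT_WRITE);
-    }
-
-    /* mark RO right after the inherited fds are mapped, flip back to RW once
-       migration completes; any unexpected write then segfaults immediately */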
-
-Andrey
-No way. ACPI with the source must be used in the same way as BIOSes
-and optional ROMs.
-
-Den
-
-On 3/6/2025 10:52 AM, Denis V. Lunev wrote:
-On 3/6/25 16:16, Andrey Drobyshev wrote:
-On 3/5/25 11:19 PM, Steven Sistare wrote:
-On 3/5/2025 11:50 AM, Andrey Drobyshev wrote:
-On 3/4/25 9:05 PM, Steven Sistare wrote:
-On 2/28/2025 1:37 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:35 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:20 PM, Steven Sistare wrote:
-On 2/28/2025 1:13 PM, Steven Sistare wrote:
-On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
-Hi all,
-
-We've been experimenting with cpr-transfer migration mode recently
-and
-have discovered the following issue with the guest QXL driver:
-
-Run migration source:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$EMULATOR -enable-kvm \
-        -machine q35 \
-        -cpu host -smp 2 -m 2G \
-        -object memory-backend-file,id=ram0,size=2G,mem-path=/
-dev/shm/
-ram0,share=on\
-        -machine memory-backend=ram0 \
-        -machine aux-ram-share=on \
-        -drive file=$ROOTFS,media=disk,if=virtio \
-        -qmp unix:$QMPSOCK,server=on,wait=off \
-        -nographic \
-        -device qxl-vga
-Run migration target:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-dst.sock
-$EMULATOR -enable-kvm \
-        -machine q35 \
-        -cpu host -smp 2 -m 2G \
-        -object memory-backend-file,id=ram0,size=2G,mem-path=/
-dev/shm/
-ram0,share=on\
-        -machine memory-backend=ram0 \
-        -machine aux-ram-share=on \
-        -drive file=$ROOTFS,media=disk,if=virtio \
-        -qmp unix:$QMPSOCK,server=on,wait=off \
-        -nographic \
-        -device qxl-vga \
-        -incoming tcp:0:44444 \
-        -incoming '{"channel-type": "cpr", "addr": { "transport":
-"socket", "type": "unix", "path": "/var/run/alma8cpr-dst.sock"}}'
-Launch the migration:
-QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$QMPSHELL -p $QMPSOCK <<EOF
-        migrate-set-parameters mode=cpr-transfer
-        migrate channels=[{"channel-type":"main","addr":
-{"transport":"socket","type":"inet","host":"0","port":"44444"}},
-{"channel-type":"cpr","addr":
-{"transport":"socket","type":"unix","path":"/var/run/alma8cpr-
-dst.sock"}}]
-EOF
-Then, after a while, QXL guest driver on target crashes spewing the
-following messages:
-[   73.962002] [TTM] Buffer eviction failed
-[   73.962072] qxl 0000:00:02.0: object_init failed for (3149824,
-0x00000001)
-[   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to
-allocate VRAM BO
-That seems to be a known kernel QXL driver bug:
-https://lore.kernel.org/all/20220907094423.93581-1-
-min_halo@163.com/T/
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
-(the latter discussion contains a reproducer script which speeds up
-the crash in the guest):
-#!/bin/bash
-
-chvt 3
-
-for j in $(seq 80); do
-            echo "$(date) starting round $j"
-            if [ "$(journalctl --boot | grep "failed to allocate
-VRAM
-BO")" != "" ]; then
-                    echo "bug was reproduced after $j tries"
-                    exit 1
-            fi
-            for i in $(seq 100); do
-                    dmesg > /dev/tty3
-            done
-done
-
-echo "bug could not be reproduced"
-exit 0
-The bug itself seems to remain unfixed, as I was able to reproduce
-that
-with Fedora 41 guest, as well as AlmaLinux 8 guest. However our
-cpr-transfer code also seems to be buggy as it triggers the crash -
-without the cpr-transfer migration the above reproducer doesn't
-lead to a crash on the source VM.
-
-I suspect that, as cpr-transfer doesn't migrate the guest
-memory, but
-rather passes it through the memory backend object, our code might
-somehow corrupt the VRAM.  However, I wasn't able to trace the
-corruption so far.
-
-Could somebody help the investigation and take a look into
-this?  Any
-suggestions would be appreciated.  Thanks!
-Possibly some memory region created by qxl is not being preserved.
-Try adding these traces to see what is preserved:
-
--trace enable='*cpr*'
--trace enable='*ram_alloc*'
-Also try adding this patch to see if it flags any ram blocks as not
-compatible with cpr.  A message is printed at migration start time.
-https://lore.kernel.org/qemu-devel/1740667681-257312-1-git-send-
-email-
-steven.sistare@oracle.com/
-
-- Steve
-With the traces enabled + the "migration: ram block cpr blockers"
-patch
-applied:
-
-Source:
-cpr_find_fd pc.bios, id 0 returns -1
-cpr_save_fd pc.bios, id 0, fd 22
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 22 host
-0x7fec18e00000
-cpr_find_fd pc.rom, id 0 returns -1
-cpr_save_fd pc.rom, id 0, fd 23
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 23 host
-0x7fec18c00000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns -1
-cpr_save_fd 0000:00:01.0/e1000e.rom, id 0, fd 24
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
-262144 fd 24 host 0x7fec18a00000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/vga.vram, id 0, fd 25
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
-67108864 fd 25 host 0x7feb77e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vrom, id 0, fd 27
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
-fd 27 host 0x7fec18800000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vram, id 0, fd 28
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
-67108864 fd 28 host 0x7feb73c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.rom, id 0, fd 34
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
-fd 34 host 0x7fec18600000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/tables, id 0, fd 35
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
-2097152 fd 35 host 0x7fec18200000
-cpr_find_fd /rom@etc/table-loader, id 0 returns -1
-cpr_save_fd /rom@etc/table-loader, id 0, fd 36
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
-fd 36 host 0x7feb8b600000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/rsdp, id 0, fd 37
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
-37 host 0x7feb8b400000
-
-cpr_state_save cpr-transfer mode
-cpr_transfer_output /var/run/alma8cpr-dst.sock
-Target:
-cpr_transfer_input /var/run/alma8cpr-dst.sock
-cpr_state_load cpr-transfer mode
-cpr_find_fd pc.bios, id 0 returns 20
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 20 host
-0x7fcdc9800000
-cpr_find_fd pc.rom, id 0 returns 19
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 19 host
-0x7fcdc9600000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns 18
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
-262144 fd 18 host 0x7fcdc9400000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns 17
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
-67108864 fd 17 host 0x7fcd27e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns 16
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
-fd 16 host 0x7fcdc9200000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns 15
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
-67108864 fd 15 host 0x7fcd23c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns 14
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
-fd 14 host 0x7fcdc8800000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns 13
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
-2097152 fd 13 host 0x7fcdc8400000
-cpr_find_fd /rom@etc/table-loader, id 0 returns 11
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
-fd 11 host 0x7fcdc8200000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns 10
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
-10 host 0x7fcd3be00000
-Looks like both vga.vram and qxl.vram are being preserved (with the
-same
-addresses), and no incompatible ram blocks are found during migration.
-Sorry, addresses are not the same, of course.  However corresponding
-ram
-blocks do seem to be preserved and initialized.
-So far, I have not reproduced the guest driver failure.
-
-However, I have isolated places where new QEMU improperly writes to
-the qxl memory regions prior to starting the guest, by mmap'ing them
-readonly after cpr:
-
-    qemu_ram_alloc_internal()
-      if (reused && (strstr(name, "qxl") || strstr(name, "vga")))
-          ram_flags |= RAM_READONLY;
-      new_block = qemu_ram_alloc_from_fd(...)
-
-I have attached a draft fix; try it and let me know.
-My console window looks fine before and after cpr, using
--vnc $hostip:0 -vga qxl
-
-- Steve
-Regarding the reproducer: when I launch the buggy version with the same
-options as you, i.e. "-vnc 0.0.0.0:$port -vga qxl", and do cpr-transfer,
-my VNC client silently hangs on the target after a while.  Could it
-happen on your stand as well?
-cpr does not preserve the vnc connection and session.  To test, I specify
-port 0 for the source VM and port 1 for the dest.  When the src vnc goes
-dormant the dest vnc becomes active.
-Sure, I meant that VNC on the dest (on the port 1) works for a while
-after the migration and then hangs, apparently after the guest QXL crash.
-Could you try launching the VM with
-"-nographic -device qxl-vga"?  That way the VM's serial console is given to you
-directly in the shell, so when the qxl driver crashes you're still able to
-inspect the kernel messages.
-I have been running like that, but have not reproduced the qxl driver
-crash,
-and I suspect my guest image+kernel is too old.
-Yes, that's probably the case.  But the crash occurs on my Fedora 41
-guest with the 6.11.5-300.fc41.x86_64 kernel, so newer kernels seem to
-be buggy.
-However, once I realized the
-issue was post-cpr modification of qxl memory, I switched my attention
-to the
-fix.
-As for your patch, I can report that it doesn't resolve the issue as it
-is.  But I was able to track down another possible memory corruption
-using your approach with readonly mmap'ing:
-Program terminated with signal SIGSEGV, Segmentation fault.
-#0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-412         d->ram->magic       = cpu_to_le32(QXL_RAM_MAGIC);
-[Current thread is 1 (Thread 0x7f1a4f83b480 (LWP 229798))]
-(gdb) bt
-#0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-#1  0x0000563896e7f467 in qxl_realize_common (qxl=0x5638996e0e70,
-errp=0x7ffd3c2b8170) at ../hw/display/qxl.c:2142
-#2  0x0000563896e7fda1 in qxl_realize_primary (dev=0x5638996e0e70,
-errp=0x7ffd3c2b81d0) at ../hw/display/qxl.c:2257
-#3  0x0000563896c7e8f2 in pci_qdev_realize (qdev=0x5638996e0e70,
-errp=0x7ffd3c2b8250) at ../hw/pci/pci.c:2174
-#4  0x00005638970eb54b in device_set_realized (obj=0x5638996e0e70,
-value=true, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:494
-#5  0x00005638970f5e14 in property_set_bool (obj=0x5638996e0e70,
-v=0x5638996f3770, name=0x56389759b141 "realized",
-opaque=0x5638987893d0, errp=0x7ffd3c2b84e0)
-      at ../qom/object.c:2374
-#6  0x00005638970f39f8 in object_property_set (obj=0x5638996e0e70,
-name=0x56389759b141 "realized", v=0x5638996f3770, errp=0x7ffd3c2b84e0)
-      at ../qom/object.c:1449
-#7  0x00005638970f8586 in object_property_set_qobject
-(obj=0x5638996e0e70, name=0x56389759b141 "realized",
-value=0x5638996df900, errp=0x7ffd3c2b84e0)
-      at ../qom/qom-qobject.c:28
-#8  0x00005638970f3d8d in object_property_set_bool
-(obj=0x5638996e0e70, name=0x56389759b141 "realized", value=true,
-errp=0x7ffd3c2b84e0)
-      at ../qom/object.c:1519
-#9  0x00005638970eacb0 in qdev_realize (dev=0x5638996e0e70,
-bus=0x563898cf3c20, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:276
-#10 0x0000563896dba675 in qdev_device_add_from_qdict
-(opts=0x5638996dfe50, from_json=false, errp=0x7ffd3c2b84e0) at ../
-system/qdev-monitor.c:714
-#11 0x0000563896dba721 in qdev_device_add (opts=0x563898786150,
-errp=0x56389855dc40 <error_fatal>) at ../system/qdev-monitor.c:733
-#12 0x0000563896dc48f1 in device_init_func (opaque=0x0,
-opts=0x563898786150, errp=0x56389855dc40 <error_fatal>) at ../system/
-vl.c:1207
-#13 0x000056389737a6cc in qemu_opts_foreach
-      (list=0x563898427b60 <qemu_device_opts>, func=0x563896dc48ca
-<device_init_func>, opaque=0x0, errp=0x56389855dc40 <error_fatal>)
-      at ../util/qemu-option.c:1135
-#14 0x0000563896dc89b5 in qemu_create_cli_devices () at ../system/
-vl.c:2745
-#15 0x0000563896dc8c00 in qmp_x_exit_preconfig (errp=0x56389855dc40
-<error_fatal>) at ../system/vl.c:2806
-#16 0x0000563896dcb5de in qemu_init (argc=33, argv=0x7ffd3c2b8948)
-at ../system/vl.c:3838
-#17 0x0000563897297323 in main (argc=33, argv=0x7ffd3c2b8948) at ../
-system/main.c:72
-So the attached adjusted version of your patch does seem to help.  At
-least I can't reproduce the crash on my stand.
-Thanks for the stack trace; the calls to SPICE_RING_INIT in init_qxl_ram
-are
-definitely harmful.  Try V2 of the patch, attached, which skips the lines
-of init_qxl_ram that modify guest memory.
-Thanks, your v2 patch does seem to prevent the crash.  Would you re-send
-it to the list as a proper fix?
-Yes.  Was waiting for your confirmation.
-I'm wondering, could it be useful to explicitly mark all the reused
-memory regions readonly upon cpr-transfer, and then make them writable
-back again after the migration is done?  That way we will be segfaulting
-early on instead of debugging tricky memory corruptions.
-It's a useful debugging technique, but changing protection on a large
-memory region
-can be too expensive for production due to TLB shootdowns.
-
-Also, there are cases where writes are performed but the value is
-guaranteed to
-be the same:
-   qxl_post_load()
-     qxl_set_mode()
-       d->rom->mode = cpu_to_le32(modenr);
-The value is the same because mode and shadow_rom.mode were passed in
-vmstate
-from old qemu.
-There're also cases where devices' ROM might be re-initialized.  E.g.
-this segfault occurs upon further exploration of RO-mapped RAM blocks:
-Program terminated with signal SIGSEGV, Segmentation fault.
-#0  __memmove_avx_unaligned_erms () at 
-../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:664
-664             rep     movsb
-[Current thread is 1 (Thread 0x7f6e7d08b480 (LWP 310379))]
-(gdb) bt
-#0  __memmove_avx_unaligned_erms () at 
-../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:664
-#1  0x000055aa1d030ecd in rom_set_mr (rom=0x55aa200ba380, owner=0x55aa2019ac10, 
-name=0x7fffb8272bc0 "/rom@etc/acpi/tables", ro=true)
-     at ../hw/core/loader.c:1032
-#2  0x000055aa1d031577 in rom_add_blob
-     (name=0x55aa1da51f13 "etc/acpi/tables", blob=0x55aa208a1070, len=131072, max_len=2097152, 
-addr=18446744073709551615, fw_file_name=0x55aa1da51f13 "etc/acpi/tables", 
-fw_callback=0x55aa1d441f59 <acpi_build_update>, callback_opaque=0x55aa20ff0010, as=0x0, 
-read_only=true) at ../hw/core/loader.c:1147
-#3  0x000055aa1cfd788d in acpi_add_rom_blob
-     (update=0x55aa1d441f59 <acpi_build_update>, opaque=0x55aa20ff0010, 
-blob=0x55aa1fc9aa00, name=0x55aa1da51f13 "etc/acpi/tables") at ../hw/acpi/utils.c:46
-#4  0x000055aa1d44213f in acpi_setup () at ../hw/i386/acpi-build.c:2720
-#5  0x000055aa1d434199 in pc_machine_done (notifier=0x55aa1ff15050, data=0x0) 
-at ../hw/i386/pc.c:638
-#6  0x000055aa1d876845 in notifier_list_notify (list=0x55aa1ea25c10 
-<machine_init_done_notifiers>, data=0x0) at ../util/notify.c:39
-#7  0x000055aa1d039ee5 in qdev_machine_creation_done () at 
-../hw/core/machine.c:1749
-#8  0x000055aa1d2c7b3e in qemu_machine_creation_done (errp=0x55aa1ea5cc40 
-<error_fatal>) at ../system/vl.c:2779
-#9  0x000055aa1d2c7c7d in qmp_x_exit_preconfig (errp=0x55aa1ea5cc40 
-<error_fatal>) at ../system/vl.c:2807
-#10 0x000055aa1d2ca64f in qemu_init (argc=35, argv=0x7fffb82730e8) at 
-../system/vl.c:3838
-#11 0x000055aa1d79638c in main (argc=35, argv=0x7fffb82730e8) at 
-../system/main.c:72
-I'm not sure whether ACPI tables ROM in particular is rewritten with the
-same content, but there might be cases where ROM can be read from file
-system upon initialization.  That is undesirable as guest kernel
-certainly won't be too happy about sudden change of the device's ROM
-content.
-
-So the issue we're dealing with here is any unwanted memory related
-device initialization upon cpr.
-
-For now the only thing that comes to my mind is to make a test where we
-put as many devices as we can into a VM, make ram blocks RO upon cpr
-(and remap them as RW later after migration is done, if needed), and
-catch any unwanted memory violations.  As Den suggested, we might
-consider adding that behaviour as a separate non-default option (or
-"migrate" command flag specific to cpr-transfer), which would only be
-used in the testing.
-I'll look into adding an option, but there may be too many false positives,
-such as the qxl_set_mode case above.  And the maintainers may object to me
-eliminating the false positives by adding more CPR_IN tests, due to gratuitous
-(from their POV) ugliness.
-
-But I will use the technique to look for more write violations.
-Andrey
-No way. ACPI with the source must be used in the same way as BIOSes
-and optional ROMs.
-Yup, it's a bug.  Will fix.
-
-- Steve
-
-see
-https://lore.kernel.org/qemu-devel/1741380954-341079-1-git-send-email-steven.sistare@oracle.com/
-- Steve
-
-On 3/6/2025 11:13 AM, Steven Sistare wrote:
-On 3/6/2025 10:52 AM, Denis V. Lunev wrote:
-On 3/6/25 16:16, Andrey Drobyshev wrote:
-On 3/5/25 11:19 PM, Steven Sistare wrote:
-On 3/5/2025 11:50 AM, Andrey Drobyshev wrote:
-On 3/4/25 9:05 PM, Steven Sistare wrote:
-On 2/28/2025 1:37 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:35 PM, Andrey Drobyshev wrote:
-On 2/28/25 8:20 PM, Steven Sistare wrote:
-On 2/28/2025 1:13 PM, Steven Sistare wrote:
-On 2/28/2025 12:39 PM, Andrey Drobyshev wrote:
-Hi all,
-
-We've been experimenting with cpr-transfer migration mode recently
-and
-have discovered the following issue with the guest QXL driver:
-
-Run migration source:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$EMULATOR -enable-kvm \
-        -machine q35 \
-        -cpu host -smp 2 -m 2G \
-        -object memory-backend-file,id=ram0,size=2G,mem-path=/
-dev/shm/
-ram0,share=on\
-        -machine memory-backend=ram0 \
-        -machine aux-ram-share=on \
-        -drive file=$ROOTFS,media=disk,if=virtio \
-        -qmp unix:$QMPSOCK,server=on,wait=off \
-        -nographic \
-        -device qxl-vga
-Run migration target:
-EMULATOR=/path/to/emulator
-ROOTFS=/path/to/image
-QMPSOCK=/var/run/alma8qmp-dst.sock
-$EMULATOR -enable-kvm \
-        -machine q35 \
-        -cpu host -smp 2 -m 2G \
-        -object memory-backend-file,id=ram0,size=2G,mem-path=/
-dev/shm/
-ram0,share=on\
-        -machine memory-backend=ram0 \
-        -machine aux-ram-share=on \
-        -drive file=$ROOTFS,media=disk,if=virtio \
-        -qmp unix:$QMPSOCK,server=on,wait=off \
-        -nographic \
-        -device qxl-vga \
-        -incoming tcp:0:44444 \
-        -incoming '{"channel-type": "cpr", "addr": { "transport":
-"socket", "type": "unix", "path": "/var/run/alma8cpr-dst.sock"}}'
-Launch the migration:
-QMPSHELL=/root/src/qemu/master/scripts/qmp/qmp-shell
-QMPSOCK=/var/run/alma8qmp-src.sock
-
-$QMPSHELL -p $QMPSOCK <<EOF
-        migrate-set-parameters mode=cpr-transfer
-        migrate channels=[{"channel-type":"main","addr":
-{"transport":"socket","type":"inet","host":"0","port":"44444"}},
-{"channel-type":"cpr","addr":
-{"transport":"socket","type":"unix","path":"/var/run/alma8cpr-
-dst.sock"}}]
-EOF
-Then, after a while, QXL guest driver on target crashes spewing the
-following messages:
-[   73.962002] [TTM] Buffer eviction failed
-[   73.962072] qxl 0000:00:02.0: object_init failed for (3149824,
-0x00000001)
-[   73.962081] [drm:qxl_alloc_bo_reserved [qxl]] *ERROR* failed to
-allocate VRAM BO
-That seems to be a known kernel QXL driver bug:
-https://lore.kernel.org/all/20220907094423.93581-1-
-min_halo@163.com/T/
-https://lore.kernel.org/lkml/ZTgydqRlK6WX_b29@eldamar.lan/
-(the latter discussion contains a reproducer script which speeds up
-the crash in the guest):
-#!/bin/bash
-
-chvt 3
-
-for j in $(seq 80); do
-            echo "$(date) starting round $j"
-            if [ "$(journalctl --boot | grep "failed to allocate
-VRAM
-BO")" != "" ]; then
-                    echo "bug was reproduced after $j tries"
-                    exit 1
-            fi
-            for i in $(seq 100); do
-                    dmesg > /dev/tty3
-            done
-done
-
-echo "bug could not be reproduced"
-exit 0
-The bug itself seems to remain unfixed, as I was able to reproduce
-that
-with Fedora 41 guest, as well as AlmaLinux 8 guest. However our
-cpr-transfer code also seems to be buggy as it triggers the crash -
-without the cpr-transfer migration the above reproducer doesn't
-lead to a crash on the source VM.
-
-I suspect that, as cpr-transfer doesn't migrate the guest
-memory, but
-rather passes it through the memory backend object, our code might
-somehow corrupt the VRAM.  However, I wasn't able to trace the
-corruption so far.
-
-Could somebody help the investigation and take a look into
-this?  Any
-suggestions would be appreciated.  Thanks!
-Possibly some memory region created by qxl is not being preserved.
-Try adding these traces to see what is preserved:
-
--trace enable='*cpr*'
--trace enable='*ram_alloc*'
-Also try adding this patch to see if it flags any ram blocks as not
-compatible with cpr.  A message is printed at migration start time.
-https://lore.kernel.org/qemu-devel/1740667681-257312-1-git-send-
-email-
-steven.sistare@oracle.com/
-
-- Steve
-With the traces enabled + the "migration: ram block cpr blockers"
-patch
-applied:
-
-Source:
-cpr_find_fd pc.bios, id 0 returns -1
-cpr_save_fd pc.bios, id 0, fd 22
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 22 host
-0x7fec18e00000
-cpr_find_fd pc.rom, id 0 returns -1
-cpr_save_fd pc.rom, id 0, fd 23
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 23 host
-0x7fec18c00000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns -1
-cpr_save_fd 0000:00:01.0/e1000e.rom, id 0, fd 24
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
-262144 fd 24 host 0x7fec18a00000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/vga.vram, id 0, fd 25
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
-67108864 fd 25 host 0x7feb77e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vrom, id 0, fd 27
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
-fd 27 host 0x7fec18800000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.vram, id 0, fd 28
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
-67108864 fd 28 host 0x7feb73c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns -1
-cpr_save_fd 0000:00:02.0/qxl.rom, id 0, fd 34
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
-fd 34 host 0x7fec18600000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/tables, id 0, fd 35
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
-2097152 fd 35 host 0x7fec18200000
-cpr_find_fd /rom@etc/table-loader, id 0 returns -1
-cpr_save_fd /rom@etc/table-loader, id 0, fd 36
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
-fd 36 host 0x7feb8b600000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns -1
-cpr_save_fd /rom@etc/acpi/rsdp, id 0, fd 37
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
-37 host 0x7feb8b400000
-
-cpr_state_save cpr-transfer mode
-cpr_transfer_output /var/run/alma8cpr-dst.sock
-Target:
-cpr_transfer_input /var/run/alma8cpr-dst.sock
-cpr_state_load cpr-transfer mode
-cpr_find_fd pc.bios, id 0 returns 20
-qemu_ram_alloc_shared pc.bios size 262144 max_size 262144 fd 20 host
-0x7fcdc9800000
-cpr_find_fd pc.rom, id 0 returns 19
-qemu_ram_alloc_shared pc.rom size 131072 max_size 131072 fd 19 host
-0x7fcdc9600000
-cpr_find_fd 0000:00:01.0/e1000e.rom, id 0 returns 18
-qemu_ram_alloc_shared 0000:00:01.0/e1000e.rom size 262144 max_size
-262144 fd 18 host 0x7fcdc9400000
-cpr_find_fd 0000:00:02.0/vga.vram, id 0 returns 17
-qemu_ram_alloc_shared 0000:00:02.0/vga.vram size 67108864 max_size
-67108864 fd 17 host 0x7fcd27e00000
-cpr_find_fd 0000:00:02.0/qxl.vrom, id 0 returns 16
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vrom size 8192 max_size 8192
-fd 16 host 0x7fcdc9200000
-cpr_find_fd 0000:00:02.0/qxl.vram, id 0 returns 15
-qemu_ram_alloc_shared 0000:00:02.0/qxl.vram size 67108864 max_size
-67108864 fd 15 host 0x7fcd23c00000
-cpr_find_fd 0000:00:02.0/qxl.rom, id 0 returns 14
-qemu_ram_alloc_shared 0000:00:02.0/qxl.rom size 65536 max_size 65536
-fd 14 host 0x7fcdc8800000
-cpr_find_fd /rom@etc/acpi/tables, id 0 returns 13
-qemu_ram_alloc_shared /rom@etc/acpi/tables size 131072 max_size
-2097152 fd 13 host 0x7fcdc8400000
-cpr_find_fd /rom@etc/table-loader, id 0 returns 11
-qemu_ram_alloc_shared /rom@etc/table-loader size 4096 max_size 65536
-fd 11 host 0x7fcdc8200000
-cpr_find_fd /rom@etc/acpi/rsdp, id 0 returns 10
-qemu_ram_alloc_shared /rom@etc/acpi/rsdp size 4096 max_size 4096 fd
-10 host 0x7fcd3be00000
-Looks like both vga.vram and qxl.vram are being preserved (with the
-same
-addresses), and no incompatible ram blocks are found during migration.
-Sorry, addresses are not the same, of course.  However corresponding
-ram
-blocks do seem to be preserved and initialized.
-So far, I have not reproduced the guest driver failure.
-
-However, I have isolated places where new QEMU improperly writes to
-the qxl memory regions prior to starting the guest, by mmap'ing them
-readonly after cpr:
-
-    qemu_ram_alloc_internal()
-      if (reused && (strstr(name, "qxl") || strstr(name, "vga")))
-          ram_flags |= RAM_READONLY;
-      new_block = qemu_ram_alloc_from_fd(...)
-
-I have attached a draft fix; try it and let me know.
-My console window looks fine before and after cpr, using
--vnc $hostip:0 -vga qxl
-
-- Steve
-Regarding the reproducer: when I launch the buggy version with the same
-options as you, i.e. "-vnc 0.0.0.0:$port -vga qxl", and do cpr-transfer,
-my VNC client silently hangs on the target after a while.  Could it
-happen on your stand as well?
-cpr does not preserve the vnc connection and session.  To test, I specify
-port 0 for the source VM and port 1 for the dest.  When the src vnc goes
-dormant the dest vnc becomes active.
-Sure, I meant that VNC on the dest (on the port 1) works for a while
-after the migration and then hangs, apparently after the guest QXL crash.
-Could you try launching the VM with
-"-nographic -device qxl-vga"?  That way the VM's serial console is given to you
-directly in the shell, so when the qxl driver crashes you're still able to
-inspect the kernel messages.
-I have been running like that, but have not reproduced the qxl driver
-crash,
-and I suspect my guest image+kernel is too old.
-Yes, that's probably the case.  But the crash occurs on my Fedora 41
-guest with the 6.11.5-300.fc41.x86_64 kernel, so newer kernels seem to
-be buggy.
-However, once I realized the
-issue was post-cpr modification of qxl memory, I switched my attention
-to the
-fix.
-As for your patch, I can report that it doesn't resolve the issue as it
-is.  But I was able to track down another possible memory corruption
-using your approach with readonly mmap'ing:
-Program terminated with signal SIGSEGV, Segmentation fault.
-#0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-412         d->ram->magic       = cpu_to_le32(QXL_RAM_MAGIC);
-[Current thread is 1 (Thread 0x7f1a4f83b480 (LWP 229798))]
-(gdb) bt
-#0  init_qxl_ram (d=0x5638996e0e70) at ../hw/display/qxl.c:412
-#1  0x0000563896e7f467 in qxl_realize_common (qxl=0x5638996e0e70,
-errp=0x7ffd3c2b8170) at ../hw/display/qxl.c:2142
-#2  0x0000563896e7fda1 in qxl_realize_primary (dev=0x5638996e0e70,
-errp=0x7ffd3c2b81d0) at ../hw/display/qxl.c:2257
-#3  0x0000563896c7e8f2 in pci_qdev_realize (qdev=0x5638996e0e70,
-errp=0x7ffd3c2b8250) at ../hw/pci/pci.c:2174
-#4  0x00005638970eb54b in device_set_realized (obj=0x5638996e0e70,
-value=true, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:494
-#5  0x00005638970f5e14 in property_set_bool (obj=0x5638996e0e70,
-v=0x5638996f3770, name=0x56389759b141 "realized",
-opaque=0x5638987893d0, errp=0x7ffd3c2b84e0)
-      at ../qom/object.c:2374
-#6  0x00005638970f39f8 in object_property_set (obj=0x5638996e0e70,
-name=0x56389759b141 "realized", v=0x5638996f3770, errp=0x7ffd3c2b84e0)
-      at ../qom/object.c:1449
-#7  0x00005638970f8586 in object_property_set_qobject
-(obj=0x5638996e0e70, name=0x56389759b141 "realized",
-value=0x5638996df900, errp=0x7ffd3c2b84e0)
-      at ../qom/qom-qobject.c:28
-#8  0x00005638970f3d8d in object_property_set_bool
-(obj=0x5638996e0e70, name=0x56389759b141 "realized", value=true,
-errp=0x7ffd3c2b84e0)
-      at ../qom/object.c:1519
-#9  0x00005638970eacb0 in qdev_realize (dev=0x5638996e0e70,
-bus=0x563898cf3c20, errp=0x7ffd3c2b84e0) at ../hw/core/qdev.c:276
-#10 0x0000563896dba675 in qdev_device_add_from_qdict
-(opts=0x5638996dfe50, from_json=false, errp=0x7ffd3c2b84e0) at ../
-system/qdev-monitor.c:714
-#11 0x0000563896dba721 in qdev_device_add (opts=0x563898786150,
-errp=0x56389855dc40 <error_fatal>) at ../system/qdev-monitor.c:733
-#12 0x0000563896dc48f1 in device_init_func (opaque=0x0,
-opts=0x563898786150, errp=0x56389855dc40 <error_fatal>) at ../system/
-vl.c:1207
-#13 0x000056389737a6cc in qemu_opts_foreach
-      (list=0x563898427b60 <qemu_device_opts>, func=0x563896dc48ca
-<device_init_func>, opaque=0x0, errp=0x56389855dc40 <error_fatal>)
-      at ../util/qemu-option.c:1135
-#14 0x0000563896dc89b5 in qemu_create_cli_devices () at ../system/
-vl.c:2745
-#15 0x0000563896dc8c00 in qmp_x_exit_preconfig (errp=0x56389855dc40
-<error_fatal>) at ../system/vl.c:2806
-#16 0x0000563896dcb5de in qemu_init (argc=33, argv=0x7ffd3c2b8948)
-at ../system/vl.c:3838
-#17 0x0000563897297323 in main (argc=33, argv=0x7ffd3c2b8948) at ../
-system/main.c:72
-So the attached adjusted version of your patch does seem to help.  At
-least I can't reproduce the crash on my stand.
-Thanks for the stack trace; the calls to SPICE_RING_INIT in init_qxl_ram
-are
-definitely harmful.  Try V2 of the patch, attached, which skips the lines
-of init_qxl_ram that modify guest memory.
-Thanks, your v2 patch does seem to prevent the crash.  Would you re-send
-it to the list as a proper fix?
-Yes.  Was waiting for your confirmation.
-I'm wondering, could it be useful to explicitly mark all the reused
-memory regions readonly upon cpr-transfer, and then make them writable
-back again after the migration is done?  That way we will be segfaulting
-early on instead of debugging tricky memory corruptions.
-It's a useful debugging technique, but changing protection on a large
-memory region
-can be too expensive for production due to TLB shootdowns.
-
-Also, there are cases where writes are performed but the value is
-guaranteed to
-be the same:
-   qxl_post_load()
-     qxl_set_mode()
-       d->rom->mode = cpu_to_le32(modenr);
-The value is the same because mode and shadow_rom.mode were passed in
-vmstate
-from old qemu.
-There're also cases where devices' ROM might be re-initialized.  E.g.
-this segfault occurs upon further exploration of RO-mapped RAM blocks:
-Program terminated with signal SIGSEGV, Segmentation fault.
-#0  __memmove_avx_unaligned_erms () at 
-../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:664
-664             rep     movsb
-[Current thread is 1 (Thread 0x7f6e7d08b480 (LWP 310379))]
-(gdb) bt
-#0  __memmove_avx_unaligned_erms () at 
-../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:664
-#1  0x000055aa1d030ecd in rom_set_mr (rom=0x55aa200ba380, owner=0x55aa2019ac10, 
-name=0x7fffb8272bc0 "/rom@etc/acpi/tables", ro=true)
-     at ../hw/core/loader.c:1032
-#2  0x000055aa1d031577 in rom_add_blob
-     (name=0x55aa1da51f13 "etc/acpi/tables", blob=0x55aa208a1070, len=131072, max_len=2097152, 
-addr=18446744073709551615, fw_file_name=0x55aa1da51f13 "etc/acpi/tables", 
-fw_callback=0x55aa1d441f59 <acpi_build_update>, callback_opaque=0x55aa20ff0010, as=0x0, 
-read_only=true) at ../hw/core/loader.c:1147
-#3  0x000055aa1cfd788d in acpi_add_rom_blob
-     (update=0x55aa1d441f59 <acpi_build_update>, opaque=0x55aa20ff0010, 
-blob=0x55aa1fc9aa00, name=0x55aa1da51f13 "etc/acpi/tables") at ../hw/acpi/utils.c:46
-#4  0x000055aa1d44213f in acpi_setup () at ../hw/i386/acpi-build.c:2720
-#5  0x000055aa1d434199 in pc_machine_done (notifier=0x55aa1ff15050, data=0x0) 
-at ../hw/i386/pc.c:638
-#6  0x000055aa1d876845 in notifier_list_notify (list=0x55aa1ea25c10 
-<machine_init_done_notifiers>, data=0x0) at ../util/notify.c:39
-#7  0x000055aa1d039ee5 in qdev_machine_creation_done () at 
-../hw/core/machine.c:1749
-#8  0x000055aa1d2c7b3e in qemu_machine_creation_done (errp=0x55aa1ea5cc40 
-<error_fatal>) at ../system/vl.c:2779
-#9  0x000055aa1d2c7c7d in qmp_x_exit_preconfig (errp=0x55aa1ea5cc40 
-<error_fatal>) at ../system/vl.c:2807
-#10 0x000055aa1d2ca64f in qemu_init (argc=35, argv=0x7fffb82730e8) at 
-../system/vl.c:3838
-#11 0x000055aa1d79638c in main (argc=35, argv=0x7fffb82730e8) at 
-../system/main.c:72
-I'm not sure whether ACPI tables ROM in particular is rewritten with the
-same content, but there might be cases where ROM can be read from file
-system upon initialization.  That is undesirable as guest kernel
-certainly won't be too happy about sudden change of the device's ROM
-content.
-
-So the issue we're dealing with here is any unwanted memory related
-device initialization upon cpr.
-
-For now the only thing that comes to my mind is to make a test where we
-put as many devices as we can into a VM, make ram blocks RO upon cpr
-(and remap them as RW later after migration is done, if needed), and
-catch any unwanted memory violations.  As Den suggested, we might
-consider adding that behaviour as a separate non-default option (or
-"migrate" command flag specific to cpr-transfer), which would only be
-used in the testing.
-I'll look into adding an option, but there may be too many false positives,
-such as the qxl_set_mode case above.  And the maintainers may object to me
-eliminating the false positives by adding more CPR_IN tests, due to gratuitous
-(from their POV) ugliness.
-
-But I will use the technique to look for more write violations.
-Andrey
-No way. ACPI with the source must be used in the same way as BIOSes
-and optional ROMs.
-Yup, it's a bug.  Will fix.
-
-- Steve
-
diff --git a/results/classifier/009/debug/53568181 b/results/classifier/009/debug/53568181
deleted file mode 100644
index 9bfb773aa..000000000
--- a/results/classifier/009/debug/53568181
+++ /dev/null
@@ -1,88 +0,0 @@
-debug: 0.968
-permissions: 0.965
-performance: 0.948
-semantic: 0.943
-graphic: 0.940
-PID: 0.938
-device: 0.936
-vnc: 0.935
-network: 0.925
-other: 0.921
-KVM: 0.917
-files: 0.890
-boot: 0.876
-socket: 0.875
-
-[BUG] x86/PAT handling severely crippled AMD-V SVM KVM performance
-
-Hi, I maintain out-of-tree 3D API pass-through QEMU device models at
-https://github.com/kjliew/qemu-3dfx
-that provide 3D acceleration for legacy
-32-bit Windows guests (Win98SE, WinME, Win2k and WinXP) with the focus on
-playing old legacy games from 1996-2003. It currently supports the now-defunct
-3Dfx propriety API called Glide and an alternative OpenGL pass-through based on
-MESA implementation.
-
-The basic concept of both implementations is a memory-mapped virtual
-interface consisting of host/guest shared memory with a guest-push model,
-instead of the more common host-pull model of typical QEMU device model
-implementations. The guest uses the shared memory as FIFOs for drawing
-commands and data, batching up operations until a serialization event
-flushes the FIFOs to the host. This achieves extremely good performance,
-since virtual CPUs are fast with hardware acceleration (Intel VT/AMD-V) and
-it reduces the overhead of frequent VMEXITs to service the device emulation.
-Both implementations work on Windows 10 with the WHPX and HAXM accelerators
-as well as KVM on Linux.
-
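-(Roughly, the guest-push interface described above boils down to a ring like
-the following -- purely illustrative, not the project's actual layout:)
-
-    #include <stdint.h>
-
-    #define FIFO_SIZE 65536
-
-    /* a command FIFO living in host/guest shared memory */
-    struct cmd_fifo {
-        volatile uint32_t head;    /* guest producer index */
-        volatile uint32_t tail;    /* host consumer index  */
-        uint8_t data[FIFO_SIZE];   /* serialized draw commands */
-    };
-
-    /* the guest keeps appending commands at head; only a flush (the
-       serialization event) touches a device register and causes a VMEXIT,
-       at which point the host drains data[tail..head] */
-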
-On Windows 10, QEMU's WHPX implementation does not sync MSR_IA32_PAT during
-host/guest state sync. There is no visibility into the closed-source WHPX on
-how things are managed behind the scenes, but from measuring performance
-figures I can conclude that it didn't handle MSR_IA32_PAT correctly for
-either Intel or AMD. Call this fair enough, if you will: it didn't flag any
-concerns, and in fact games such as Quake2 and Quake3 were still within a
-playable frame rate of 40~60 FPS on Win2k/XP guests. Then the same games
-were run on a Win98/ME guest and the frame rate went through the roof
-(300~500 FPS) on the same CPU and GPU. In fact, the latter seemed more in
-line with running the games bare-metal with vsync off.
-
-On Linux (kernel 5.6.7/Mesa 20.0 at the time of writing), the difference
-persisted. On Intel CPUs (and it so happened that I was on a laptop with an
-Intel GPU), the VMX-based kvm_intel got it right, while the SVM-based
-kvm_amd did not. To put it bluntly, an aging Core i3-4010U/HD Graphics 4400
-(Haswell GT2) exhibited insane performance in Quake2/Quake3 timedemos that
-totally crushed the more recent AMD Ryzen 2500U APU/Vega 8 Graphics and AMD
-FX8300/NVIDIA GT730 on the desktop. Simply unbelievable!
-
-It turned out to have something to do with AMD-V NPT. By loading kvm_amd
-with npt=0, the AMD Ryzen APU and the FX8300 regained a huge performance
-leap. However, the AMD NPT issue with KVM was supposedly fixed in 2017
-kernel commits, and npt=0 would actually incur a performance loss for the VM
-due to the intervention required by the hypervisor to maintain shadow page
-tables.  Finally, I was able to find the pointer that pointed to the
-MSR_IA32_PAT register. By updating MSR_IA32_PAT to 0x0606xxxx0606xxxxULL,
-AMD CPUs regain their rightful performance without taking the npt=0 hit on
-Linux KVM. Taking the same solution to Windows, both Intel and AMD CPUs no
-longer require a Win98/ME guest to unleash their full performance potential,
-and game performance figures measured on WHPX were not far behind Linux KVM.
-
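-(For reference, each byte of IA32_PAT selects the memory type of one PAT
-entry, so writing 0x06 into an entry means Write-Back; the encodings below
-are the architectural ones, the helper itself is only illustrative:)
-
-    #include <stdint.h>
-
-    /* IA32_PAT holds eight one-byte entries, PA0..PA7 (low byte = PA0) */
-    #define PAT_UC  0x00ULL   /* Uncacheable     */
-    #define PAT_WC  0x01ULL   /* Write-Combining */
-    #define PAT_WT  0x04ULL   /* Write-Through   */
-    #define PAT_WB  0x06ULL   /* Write-Back      */
-
-    static inline uint64_t pat_entry(int idx, uint64_t type)
-    {
-        return type << (idx * 8);   /* entry 0 = bits 0..7, entry 1 = bits 8..15, ... */
-    }
-
-    /* power-on default is 0x0007040600070406 (WB, WT, UC-, UC repeated);
-       forcing 0x06 (WB) into the entries the guest mappings actually use is
-       what makes the "uncacheable" shared-memory FIFOs cacheable again */
-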
-So I guess the problem lies in the host/guest shared memory regions being
-mapped as uncacheable from the virtual CPU's perspective. As virtual CPUs
-now execute entirely in hardware context with the x86 hardware
-virtualization extensions, the cacheability of memory types severely
-impacts guest performance. WHPX didn't handle it for either Intel EPT or
-AMD NPT, but KVM seems to do it right for Intel EPT. I don't have the
-correct fix for QEMU. But what I can do for my 3D API pass-through device
-models is implement host-side hooks to reprogram and restore MSR_IA32_PAT
-upon activation/deactivation of the 3D APIs. Perhaps there is also a better
-solution of having proper kernel drivers for the virtual interfaces that
-manage the memory types of the host/guest shared memory in kernel space,
-but given what that would require from Microsoft tools/DDKs, I will just
-forget it. The guest stubs use the same kernel drivers included in the 3Dfx
-drivers for memory mapping, and the virtual interfaces remain driver-less
-from the Windows OS perspective. Considering the current state of halting
-progress for QEMU's native virgil3D to support Windows OS, I am just being
-pragmatic. I understand that QEMU virgil3D will eventually bring 3D
-acceleration to Windows guests, but I do not expect anything to support
-legacy 32-bit Windows OSes, which have outgrown their commercial usefulness.
-
-Regards,
-KJ Liew
-
diff --git a/results/classifier/009/debug/64571620 b/results/classifier/009/debug/64571620
deleted file mode 100644
index 1de1160e2..000000000
--- a/results/classifier/009/debug/64571620
+++ /dev/null
@@ -1,795 +0,0 @@
-debug: 0.927
-other: 0.922
-semantic: 0.903
-permissions: 0.902
-device: 0.899
-performance: 0.897
-graphic: 0.897
-PID: 0.887
-boot: 0.879
-KVM: 0.867
-files: 0.855
-socket: 0.855
-network: 0.853
-vnc: 0.819
-
-[BUG] Migration hv_time rollback
-
-Hi,
-
-We are experiencing timestamp rollbacks during live-migration of
-Windows 10 guests with the following qemu configuration (linux 5.4.46
-and qemu master):
-```
-$ qemu-system-x86_64 -enable-kvm -cpu host,kvm=off,hv_time [...]
-```
-
-I have tracked the bug to the fact that `kvmclock` is not exposed and
-disabled from qemu PoV but is in fact used by `hv-time` (in KVM).
-
-I think we should enable the `kvmclock` (qemu device) if `hv-time` is
-present and add Hyper-V support for the `kvmclock_current_nsec`
-function.
-
-I'm asking for advice because I am unsure this is the _right_ approach
-and how to keep migration compatibility between qemu versions.
-
-Thank you all,
-
--- 
-Antoine 'xdbob' Damhet
-
-cc'ing in Vitaly who knows about the hv stuff.
-
-* Antoine Damhet (antoine.damhet@blade-group.com) wrote:
->
-Hi,
->
->
-We are experiencing timestamp rollbacks during live-migration of
->
-Windows 10 guests with the following qemu configuration (linux 5.4.46
->
-and qemu master):
->
-```
->
-$ qemu-system-x86_64 -enable-kvm -cpu host,kvm=off,hv_time [...]
->
-```
-How big a jump are you seeing, and how did you notice it in the guest?
-
-Dave
-
->
-I have tracked the bug to the fact that `kvmclock` is not exposed and
->
-disabled from qemu PoV but is in fact used by `hv-time` (in KVM).
->
->
-I think we should enable the `kvmclock` (qemu device) if `hv-time` is
->
-present and add Hyper-V support for the `kvmclock_current_nsec`
->
-function.
->
->
-I'm asking for advice because I am unsure this is the _right_ approach
->
-and how to keep migration compatibility between qemu versions.
->
->
-Thank you all,
->
->
---
->
-Antoine 'xdbob' Damhet
--- 
-Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
-
-"Dr. David Alan Gilbert" <dgilbert@redhat.com> writes:
-
->
-cc'ing in Vitaly who knows about the hv stuff.
->
-cc'ing Marcelo who knows about clocksources :-)
-
->
-* Antoine Damhet (antoine.damhet@blade-group.com) wrote:
->
-> Hi,
->
->
->
-> We are experiencing timestamp rollbacks during live-migration of
->
-> Windows 10 guests
-Are you migrating to the same hardware (with the same TSC frequency)? Is
-TSC used as the clocksource on the host?
-
->
->  with the following qemu configuration (linux 5.4.46
->
-> and qemu master):
->
-> ```
->
-> $ qemu-system-x86_64 -enable-kvm -cpu host,kvm=off,hv_time [...]
->
-> ```
-Out of pure curiosity, what's the purpose of doing 'kvm=off'? Windows is
-not going to check for KVM identification anyway so we pretend we're
-Hyper-V. 
-
-Also, have you tried adding more Hyper-V enlightenments? 
-
->
->
-How big a jump are you seeing, and how did you notice it in the guest?
->
->
-Dave
->
->
-> I have tracked the bug to the fact that `kvmclock` is not exposed and
->
-> disabled from qemu PoV but is in fact used by `hv-time` (in KVM).
->
->
->
-> I think we should enable the `kvmclock` (qemu device) if `hv-time` is
->
-> present and add Hyper-V support for the `kvmclock_current_nsec`
->
-> function.
-AFAICT kvmclock_current_nsec() checks whether kvmclock was enabled by
-the guest:
-
-   if (!(env->system_time_msr & 1ULL)) {
-        /* KVM clock not active */
-        return 0;
-    }
-
-and this is (and was) always false for Windows guests.
-
->
->
->
-> I'm asking for advice because I am unsure this is the _right_ approach
->
-> and how to keep migration compatibility between qemu versions.
->
->
->
-> Thank you all,
->
->
->
-> --
->
-> Antoine 'xdbob' Damhet
--- 
-Vitaly
-
-On Wed, Sep 16, 2020 at 01:59:43PM +0200, Vitaly Kuznetsov wrote:
->
-"Dr. David Alan Gilbert" <dgilbert@redhat.com> writes:
->
->
-> cc'ing in Vitaly who knows about the hv stuff.
->
->
->
->
-cc'ing Marcelo who knows about clocksources :-)
->
->
-> * Antoine Damhet (antoine.damhet@blade-group.com) wrote:
->
->> Hi,
->
->>
->
->> We are experiencing timestamp rollbacks during live-migration of
->
->> Windows 10 guests
->
->
-Are you migrating to the same hardware (with the same TSC frequency)? Is
->
-TSC used as the clocksource on the host?
-Yes we are migrating to the exact same hardware. And yes TSC is used as
-a clocksource in the host (but the bug is still happening with `hpet` as
-a clocksource).
-
->
->
->>  with the following qemu configuration (linux 5.4.46
->
->> and qemu master):
->
->> ```
->
->> $ qemu-system-x86_64 -enable-kvm -cpu host,kvm=off,hv_time [...]
->
->> ```
->
->
-Out of pure curiosity, what's the purpose of doing 'kvm=off'? Windows is
->
-not going to check for KVM identification anyway so we pretend we're
->
-Hyper-V.
-Some software explicitly checks for the presence of KVM and then crashes
-if it finds it in CPUID :/
-
->
->
-Also, have you tried adding more Hyper-V enlightenments?
-Yes, I published a stripped-down command-line for a minimal reproducer
-but even `hv-frequencies` and `hv-reenlightenment` don't help.
-
->
->
->
->
-> How big a jump are you seeing, and how did you notice it in the guest?
->
->
->
-> Dave
->
->
->
->> I have tracked the bug to the fact that `kvmclock` is not exposed and
->
->> disabled from qemu PoV but is in fact used by `hv-time` (in KVM).
->
->>
->
->> I think we should enable the `kvmclock` (qemu device) if `hv-time` is
->
->> present and add Hyper-V support for the `kvmclock_current_nsec`
->
->> function.
->
->
-AFAICT kvmclock_current_nsec() checks whether kvmclock was enabled by
->
-the guest:
->
->
-if (!(env->system_time_msr & 1ULL)) {
->
-/* KVM clock not active */
->
-return 0;
->
-}
->
->
-and this is (and was) always false for Windows guests.
-Hooo, I missed this piece. When is `clock_is_reliable` expected to be
-false? Because if it is, I still think we should be able to query at
-least `HV_X64_MSR_REFERENCE_TSC`.
-
->
->
->>
->
->> I'm asking for advice because I am unsure this is the _right_ approach
->
->> and how to keep migration compatibility between qemu versions.
->
->>
->
->> Thank you all,
->
->>
->
->> --
->
->> Antoine 'xdbob' Damhet
->
->
---
->
-Vitaly
->
--- 
-Antoine 'xdbob' Damhet
-
-On Wed, Sep 16, 2020 at 12:29:56PM +0100, Dr. David Alan Gilbert wrote:
->
-cc'ing in Vitaly who knows about the hv stuff.
-Thanks
-
->
->
-* Antoine Damhet (antoine.damhet@blade-group.com) wrote:
->
-> Hi,
->
->
->
-> We are experiencing timestamp rollbacks during live-migration of
->
-> Windows 10 guests with the following qemu configuration (linux 5.4.46
->
-> and qemu master):
->
-> ```
->
-> $ qemu-system-x86_64 -enable-kvm -cpu host,kvm=off,hv_time [...]
->
-> ```
->
->
-How big a jump are you seeing, and how did you notice it in the guest?
-I'm seeing jumps of about the guest uptime (indicating a reset of the
-counter). It's expected because we won't call `KVM_SET_CLOCK` to
-restore any value.
-
-We first noticed it because after some migrations `dwm.exe` crashes with
-the "(NTSTATUS) 0x8898009b - QueryPerformanceCounter returned a time in
-the past." error code.
-
-I can also confirm the following hack makes the behavior disappear:
-
-```
-diff --git a/hw/i386/kvm/clock.c b/hw/i386/kvm/clock.c
-index 64283358f9..f334bdf35f 100644
---- a/hw/i386/kvm/clock.c
-+++ b/hw/i386/kvm/clock.c
-@@ -332,11 +332,7 @@ void kvmclock_create(void)
- {
-     X86CPU *cpu = X86_CPU(first_cpu);
-
--    if (kvm_enabled() &&
--        cpu->env.features[FEAT_KVM] & ((1ULL << KVM_FEATURE_CLOCKSOURCE) |
--                                       (1ULL << KVM_FEATURE_CLOCKSOURCE2))) {
--        sysbus_create_simple(TYPE_KVM_CLOCK, -1, NULL);
--    }
-+    sysbus_create_simple(TYPE_KVM_CLOCK, -1, NULL);
- }
-
- static void kvmclock_register_types(void)
-diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
-index 32b1453e6a..11d980ba85 100644
---- a/hw/i386/pc_piix.c
-+++ b/hw/i386/pc_piix.c
-@@ -158,9 +158,7 @@ static void pc_init1(MachineState *machine,
-
-     x86_cpus_init(x86ms, pcmc->default_cpu_version);
-
--    if (kvm_enabled() && pcmc->kvmclock_enabled) {
--        kvmclock_create();
--    }
-+    kvmclock_create();
-
-     if (pcmc->pci_enabled) {
-         pci_memory = g_new(MemoryRegion, 1);
-```
-
->
->
-Dave
->
->
-> I have tracked the bug to the fact that `kvmclock` is not exposed and
->
-> disabled from qemu PoV but is in fact used by `hv-time` (in KVM).
->
->
->
-> I think we should enable the `kvmclock` (qemu device) if `hv-time` is
->
-> present and add Hyper-V support for the `kvmclock_current_nsec`
->
-> function.
->
->
->
-> I'm asking for advice because I am unsure this is the _right_ approach
->
-> and how to keep migration compatibility between qemu versions.
->
->
->
-> Thank you all,
->
->
->
-> --
->
-> Antoine 'xdbob' Damhet
->
->
->
---
->
-Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
->
--- 
-Antoine 'xdbob' Damhet
-
-Antoine Damhet <antoine.damhet@blade-group.com> writes:
-
->
-On Wed, Sep 16, 2020 at 12:29:56PM +0100, Dr. David Alan Gilbert wrote:
->
-> cc'ing in Vitaly who knows about the hv stuff.
->
->
-Thanks
->
->
->
->
-> * Antoine Damhet (antoine.damhet@blade-group.com) wrote:
->
-> > Hi,
->
-> >
->
-> > We are experiencing timestamp rollbacks during live-migration of
->
-> > Windows 10 guests with the following qemu configuration (linux 5.4.46
->
-> > and qemu master):
->
-> > ```
->
-> > $ qemu-system-x86_64 -enable-kvm -cpu host,kvm=off,hv_time [...]
->
-> > ```
->
->
->
-> How big a jump are you seeing, and how did you notice it in the guest?
->
->
-I'm seeing jumps of about the guest uptime (indicating a reset of the
->
-counter). It's expected because we won't call `KVM_SET_CLOCK` to
->
-restore any value.
->
->
-We first noticed it because after some migrations `dwm.exe` crashes with
->
-the "(NTSTATUS) 0x8898009b - QueryPerformanceCounter returned a time in
->
-the past." error code.
->
->
-I can also confirm the following hack makes the behavior disappear:
->
->
-```
->
-diff --git a/hw/i386/kvm/clock.c b/hw/i386/kvm/clock.c
->
-index 64283358f9..f334bdf35f 100644
->
---- a/hw/i386/kvm/clock.c
->
-+++ b/hw/i386/kvm/clock.c
->
-@@ -332,11 +332,7 @@ void kvmclock_create(void)
->
-{
->
-X86CPU *cpu = X86_CPU(first_cpu);
->
->
--    if (kvm_enabled() &&
->
--        cpu->env.features[FEAT_KVM] & ((1ULL << KVM_FEATURE_CLOCKSOURCE) |
->
--                                       (1ULL << KVM_FEATURE_CLOCKSOURCE2))) {
->
--        sysbus_create_simple(TYPE_KVM_CLOCK, -1, NULL);
->
--    }
->
-+    sysbus_create_simple(TYPE_KVM_CLOCK, -1, NULL);
->
-}
->
-Oh, I think I see what's going on. When you add 'kvm=off'
-cpu->env.features[FEAT_KVM] is reset (see x86_cpu_expand_features()) so
-kvmclock QEMU device is not created and nobody calls KVM_SET_CLOCK on
-migration.
-
-In case we really want to support 'kvm=off' I think we can add Hyper-V
-features check here along with KVM, this should do the job.
-
--- 
-Vitaly
-
-Vitaly Kuznetsov <vkuznets@redhat.com> writes:
-
-> Antoine Damhet <antoine.damhet@blade-group.com> writes:
->
-> On Wed, Sep 16, 2020 at 12:29:56PM +0100, Dr. David Alan Gilbert wrote:
->> cc'ing in Vitaly who knows about the hv stuff.
->
-> Thanks
->
->> * Antoine Damhet (antoine.damhet@blade-group.com) wrote:
->> > Hi,
->> >
->> > We are experiencing timestamp rollbacks during live-migration of
->> > Windows 10 guests with the following qemu configuration (linux 5.4.46
->> > and qemu master):
->> > ```
->> > $ qemu-system-x86_64 -enable-kvm -cpu host,kvm=off,hv_time [...]
->> > ```
->>
->> How big a jump are you seeing, and how did you notice it in the guest?
->
-> I'm seeing jumps of about the guest uptime (indicating a reset of the
-> counter). It's expected because we won't call `KVM_SET_CLOCK` to
-> restore any value.
->
-> We first noticed it because after some migrations `dwm.exe` crashes with
-> the "(NTSTATUS) 0x8898009b - QueryPerformanceCounter returned a time in
-> the past." error code.
->
-> I can also confirm the following hack makes the behavior disappear:
->
-> ```
-> diff --git a/hw/i386/kvm/clock.c b/hw/i386/kvm/clock.c
-> index 64283358f9..f334bdf35f 100644
-> --- a/hw/i386/kvm/clock.c
-> +++ b/hw/i386/kvm/clock.c
-> @@ -332,11 +332,7 @@ void kvmclock_create(void)
->  {
->      X86CPU *cpu = X86_CPU(first_cpu);
->
-> -    if (kvm_enabled() &&
-> -        cpu->env.features[FEAT_KVM] & ((1ULL << KVM_FEATURE_CLOCKSOURCE) |
-> -                                       (1ULL << KVM_FEATURE_CLOCKSOURCE2))) {
-> -        sysbus_create_simple(TYPE_KVM_CLOCK, -1, NULL);
-> -    }
-> +    sysbus_create_simple(TYPE_KVM_CLOCK, -1, NULL);
->  }
-> ```
-
-Oh, I think I see what's going on. When you add 'kvm=off'
-cpu->env.features[FEAT_KVM] is reset (see x86_cpu_expand_features()) so
-kvmclock QEMU device is not created and nobody calls KVM_SET_CLOCK on
-migration.
-
-In case we really want to support 'kvm=off' I think we can add Hyper-V
-features check here along with KVM, this should do the job.
-
-Does the untested
-
-diff --git a/hw/i386/kvm/clock.c b/hw/i386/kvm/clock.c
-index 64283358f91d..e03b2ca6d8f6 100644
---- a/hw/i386/kvm/clock.c
-+++ b/hw/i386/kvm/clock.c
-@@ -333,8 +333,9 @@ void kvmclock_create(void)
-     X86CPU *cpu = X86_CPU(first_cpu);
- 
-     if (kvm_enabled() &&
--        cpu->env.features[FEAT_KVM] & ((1ULL << KVM_FEATURE_CLOCKSOURCE) |
--                                       (1ULL << KVM_FEATURE_CLOCKSOURCE2))) {
-+        ((cpu->env.features[FEAT_KVM] & ((1ULL << KVM_FEATURE_CLOCKSOURCE) |
-+                                         (1ULL << KVM_FEATURE_CLOCKSOURCE2))) ||
-+         (cpu->env.features[FEAT_HYPERV_EAX] & HV_TIME_REF_COUNT_AVAILABLE))) {
-         sysbus_create_simple(TYPE_KVM_CLOCK, -1, NULL);
-     }
- }
-
-help?
-
-(I don't think we need to remove all 'if (kvm_enabled())' checks from
-machine types as 'kvm=off' should not be related).
-
--- 
-Vitaly
-
-On Wed, Sep 16, 2020 at 02:50:56PM +0200, Vitaly Kuznetsov wrote:
-[...]
-
-> Oh, I think I see what's going on. When you add 'kvm=off'
-> cpu->env.features[FEAT_KVM] is reset (see x86_cpu_expand_features()) so
-> kvmclock QEMU device is not created and nobody calls KVM_SET_CLOCK on
-> migration.
->
-> In case we really want to support 'kvm=off' I think we can add Hyper-V
-> features check here along with KVM, this should do the job.
->
-> Does the untested
->
-> diff --git a/hw/i386/kvm/clock.c b/hw/i386/kvm/clock.c
-> index 64283358f91d..e03b2ca6d8f6 100644
-> --- a/hw/i386/kvm/clock.c
-> +++ b/hw/i386/kvm/clock.c
-> @@ -333,8 +333,9 @@ void kvmclock_create(void)
->      X86CPU *cpu = X86_CPU(first_cpu);
->
->      if (kvm_enabled() &&
-> -        cpu->env.features[FEAT_KVM] & ((1ULL << KVM_FEATURE_CLOCKSOURCE) |
-> -                                       (1ULL << KVM_FEATURE_CLOCKSOURCE2))) {
-> +        ((cpu->env.features[FEAT_KVM] & ((1ULL << KVM_FEATURE_CLOCKSOURCE) |
-> +                                         (1ULL << KVM_FEATURE_CLOCKSOURCE2))) ||
-> +         (cpu->env.features[FEAT_HYPERV_EAX] & HV_TIME_REF_COUNT_AVAILABLE))) {
->          sysbus_create_simple(TYPE_KVM_CLOCK, -1, NULL);
->      }
->  }
->
-> help?
-
-It appears to work :)
-
-> (I don't think we need to remove all 'if (kvm_enabled())' checks from
-> machine types as 'kvm=off' should not be related).
-
-Indeed (I didn't look at the macro, it was just quick & dirty).
-
-> --
-> Vitaly
-
--- 
-Antoine 'xdbob' Damhet
-
-On 16/09/20 13:29, Dr. David Alan Gilbert wrote:
-> I have tracked the bug to the fact that `kvmclock` is not exposed and
-> disabled from qemu PoV but is in fact used by `hv-time` (in KVM).
->
-> I think we should enable the `kvmclock` (qemu device) if `hv-time` is
-> present and add Hyper-V support for the `kvmclock_current_nsec`
-> function.
-Yes, this seems correct.  I would have to check but it may even be
-better to _always_ send kvmclock data in the live migration stream.
-
-Paolo
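-
-For reference, the reason `kvmclock_current_nsec` would need Hyper-V awareness
-is that with hv_time the guest derives time from the Hyper-V reference
-counter / TSC reference page rather than the kvmclock pvti page. A small
-standalone sketch of the formula such support would have to evaluate; the
-field layout follows the Hyper-V TLFS as understood here, the numbers are
-illustrative only, and this is not QEMU code:
-
-```
-/* Standalone sketch (not QEMU code): computing the Hyper-V reference
- * counter (100 ns units) from a TSC reference page snapshot:
- *   ref = ((tsc * tsc_scale) >> 64) + tsc_offset */
-#include <stdint.h>
-#include <stdio.h>
-
-struct hv_ref_tsc_page {     /* assumed layout, per the Hyper-V TLFS */
-    uint32_t tsc_sequence;   /* 0: page invalid, fall back to the MSR */
-    uint32_t reserved;
-    uint64_t tsc_scale;
-    int64_t  tsc_offset;
-};
-
-static uint64_t hv_ref_counter(const struct hv_ref_tsc_page *p, uint64_t tsc)
-{
-    /* 64x64 multiply, keep the high 64 bits of the 128-bit product */
-    unsigned __int128 prod = (unsigned __int128)tsc * p->tsc_scale;
-
-    return (uint64_t)(prod >> 64) + (uint64_t)p->tsc_offset;
-}
-
-int main(void)
-{
-    /* Illustrative scale for a 2.5 GHz TSC: 2^64 / 250, so one second
-     * of TSC ticks maps to roughly 10,000,000 units of 100 ns. */
-    struct hv_ref_tsc_page page = {
-        .tsc_sequence = 1,
-        .tsc_scale    = (uint64_t)(((unsigned __int128)1 << 64) / 250),
-        .tsc_offset   = 0,
-    };
-
-    printf("%llu\n", (unsigned long long)hv_ref_counter(&page, 2500000000ULL));
-    return 0;
-}
-```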
-
-Paolo Bonzini <pbonzini@redhat.com> writes:
-
-> On 16/09/20 13:29, Dr. David Alan Gilbert wrote:
->> I have tracked the bug to the fact that `kvmclock` is not exposed and
->> disabled from qemu PoV but is in fact used by `hv-time` (in KVM).
->>
->> I think we should enable the `kvmclock` (qemu device) if `hv-time` is
->> present and add Hyper-V support for the `kvmclock_current_nsec`
->> function.
->
-> Yes, this seems correct.  I would have to check but it may even be
-> better to _always_ send kvmclock data in the live migration stream.
-
-The question I have is: with 'kvm=off', do we actually restore the TSC
-reading on migration? (I guess the answer is 'no', or the Hyper-V TSC
-page would 'just work'.) So yeah, maybe dropping the
-'cpu->env.features[FEAT_KVM]' check is the right fix.
-
--- 
-Vitaly
-