author     Christian Krinitsin <mail@krinitsin.com>  2025-06-05 14:16:30 +0000
committer  Christian Krinitsin <mail@krinitsin.com>  2025-06-05 14:16:30 +0000
commit     140a79ffee69434ba0fbfde4cefb9fe5e82d93b4 (patch)
tree       6b9607b2cea6e5fdf90e8964ea1affed408314ca /results/classifier/009/other
parent     ee1d992c43e7c9b321bdb14e9a8d15371d9bfda7 (diff)
download   qemu-analysis-140a79ffee69434ba0fbfde4cefb9fe5e82d93b4.tar.gz
           qemu-analysis-140a79ffee69434ba0fbfde4cefb9fe5e82d93b4.zip
add results
Diffstat (limited to 'results/classifier/009/other')
-rw-r--r--  results/classifier/009/other/02364653   373
-rw-r--r--  results/classifier/009/other/02572177   431
-rw-r--r--  results/classifier/009/other/12869209    98
-rw-r--r--  results/classifier/009/other/13442371   379
-rw-r--r--  results/classifier/009/other/16201167   110
-rw-r--r--  results/classifier/009/other/21247035  1331
-rw-r--r--  results/classifier/009/other/23270873   702
-rw-r--r--  results/classifier/009/other/25842545   212
-rw-r--r--  results/classifier/009/other/25892827  1087
-rw-r--r--  results/classifier/009/other/32484936   233
-rw-r--r--  results/classifier/009/other/35170175   531
-rw-r--r--  results/classifier/009/other/42974450   439
-rw-r--r--  results/classifier/009/other/55367348   542
-rw-r--r--  results/classifier/009/other/55753058   303
-rw-r--r--  results/classifier/009/other/56309929   190
-rw-r--r--  results/classifier/009/other/56937788   354
-rw-r--r--  results/classifier/009/other/57756589  1431
-rw-r--r--  results/classifier/009/other/63565653    59
-rw-r--r--  results/classifier/009/other/65781993  2803
-rw-r--r--  results/classifier/009/other/66743673   374
-rw-r--r--  results/classifier/009/other/68897003   726
-rw-r--r--  results/classifier/009/other/70021271  7458
-rw-r--r--  results/classifier/009/other/70416488  1189
-rw-r--r--  results/classifier/009/other/81775929   245
24 files changed, 21600 insertions, 0 deletions
diff --git a/results/classifier/009/other/02364653 b/results/classifier/009/other/02364653
new file mode 100644
index 000000000..e6bddb4a4
--- /dev/null
+++ b/results/classifier/009/other/02364653
@@ -0,0 +1,373 @@
+other: 0.956
+graphic: 0.948
+permissions: 0.944
+semantic: 0.942
+debug: 0.940
+PID: 0.938
+performance: 0.934
+device: 0.928
+boot: 0.925
+socket: 0.924
+vnc: 0.922
+KVM: 0.911
+files: 0.908
+network: 0.881
+
+[Qemu-devel] [BUG] Inappropriate size of target_sigset_t
+
+Hello, Peter, Laurent,
+
+While working on another problem yesterday, I think I discovered a
+long-standing bug in QEMU Linux user mode: our target_sigset_t structure is
+eight times smaller than it should be!
+
+In this code segment from syscall_defs.h:
+
+#ifdef TARGET_MIPS
+#define TARGET_NSIG        128
+#else
+#define TARGET_NSIG        64
+#endif
+#define TARGET_NSIG_BPW    TARGET_ABI_BITS
+#define TARGET_NSIG_WORDS  (TARGET_NSIG / TARGET_NSIG_BPW)
+
+typedef struct {
+    abi_ulong sig[TARGET_NSIG_WORDS];
+} target_sigset_t;
+
+... TARGET_ABI_BITS should be replaced by a constant eight times smaller
+(semantically, we need TARGET_ABI_BYTES, but it is not defined): what is
+needed is actually "a byte per signal" in target_sigset_t, whereas we only
+allow "a bit per signal".
+
+All this probably sounds to you like something impossible, since this code has
+been in QEMU "since forever", but I checked everything, and the bug seems real.
+I would be glad if you could prove me wrong.
+
+I just wanted to let you know about this, given the sensitive timing of the
+current softfreeze, and the fact that I won't be able to do more investigation
+on this in the coming weeks, since I am busy with other tasks, but perhaps you
+can analyze it and do whatever you consider appropriate.
+
+Yours,
+Aleksandar
+
+On 03/07/2019 at 21:46, Aleksandar Markovic wrote:
+> Hello, Peter, Laurent,
+>
+> While working on another problem yesterday, I think I discovered a
+> long-standing bug in QEMU Linux user mode: our target_sigset_t structure is
+> eight times smaller than it should be!
+>
+> In this code segment from syscall_defs.h:
+>
+> #ifdef TARGET_MIPS
+> #define TARGET_NSIG        128
+> #else
+> #define TARGET_NSIG        64
+> #endif
+> #define TARGET_NSIG_BPW    TARGET_ABI_BITS
+> #define TARGET_NSIG_WORDS  (TARGET_NSIG / TARGET_NSIG_BPW)
+>
+> typedef struct {
+>     abi_ulong sig[TARGET_NSIG_WORDS];
+> } target_sigset_t;
+>
+> ... TARGET_ABI_BITS should be replaced by a constant eight times smaller
+> (semantically, we need TARGET_ABI_BYTES, but it is not defined): what is
+> needed is actually "a byte per signal" in target_sigset_t, whereas we only
+> allow "a bit per signal".
+TARGET_NSIG is divided by TARGET_ABI_BITS which gives you the number of
+abi_ulong words we need in target_sigset_t.
+
+> All this probably sounds to you like something impossible, since this code
+> has been in QEMU "since forever", but I checked everything, and the bug
+> seems real. I would be glad if you could prove me wrong.
+>
+> I just wanted to let you know about this, given the sensitive timing of the
+> current softfreeze, and the fact that I won't be able to do more
+> investigation on this in the coming weeks, since I am busy with other tasks,
+> but perhaps you can analyze it and do whatever you consider appropriate.
+If I compare with kernel, it looks good:
+
+In Linux:
+
+  arch/mips/include/uapi/asm/signal.h
+
+  #define _NSIG           128
+  #define _NSIG_BPW       (sizeof(unsigned long) * 8)
+  #define _NSIG_WORDS     (_NSIG / _NSIG_BPW)
+
+  typedef struct {
+          unsigned long sig[_NSIG_WORDS];
+  } sigset_t;
+
+_NSIG_BPW is 8 * 8 = 64 on MIPS64 or 4 * 8 = 32 on MIPS
+
+In QEMU:
+
+TARGET_NSIG_BPW is TARGET_ABI_BITS which is  TARGET_LONG_BITS which is
+64 on MIPS64 and 32 on MIPS.
+
+I think there is no problem.
+
+Thanks,
+Laurent
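+
+(For concreteness, the arithmetic Laurent describes can be checked with a small
+standalone sketch -- illustrative only, not QEMU code -- assuming a 32-bit MIPS
+ABI where abi_ulong is 32 bits wide:)
+
+/* Illustrative sketch, not QEMU code: 128 signals at one *bit* per signal
+ * fit in 4 x 32-bit words, i.e. 16 bytes, matching the kernel's sigset_t. */
+#include <assert.h>
+#include <stdint.h>
+
+#define TARGET_NSIG       128
+#define TARGET_NSIG_BPW   32   /* bits per abi_ulong on a 32-bit ABI */
+#define TARGET_NSIG_WORDS (TARGET_NSIG / TARGET_NSIG_BPW)
+
+typedef struct {
+    uint32_t sig[TARGET_NSIG_WORDS];
+} target_sigset_t;
+
+int main(void)
+{
+    assert(TARGET_NSIG_WORDS == 4);
+    assert(sizeof(target_sigset_t) == 16);
+    return 0;
+}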
+
+From: Laurent Vivier <address@hidden>
+> If I compare with kernel, it looks good:
+> ...
+> I think there is no problem.
+
+Sure, thanks for such a fast response - again, I am glad if you are right.
+However, for some reason, glibc (and musl too) define sigset_t differently
+than the kernel. Please take a look. I am not sure whether this is handled
+correctly in our code.
+
+Yours,
+Aleksandar
+
+> Thanks,
+> Laurent
+
+On Wed, 3 Jul 2019 at 21:20, Aleksandar Markovic <address@hidden> wrote:
+> From: Laurent Vivier <address@hidden>
+> > If I compare with kernel, it looks good:
+> > ...
+> > I think there is no problem.
+>
+> Sure, thanks for such a fast response - again, I am glad if you are right.
+> However, for some reason, glibc (and musl too) define sigset_t differently
+> than the kernel. Please take a look. I am not sure whether this is handled
+> correctly in our code.
+Yeah, the libc definitions of sigset_t don't match the
+kernel ones (this is for obscure historical reasons IIRC).
+We're providing implementations of the target
+syscall interface, so our target_sigset_t should be the
+target kernel's version (and the target libc's version doesn't
+matter to us). On the other hand we will be using the
+host libc version, I think, so a little caution is required
+and it's possible we have some bugs in our code.
+
+thanks
+-- PMM
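+
+(As a rough illustration of the libc/kernel mismatch Peter mentions -- the
+definition below is paraphrased from glibc's headers and may differ slightly
+between versions:)
+
+/* Paraphrased from glibc: the user-space sigset_t is a fixed 1024-bit set
+ * (128 bytes on any host), independent of the kernel's _NSIG. */
+#define _SIGSET_NWORDS (1024 / (8 * sizeof (unsigned long int)))
+typedef struct {
+    unsigned long int __val[_SIGSET_NWORDS];
+} __sigset_t;
+
+/* The MIPS kernel ABI, by contrast, uses only _NSIG == 128 bits (16 bytes),
+ * so QEMU's target_sigset_t follows the kernel layout and is converted
+ * explicitly whenever the host libc's sigset_t is involved. */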
+
+> From: Peter Maydell <address@hidden>
+>
+> On Wed, 3 Jul 2019 at 21:20, Aleksandar Markovic <address@hidden> wrote:
+> > From: Laurent Vivier <address@hidden>
+> > > If I compare with kernel, it looks good:
+> > > ...
+> > > I think there is no problem.
+> >
+> > Sure, thanks for such a fast response - again, I am glad if you are right.
+> > However, for some reason, glibc (and musl too) define sigset_t differently
+> > than the kernel. Please take a look. I am not sure whether this is handled
+> > correctly in our code.
+>
+> Yeah, the libc definitions of sigset_t don't match the
+> kernel ones (this is for obscure historical reasons IIRC).
+> We're providing implementations of the target
+> syscall interface, so our target_sigset_t should be the
+> target kernel's version (and the target libc's version doesn't
+> matter to us). On the other hand we will be using the
+> host libc version, I think, so a little caution is required
+> and it's possible we have some bugs in our code.
+OK, I gather that this is not something that requires our immediate attention
+(for 4.1), but we can analyze it later on.
+
+Thanks for response!!
+
+Sincerely,
+Aleksandar
+
+> thanks
+> -- PMM
+
+On 03/07/2019 at 22:28, Peter Maydell wrote:
+> On Wed, 3 Jul 2019 at 21:20, Aleksandar Markovic <address@hidden> wrote:
+> > From: Laurent Vivier <address@hidden>
+> > > If I compare with kernel, it looks good:
+> > > ...
+> > > I think there is no problem.
+> >
+> > Sure, thanks for such a fast response - again, I am glad if you are right.
+> > However, for some reason, glibc (and musl too) define sigset_t differently
+> > than the kernel. Please take a look. I am not sure whether this is handled
+> > correctly in our code.
+>
+> Yeah, the libc definitions of sigset_t don't match the
+> kernel ones (this is for obscure historical reasons IIRC).
+> We're providing implementations of the target
+> syscall interface, so our target_sigset_t should be the
+> target kernel's version (and the target libc's version doesn't
+> matter to us). On the other hand we will be using the
+> host libc version, I think, so a little caution is required
+> and it's possible we have some bugs in our code.
+That's why we need host_to_target_sigset_internal() and
+target_to_host_sigset_internal(), which translate the signal sets between the
+guest kernel interface and the host libc interface.
+
+void host_to_target_sigset_internal(target_sigset_t *d,
+                                    const sigset_t *s)
+{
+    int i;
+    target_sigemptyset(d);
+    for (i = 1; i <= TARGET_NSIG; i++) {
+        if (sigismember(s, i)) {
+            target_sigaddset(d, host_to_target_signal(i));
+        }
+    }
+}
+
+void target_to_host_sigset_internal(sigset_t *d,
+                                    const target_sigset_t *s)
+{
+    int i;
+    sigemptyset(d);
+    for (i = 1; i <= TARGET_NSIG; i++) {
+        if (target_sigismember(s, i)) {
+            sigaddset(d, target_to_host_signal(i));
+        }
+    }
+}
+
+Thanks,
+Laurent
+
+Hi Aleksandar,
+
+On Wed, Jul 3, 2019 at 12:48 PM Aleksandar Markovic
+<address@hidden> wrote:
+> #define TARGET_NSIG_BPW    TARGET_ABI_BITS
+> #define TARGET_NSIG_WORDS  (TARGET_NSIG / TARGET_NSIG_BPW)
+>
+> typedef struct {
+>     abi_ulong sig[TARGET_NSIG_WORDS];
+> } target_sigset_t;
+>
+> ... TARGET_ABI_BITS should be replaced by a constant eight times smaller
+> (semantically, we need TARGET_ABI_BYTES, but it is not defined): what is
+> needed is actually "a byte per signal" in target_sigset_t, whereas we only
+> allow "a bit per signal".
+Why do we need a byte per target signal, if the functions in linux-user/signal.c
+operate with bits?
+
+-- 
+Thanks.
+-- Max
+
+> Why do we need a byte per target signal, if the functions in
+> linux-user/signal.c operate with bits?
+Max,
+
+I did not base my findings on code analysis, but on dumping the sizes/offsets
+of elements of some structures, as they are emulated in QEMU and as they are in
+real systems. So, I can't really answer your question.
+
+Yours,
+Aleksandar
+
+> --
+> Thanks.
+> -- Max
+
diff --git a/results/classifier/009/other/02572177 b/results/classifier/009/other/02572177
new file mode 100644
index 000000000..96f1989e9
--- /dev/null
+++ b/results/classifier/009/other/02572177
@@ -0,0 +1,431 @@
+other: 0.869
+permissions: 0.812
+device: 0.791
+performance: 0.781
+semantic: 0.770
+debug: 0.756
+graphic: 0.747
+socket: 0.742
+PID: 0.731
+network: 0.708
+vnc: 0.706
+KVM: 0.669
+boot: 0.658
+files: 0.640
+
+[Qemu-devel] Reply: Re: [BUG] COLO failover hang
+
+hi.
+
+I tested the current qemu git master and hit the same problem.
+
+(gdb) bt
+#0  qio_channel_socket_readv (ioc=0x7f65911b4e50, iov=0x7f64ef3fd880, niov=1,
+    fds=0x0, nfds=0x0, errp=0x0) at io/channel-socket.c:461
+#1  0x00007f658e4aa0c2 in qio_channel_read (address@hidden, address@hidden "",
+    address@hidden, address@hidden) at io/channel.c:114
+#2  0x00007f658e3ea990 in channel_get_buffer (opaque=<optimized out>,
+    buf=0x7f65907cb838 "", pos=<optimized out>, size=32768) at
+    migration/qemu-file-channel.c:78
+#3  0x00007f658e3e97fc in qemu_fill_buffer (f=0x7f65907cb800) at
+    migration/qemu-file.c:295
+#4  0x00007f658e3ea2e1 in qemu_peek_byte (address@hidden, address@hidden) at
+    migration/qemu-file.c:555
+#5  0x00007f658e3ea34b in qemu_get_byte (address@hidden) at
+    migration/qemu-file.c:568
+#6  0x00007f658e3ea552 in qemu_get_be32 (address@hidden) at
+    migration/qemu-file.c:648
+#7  0x00007f658e3e66e5 in colo_receive_message (f=0x7f65907cb800,
+    address@hidden) at migration/colo.c:244
+#8  0x00007f658e3e681e in colo_receive_check_message (f=<optimized out>,
+    address@hidden, address@hidden) at migration/colo.c:264
+#9  0x00007f658e3e740e in colo_process_incoming_thread (opaque=0x7f658eb30360
+    <mis_current.31286>) at migration/colo.c:577
+#10 0x00007f658be09df3 in start_thread () from /lib64/libpthread.so.0
+#11 0x00007f65881983ed in clone () from /lib64/libc.so.6
+
+(gdb) p ioc->name
+$2 = 0x7f658ff7d5c0 "migration-socket-incoming"
+(gdb) p ioc->features       <- does not support QIO_CHANNEL_FEATURE_SHUTDOWN
+$3 = 0
+
+(gdb) bt
+#0  socket_accept_incoming_migration (ioc=0x7fdcceeafa90, condition=G_IO_IN,
+    opaque=0x7fdcceeafa90) at migration/socket.c:137
+#1  0x00007fdcc6966350 in g_main_dispatch (context=<optimized out>) at
+    gmain.c:3054
+#2  g_main_context_dispatch (context=<optimized out>, address@hidden) at
+    gmain.c:3630
+#3  0x00007fdccb8a6dcc in glib_pollfds_poll () at util/main-loop.c:213
+#4  os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:258
+#5  main_loop_wait (address@hidden) at util/main-loop.c:506
+#6  0x00007fdccb526187 in main_loop () at vl.c:1898
+#7  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at
+    vl.c:4709
+
+(gdb) p ioc->features
+$1 = 6
+(gdb) p ioc->name
+$2 = 0x7fdcce1b1ab0 "migration-socket-listener"
+
+Maybe socket_accept_incoming_migration should call
+qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)?
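+
+(A minimal, untested sketch of that idea, written as a standalone helper rather
+than an exact patch; only qio_channel_set_feature()/qio_channel_has_feature()
+and QIO_CHANNEL_FEATURE_SHUTDOWN are real QEMU names, the helper itself is
+illustrative:)
+
+#include "io/channel.h"
+
+/* Mark an accepted migration channel as shutdown-capable so that failover
+ * code can call qio_channel_shutdown() and wake the COLO incoming thread
+ * out of its blocking read. */
+static void mark_channel_shutdown_capable(QIOChannel *ioc)
+{
+    if (!qio_channel_has_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)) {
+        qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN);
+    }
+}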
+
+thank you.
+
+Original mail
+
+From: address@hidden
+To: 王广10165992 address@hidden
+Cc: address@hidden address@hidden
+Date: 2017-03-16 14:46
+Subject: Re: [Qemu-devel] COLO failover hang
+
+On 03/15/2017 05:06 PM, wangguang wrote:
+> I am testing the QEMU COLO feature described here:
+> [QEMU Wiki](http://wiki.qemu-project.org/Features/COLO).
+>
+> When the Primary Node panics, the Secondary Node qemu hangs,
+> hanging at recvmsg in qio_channel_socket_readv.
+> And I run { 'execute': 'nbd-server-stop' } and { "execute":
+> "x-colo-lost-heartbeat" } in the Secondary VM's
+> monitor, and the Secondary Node qemu still hangs at recvmsg.
+>
+> I found that the colo in qemu is not complete yet.
+> Does colo have any plan for development?
+
+Yes, we are developing it. You can see some of the patches we are pushing.
+
+> Has anyone ever run it successfully? Any help is appreciated!
+
+Our internal version can run it successfully.
+For the failover details you can ask Zhanghailiang for help.
+Next time, if you have questions about COLO,
+please cc me and zhanghailiang address@hidden
+
+Thanks
+Zhang Chen
+
+
+>
+>
+>
+> centos7.2+qemu2.7.50
+> (gdb) bt
+> #0  0x00007f3e00cc86ad in recvmsg () from /lib64/libpthread.so.0
+> #1  0x00007f3e0332b738 in qio_channel_socket_readv (ioc=<optimized out>,
+> iov=<optimized out>, niov=<optimized out>, fds=0x0, nfds=0x0, errp=0x0) at
+> io/channel-socket.c:497
+> #2  0x00007f3e03329472 in qio_channel_read (address@hidden,
+> address@hidden "", address@hidden,
+> address@hidden) at io/channel.c:97
+> #3  0x00007f3e032750e0 in channel_get_buffer (opaque=<optimized out>,
+> buf=0x7f3e05910f38 "", pos=<optimized out>, size=32768) at
+> migration/qemu-file-channel.c:78
+> #4  0x00007f3e0327412c in qemu_fill_buffer (f=0x7f3e05910f00) at
+> migration/qemu-file.c:257
+> #5  0x00007f3e03274a41 in qemu_peek_byte (address@hidden,
+> address@hidden) at migration/qemu-file.c:510
+> #6  0x00007f3e03274aab in qemu_get_byte (address@hidden) at
+> migration/qemu-file.c:523
+> #7  0x00007f3e03274cb2 in qemu_get_be32 (address@hidden) at
+> migration/qemu-file.c:603
+> #8  0x00007f3e03271735 in colo_receive_message (f=0x7f3e05910f00,
+> address@hidden) at migration/colo.c:215
+> #9  0x00007f3e0327250d in colo_wait_handle_message (errp=0x7f3d62bfaa48,
+> checkpoint_request=<synthetic pointer>, f=<optimized out>) at
+> migration/colo.c:546
+> #10 colo_process_incoming_thread (opaque=0x7f3e067245e0) at
+> migration/colo.c:649
+> #11 0x00007f3e00cc1df3 in start_thread () from /lib64/libpthread.so.0
+> #12 0x00007f3dfc9c03ed in clone () from /lib64/libc.so.6
+>
+>
+>
+>
+>
+> --
+> View this message in context:
+http://qemu.11.n7.nabble.com/COLO-failover-hang-tp473250.html
+> Sent from the Developer mailing list archive at Nabble.com.
+>
+>
+>
+>
+
+-- 
+Thanks
+Zhang Chen
+
+Hi, Wang.
+
+You can test this branch:
+https://github.com/coloft/qemu/tree/colo-v5.1-developing-COLO-frame-v21-with-shared-disk
+and please follow the wiki to make sure your configuration is correct:
+http://wiki.qemu-project.org/Features/COLO
+Thanks
+
+Zhang Chen
+
+
+On 03/21/2017 03:27 PM, address@hidden wrote:
+> hi.
+>
+> I tested the current qemu git master and hit the same problem.
+>
+> (gdb) bt
+> #0  qio_channel_socket_readv (ioc=0x7f65911b4e50, iov=0x7f64ef3fd880,
+>     niov=1, fds=0x0, nfds=0x0, errp=0x0) at io/channel-socket.c:461
+> #1  0x00007f658e4aa0c2 in qio_channel_read (address@hidden, address@hidden "",
+>     address@hidden, address@hidden) at io/channel.c:114
+> #2  0x00007f658e3ea990 in channel_get_buffer (opaque=<optimized out>,
+>     buf=0x7f65907cb838 "", pos=<optimized out>, size=32768) at
+>     migration/qemu-file-channel.c:78
+> #3  0x00007f658e3e97fc in qemu_fill_buffer (f=0x7f65907cb800) at
+>     migration/qemu-file.c:295
+> #4  0x00007f658e3ea2e1 in qemu_peek_byte (address@hidden, address@hidden) at
+>     migration/qemu-file.c:555
+> #5  0x00007f658e3ea34b in qemu_get_byte (address@hidden) at
+>     migration/qemu-file.c:568
+> #6  0x00007f658e3ea552 in qemu_get_be32 (address@hidden) at
+>     migration/qemu-file.c:648
+> #7  0x00007f658e3e66e5 in colo_receive_message (f=0x7f65907cb800,
+>     address@hidden) at migration/colo.c:244
+> #8  0x00007f658e3e681e in colo_receive_check_message (f=<optimized out>,
+>     address@hidden, address@hidden) at migration/colo.c:264
+> #9  0x00007f658e3e740e in colo_process_incoming_thread
+>     (opaque=0x7f658eb30360 <mis_current.31286>) at migration/colo.c:577
+> #10 0x00007f658be09df3 in start_thread () from /lib64/libpthread.so.0
+> #11 0x00007f65881983ed in clone () from /lib64/libc.so.6
+>
+> (gdb) p ioc->name
+> $2 = 0x7f658ff7d5c0 "migration-socket-incoming"
+> (gdb) p ioc->features     <- does not support QIO_CHANNEL_FEATURE_SHUTDOWN
+> $3 = 0
+>
+> (gdb) bt
+> #0  socket_accept_incoming_migration (ioc=0x7fdcceeafa90,
+>     condition=G_IO_IN, opaque=0x7fdcceeafa90) at migration/socket.c:137
+> #1  0x00007fdcc6966350 in g_main_dispatch (context=<optimized out>) at
+>     gmain.c:3054
+> #2  g_main_context_dispatch (context=<optimized out>,
+>     address@hidden) at gmain.c:3630
+> #3  0x00007fdccb8a6dcc in glib_pollfds_poll () at util/main-loop.c:213
+> #4  os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:258
+> #5  main_loop_wait (address@hidden) at util/main-loop.c:506
+> #6  0x00007fdccb526187 in main_loop () at vl.c:1898
+> #7  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized
+>     out>) at vl.c:4709
+> (gdb) p ioc->features
+> $1 = 6
+> (gdb) p ioc->name
+> $2 = 0x7fdcce1b1ab0 "migration-socket-listener"
+>
+> Maybe socket_accept_incoming_migration should call
+> qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)?
+> thank you.
+
+Original mail
+*From:* address@hidden;
+*To:* 王广10165992; address@hidden;
+*Cc:* address@hidden; address@hidden;
+*Date:* 2017-03-16 14:46
+*Subject:* Re: [Qemu-devel] COLO failover hang
+
+
+
+
+On 03/15/2017 05:06 PM, wangguang wrote:
+> I am testing the QEMU COLO feature described here:
+> [QEMU Wiki](http://wiki.qemu-project.org/Features/COLO).
+>
+> When the Primary Node panics, the Secondary Node qemu hangs,
+> hanging at recvmsg in qio_channel_socket_readv.
+> And I run { 'execute': 'nbd-server-stop' } and { "execute":
+> "x-colo-lost-heartbeat" } in the Secondary VM's
+> monitor, and the Secondary Node qemu still hangs at recvmsg.
+>
+> I found that the colo in qemu is not complete yet.
+> Does colo have any plan for development?
+
+Yes, we are developing it. You can see some of the patches we are pushing.
+
+> Has anyone ever run it successfully? Any help is appreciated!
+
+Our internal version can run it successfully.
+For the failover details you can ask Zhanghailiang for help.
+Next time, if you have questions about COLO,
+please cc me and zhanghailiang address@hidden
+
+Thanks
+Zhang Chen
+
+
+>
+>
+>
+> centos7.2+qemu2.7.50
+> (gdb) bt
+> #0  0x00007f3e00cc86ad in recvmsg () from /lib64/libpthread.so.0
+> #1  0x00007f3e0332b738 in qio_channel_socket_readv (ioc=<optimized out>,
+> iov=<optimized out>, niov=<optimized out>, fds=0x0, nfds=0x0, errp=0x0) at
+> io/channel-socket.c:497
+> #2  0x00007f3e03329472 in qio_channel_read (address@hidden,
+> address@hidden "", address@hidden,
+> address@hidden) at io/channel.c:97
+> #3  0x00007f3e032750e0 in channel_get_buffer (opaque=<optimized out>,
+> buf=0x7f3e05910f38 "", pos=<optimized out>, size=32768) at
+> migration/qemu-file-channel.c:78
+> #4  0x00007f3e0327412c in qemu_fill_buffer (f=0x7f3e05910f00) at
+> migration/qemu-file.c:257
+> #5  0x00007f3e03274a41 in qemu_peek_byte (address@hidden,
+> address@hidden) at migration/qemu-file.c:510
+> #6  0x00007f3e03274aab in qemu_get_byte (address@hidden) at
+> migration/qemu-file.c:523
+> #7  0x00007f3e03274cb2 in qemu_get_be32 (address@hidden) at
+> migration/qemu-file.c:603
+> #8  0x00007f3e03271735 in colo_receive_message (f=0x7f3e05910f00,
+> address@hidden) at migration/colo.c:215
+> #9  0x00007f3e0327250d in colo_wait_handle_message (errp=0x7f3d62bfaa48,
+> checkpoint_request=<synthetic pointer>, f=<optimized out>) at
+> migration/colo.c:546
+> #10 colo_process_incoming_thread (opaque=0x7f3e067245e0) at
+> migration/colo.c:649
+> #11 0x00007f3e00cc1df3 in start_thread () from /lib64/libpthread.so.0
+> #12 0x00007f3dfc9c03ed in clone () from /lib64/libc.so.6
+>
+>
+>
+>
+>
+> --
+> View this message in context:
+http://qemu.11.n7.nabble.com/COLO-failover-hang-tp473250.html
+> Sent from the Developer mailing list archive at Nabble.com.
+>
+>
+>
+>
+
+--
+Thanks
+Zhang Chen
+
diff --git a/results/classifier/009/other/12869209 b/results/classifier/009/other/12869209
new file mode 100644
index 000000000..7eb2a9f04
--- /dev/null
+++ b/results/classifier/009/other/12869209
@@ -0,0 +1,98 @@
+other: 0.964
+device: 0.951
+files: 0.945
+permissions: 0.922
+performance: 0.906
+socket: 0.906
+PID: 0.894
+semantic: 0.891
+vnc: 0.885
+graphic: 0.879
+network: 0.858
+KVM: 0.857
+boot: 0.837
+debug: 0.813
+
+[BUG FIX][PATCH v3 0/3] vhost-user-blk: fix bug on device disconnection during initialization
+
+This is a series fixing a bug in vhost-user-blk.
+Is there any chance for it to be considered for the next rc?
+Thanks!
+Denis
+On 29.03.2021 16:44, Denis Plotnikov wrote:
+ping!
+On 25.03.2021 18:12, Denis Plotnikov wrote:
+v3:
+  * 0003: a new patch added fixing the problem on VM shutdown
+    (I stumbled on this bug after sending v2.)
+  * 0001: grammar fixes (Raphael)
+  * 0002: commit message fixes (Raphael)
+
+v2:
+  * split the initial patch into two (Raphael)
+  * rename init to realized (Raphael)
+  * remove unrelated comment (Raphael)
+
+When the vhost-user-blk device loses the connection to the daemon during
+the initialization phase, it kills qemu because of an assert in the code.
+This series fixes the bug.
+
+0001 is preparation for the fix
+0002 fixes the bug; its patch description has the full motivation for the series
+0003 (added in v3) fixes a bug on VM shutdown
+
+Denis Plotnikov (3):
+  vhost-user-blk: use different event handlers on initialization
+  vhost-user-blk: perform immediate cleanup if disconnect on
+    initialization
+  vhost-user-blk: add immediate cleanup on shutdown
+
+ hw/block/vhost-user-blk.c | 79 ++++++++++++++++++++++++---------------
+ 1 file changed, 48 insertions(+), 31 deletions(-)
+
+On 01.04.2021 14:21, Denis Plotnikov wrote:
+> This is a series fixing a bug in vhost-user-blk.
+More specifically, it's not just a bug but a crasher.
+
+Valentine
+Is there any chance for it to be considered for the next rc?
+
+Thanks!
+
+Denis
+
+On 29.03.2021 16:44, Denis Plotnikov wrote:
+ping!
+
+On 25.03.2021 18:12, Denis Plotnikov wrote:
+v3:
+   * 0003: a new patch added fixing the problem on VM shutdown
+     (I stumbled on this bug after sending v2.)
+   * 0001: grammar fixes (Raphael)
+   * 0002: commit message fixes (Raphael)
+
+v2:
+   * split the initial patch into two (Raphael)
+   * rename init to realized (Raphael)
+   * remove unrelated comment (Raphael)
+
+When the vhost-user-blk device loses the connection to the daemon during
+the initialization phase, it kills qemu because of an assert in the code.
+This series fixes the bug.
+
+0001 is preparation for the fix
+0002 fixes the bug; its patch description has the full motivation for the series
+0003 (added in v3) fixes a bug on VM shutdown
+
+Denis Plotnikov (3):
+   vhost-user-blk: use different event handlers on initialization
+   vhost-user-blk: perform immediate cleanup if disconnect on
+     initialization
+   vhost-user-blk: add immediate cleanup on shutdown
+
+  hw/block/vhost-user-blk.c | 79 ++++++++++++++++++++++++---------------
+  1 file changed, 48 insertions(+), 31 deletions(-)
+
diff --git a/results/classifier/009/other/13442371 b/results/classifier/009/other/13442371
new file mode 100644
index 000000000..6ea769a1c
--- /dev/null
+++ b/results/classifier/009/other/13442371
@@ -0,0 +1,379 @@
+other: 0.886
+device: 0.883
+KVM: 0.872
+debug: 0.866
+vnc: 0.858
+permissions: 0.854
+graphic: 0.850
+semantic: 0.850
+PID: 0.849
+performance: 0.841
+files: 0.837
+socket: 0.831
+boot: 0.815
+network: 0.811
+
+[Qemu-devel] [BUG] nanoMIPS support problem related to extract2 support for i386 TCG target
+
+Hello, Richard, Peter, and others.
+
+As a part of activities before 4.1 release, I tested nanoMIPS support
+in QEMU (which was officially fully integrated in 4.0, is currently
+limited to system mode only, and was tested in a similar fashion right
+prior to 4.0).
+
+This support appears to be broken now. The following command line works in
+4.0, but results in a kernel panic for the current tip of the tree:
+
+~/Build/qemu-test-revert-c6fb8c0cf704/mipsel-softmmu/qemu-system-mipsel
+-cpu I7200 -kernel generic_nano32r6el_page4k -M malta -serial stdio -m
+1G -hda nanomips32r6_le_sf_2017.05-03-59-gf5595d6.ext4 -append
+"mem=256m@0x0 rw console=ttyS0 vga=cirrus vesa=0x111 root=/dev/sda"
+
+(kernel and rootfs image files used in this command line can be
+downloaded from the locations mentioned in our user guide)
+
+The quick bisect points to the commit:
+
+commit c6fb8c0cf704c4a1a48c3e99e995ad4c58150dab
+Author: Richard Henderson <address@hidden>
+Date:   Mon Feb 25 11:42:35 2019 -0800
+
+    tcg/i386: Support INDEX_op_extract2_{i32,i64}
+
+    Signed-off-by: Richard Henderson <address@hidden>
+
+Please advise on further actions.
+
+Yours,
+Aleksandar
+
+On Fri, Jul 12, 2019 at 8:09 PM Aleksandar Markovic
+<address@hidden> wrote:
+> Hello, Richard, Peter, and others.
+>
+> As a part of activities before 4.1 release, I tested nanoMIPS support
+> in QEMU (which was officially fully integrated in 4.0, is currently
+> limited to system mode only, and was tested in a similar fashion right
+> prior to 4.0).
+>
+> This support appears to be broken now. The following command line works in
+> 4.0, but results in a kernel panic for the current tip of the tree:
+>
+> ~/Build/qemu-test-revert-c6fb8c0cf704/mipsel-softmmu/qemu-system-mipsel
+> -cpu I7200 -kernel generic_nano32r6el_page4k -M malta -serial stdio -m
+> 1G -hda nanomips32r6_le_sf_2017.05-03-59-gf5595d6.ext4 -append
+> "mem=256m@0x0 rw console=ttyS0 vga=cirrus vesa=0x111 root=/dev/sda"
+>
+> (kernel and rootfs image files used in this command line can be
+> downloaded from the locations mentioned in our user guide)
+>
+> The quick bisect points to the commit:
+>
+> commit c6fb8c0cf704c4a1a48c3e99e995ad4c58150dab
+> Author: Richard Henderson <address@hidden>
+> Date:   Mon Feb 25 11:42:35 2019 -0800
+>
+>     tcg/i386: Support INDEX_op_extract2_{i32,i64}
+>
+>     Signed-off-by: Richard Henderson <address@hidden>
+>
+> Please advise on further actions.
+Just to add a data point:
+
+If the following change is applied:
+
+diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
+index 928e8b8..b6a4cf2 100644
+--- a/tcg/i386/tcg-target.h
++++ b/tcg/i386/tcg-target.h
+@@ -124,7 +124,7 @@ extern bool have_avx2;
+ #define TCG_TARGET_HAS_deposit_i32      1
+ #define TCG_TARGET_HAS_extract_i32      1
+ #define TCG_TARGET_HAS_sextract_i32     1
+-#define TCG_TARGET_HAS_extract2_i32     1
++#define TCG_TARGET_HAS_extract2_i32     0
+ #define TCG_TARGET_HAS_movcond_i32      1
+ #define TCG_TARGET_HAS_add2_i32         1
+ #define TCG_TARGET_HAS_sub2_i32         1
+@@ -163,7 +163,7 @@ extern bool have_avx2;
+ #define TCG_TARGET_HAS_deposit_i64      1
+ #define TCG_TARGET_HAS_extract_i64      1
+ #define TCG_TARGET_HAS_sextract_i64     0
+-#define TCG_TARGET_HAS_extract2_i64     1
++#define TCG_TARGET_HAS_extract2_i64     0
+ #define TCG_TARGET_HAS_movcond_i64      1
+ #define TCG_TARGET_HAS_add2_i64         1
+ #define TCG_TARGET_HAS_sub2_i64         1
+
+... the problem disappears.
+
+
+> Yours,
+> Aleksandar
+
+On Fri, Jul 12, 2019 at 8:19 PM Aleksandar Markovic
+<address@hidden> wrote:
+> On Fri, Jul 12, 2019 at 8:09 PM Aleksandar Markovic
+> <address@hidden> wrote:
+> >
+> > Hello, Richard, Peter, and others.
+> >
+> > As a part of activities before 4.1 release, I tested nanoMIPS support
+> > in QEMU (which was officially fully integrated in 4.0, is currently
+> > limited to system mode only, and was tested in a similar fashion right
+> > prior to 4.0).
+> >
+> > This support appears to be broken now. The following command line works in
+> > 4.0, but results in a kernel panic for the current tip of the tree:
+> >
+> > ~/Build/qemu-test-revert-c6fb8c0cf704/mipsel-softmmu/qemu-system-mipsel
+> > -cpu I7200 -kernel generic_nano32r6el_page4k -M malta -serial stdio -m
+> > 1G -hda nanomips32r6_le_sf_2017.05-03-59-gf5595d6.ext4 -append
+> > "mem=256m@0x0 rw console=ttyS0 vga=cirrus vesa=0x111 root=/dev/sda"
+> >
+> > (kernel and rootfs image files used in this command line can be
+> > downloaded from the locations mentioned in our user guide)
+> >
+> > The quick bisect points to the commit:
+> >
+> > commit c6fb8c0cf704c4a1a48c3e99e995ad4c58150dab
+> > Author: Richard Henderson <address@hidden>
+> > Date:   Mon Feb 25 11:42:35 2019 -0800
+> >
+> >     tcg/i386: Support INDEX_op_extract2_{i32,i64}
+> >
+> >     Signed-off-by: Richard Henderson <address@hidden>
+> >
+> > Please advise on further actions.
+>
+> Just to add a data point:
+>
+> If the following change is applied:
+>
+> diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
+> index 928e8b8..b6a4cf2 100644
+> --- a/tcg/i386/tcg-target.h
+> +++ b/tcg/i386/tcg-target.h
+> @@ -124,7 +124,7 @@ extern bool have_avx2;
+>  #define TCG_TARGET_HAS_deposit_i32      1
+>  #define TCG_TARGET_HAS_extract_i32      1
+>  #define TCG_TARGET_HAS_sextract_i32     1
+> -#define TCG_TARGET_HAS_extract2_i32     1
+> +#define TCG_TARGET_HAS_extract2_i32     0
+>  #define TCG_TARGET_HAS_movcond_i32      1
+>  #define TCG_TARGET_HAS_add2_i32         1
+>  #define TCG_TARGET_HAS_sub2_i32         1
+> @@ -163,7 +163,7 @@ extern bool have_avx2;
+>  #define TCG_TARGET_HAS_deposit_i64      1
+>  #define TCG_TARGET_HAS_extract_i64      1
+>  #define TCG_TARGET_HAS_sextract_i64     0
+> -#define TCG_TARGET_HAS_extract2_i64     1
+> +#define TCG_TARGET_HAS_extract2_i64     0
+>  #define TCG_TARGET_HAS_movcond_i64      1
+>  #define TCG_TARGET_HAS_add2_i64         1
+>  #define TCG_TARGET_HAS_sub2_i64         1
+>
+> ... the problem disappears.
+It looks like the problem is in this code segment of tcg_gen_deposit_i32():
+
+        if (ofs == 0) {
+            tcg_gen_extract2_i32(ret, arg1, arg2, len);
+            tcg_gen_rotli_i32(ret, ret, len);
+            goto done;
+        }
+
+If that code segment is deleted altogether (which effectively forces usage of
+the "fallback" part of tcg_gen_deposit_i32()), the problem also vanishes
+(without the changes from my previous mail).
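+
+(For reference, a standalone sketch -- illustrative only, not QEMU code -- of
+the identity that the code segment above relies on: for ofs == 0 and
+0 < len < 32, deposit(a, b, 0, len) equals rotl(extract2(a, b, len), len), so
+a wrong extract2 implementation or wrong constant folding of
+INDEX_op_extract2_i32 breaks every deposit at offset 0:)
+
+#include <assert.h>
+#include <stdint.h>
+
+/* extract2: take 32 bits starting at bit 'pos' of the 64-bit value b:a. */
+static uint32_t extract2_32(uint32_t a, uint32_t b, unsigned pos)
+{
+    return (uint32_t)((((uint64_t)b << 32) | a) >> pos);
+}
+
+static uint32_t rotl32(uint32_t x, unsigned n)
+{
+    return (x << n) | (x >> (32 - n));
+}
+
+/* deposit: replace the 'len' low bits of a (at offset 0) with those of b. */
+static uint32_t deposit32(uint32_t a, uint32_t b, unsigned len)
+{
+    uint32_t mask = (1u << len) - 1;
+    return (a & ~mask) | (b & mask);
+}
+
+int main(void)
+{
+    uint32_t a = 0x12345678, b = 0xdeadbeef;
+    for (unsigned len = 1; len < 32; len++) {
+        assert(deposit32(a, b, len) == rotl32(extract2_32(a, b, len), len));
+    }
+    return 0;
+}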
+
+>
+> Yours,
+> Aleksandar
+
+Aleksandar Markovic <address@hidden> writes:
+
+> Hello, Richard, Peter, and others.
+>
+> As a part of activities before 4.1 release, I tested nanoMIPS support
+> in QEMU (which was officially fully integrated in 4.0, is currently
+> limited to system mode only, and was tested in a similar fashion right
+> prior to 4.0).
+>
+> This support appears to be broken now. The following command line works in
+> 4.0, but results in a kernel panic for the current tip of the tree:
+>
+> ~/Build/qemu-test-revert-c6fb8c0cf704/mipsel-softmmu/qemu-system-mipsel
+> -cpu I7200 -kernel generic_nano32r6el_page4k -M malta -serial stdio -m
+> 1G -hda nanomips32r6_le_sf_2017.05-03-59-gf5595d6.ext4 -append
+> "mem=256m@0x0 rw console=ttyS0 vga=cirrus vesa=0x111 root=/dev/sda"
+>
+> (kernel and rootfs image files used in this command line can be
+> downloaded from the locations mentioned in our user guide)
+>
+> The quick bisect points to the commit:
+>
+> commit c6fb8c0cf704c4a1a48c3e99e995ad4c58150dab
+> Author: Richard Henderson <address@hidden>
+> Date:   Mon Feb 25 11:42:35 2019 -0800
+>
+>     tcg/i386: Support INDEX_op_extract2_{i32,i64}
+>
+>     Signed-off-by: Richard Henderson <address@hidden>
+>
+> Please advise on further actions.
+Please see the fix:
+
+  Subject: [PATCH for-4.1] tcg: Fix constant folding of INDEX_op_extract2_i32
+  Date: Tue,  9 Jul 2019 14:19:00 +0200
+  Message-Id: <address@hidden>
+
+> Yours,
+> Aleksandar
+--
+Alex Bennée
+
+On Sat, Jul 13, 2019 at 9:21 AM Alex Bennée <address@hidden> wrote:
+> Please see the fix:
+>
+>   Subject: [PATCH for-4.1] tcg: Fix constant folding of INDEX_op_extract2_i32
+>   Date: Tue,  9 Jul 2019 14:19:00 +0200
+>   Message-Id: <address@hidden>
+Thanks, this fixed the behavior.
+
+Sincerely,
+Aleksandar
+
+> > Yours,
+> > Aleksandar
+>
+> --
+> Alex Bennée
+
diff --git a/results/classifier/009/other/16201167 b/results/classifier/009/other/16201167
new file mode 100644
index 000000000..5766e79dd
--- /dev/null
+++ b/results/classifier/009/other/16201167
@@ -0,0 +1,110 @@
+other: 0.954
+vnc: 0.946
+permissions: 0.939
+graphic: 0.937
+debug: 0.935
+semantic: 0.933
+KVM: 0.928
+performance: 0.927
+device: 0.911
+files: 0.900
+PID: 0.899
+socket: 0.892
+network: 0.864
+boot: 0.845
+
+[BUG] Qemu abort with error "kvm_mem_ioeventfd_add: error adding ioeventfd: File exists (17)"
+
+Hi list,
+
+When I did some tests in my virtual domain with live-attached virtio devices, I
+got a coredump file of QEMU.
+
+The error printed by qemu is "kvm_mem_ioeventfd_add: error adding ioeventfd:
+File exists (17)".
+And the call trace in the coredump file displays as below:
+#0  0x0000ffff89acecc8 in ?? () from /usr/lib64/libc.so.6
+#1  0x0000ffff89a8acbc in raise () from /usr/lib64/libc.so.6
+#2  0x0000ffff89a78d2c in abort () from /usr/lib64/libc.so.6
+#3  0x0000aaaabd7ccf1c in kvm_mem_ioeventfd_add (listener=<optimized out>, 
+section=<optimized out>, match_data=<optimized out>, data=<optimized out>, 
+e=<optimized out>) at ../accel/kvm/kvm-all.c:1607
+#4  0x0000aaaabd6e0304 in address_space_add_del_ioeventfds (fds_old_nb=164, 
+fds_old=0xffff5c80a1d0, fds_new_nb=160, fds_new=0xffff5c565080, 
+as=0xaaaabdfa8810 <address_space_memory>)
+    at ../softmmu/memory.c:795
+#5  address_space_update_ioeventfds (as=0xaaaabdfa8810 <address_space_memory>) 
+at ../softmmu/memory.c:856
+#6  0x0000aaaabd6e24d8 in memory_region_commit () at ../softmmu/memory.c:1113
+#7  0x0000aaaabd6e25c4 in memory_region_transaction_commit () at 
+../softmmu/memory.c:1144
+#8  0x0000aaaabd394eb4 in pci_bridge_update_mappings 
+(br=br@entry=0xaaaae755f7c0) at ../hw/pci/pci_bridge.c:248
+#9  0x0000aaaabd394f4c in pci_bridge_write_config (d=0xaaaae755f7c0, 
+address=44, val=<optimized out>, len=4) at ../hw/pci/pci_bridge.c:272
+#10 0x0000aaaabd39a928 in rp_write_config (d=0xaaaae755f7c0, address=44, 
+val=128, len=4) at ../hw/pci-bridge/pcie_root_port.c:39
+#11 0x0000aaaabd6df328 in memory_region_write_accessor (mr=0xaaaae63898d0, 
+addr=65580, value=<optimized out>, size=4, shift=<optimized out>, 
+mask=<optimized out>, attrs=...) at ../softmmu/memory.c:494
+#12 0x0000aaaabd6dcb6c in access_with_adjusted_size (addr=addr@entry=65580, 
+value=value@entry=0xffff817adc78, size=size@entry=4, access_size_min=<optimized 
+out>, access_size_max=<optimized out>,
+    access_fn=access_fn@entry=0xaaaabd6df284 <memory_region_write_accessor>, 
+mr=mr@entry=0xaaaae63898d0, attrs=attrs@entry=...) at ../softmmu/memory.c:556
+#13 0x0000aaaabd6e0dc8 in memory_region_dispatch_write 
+(mr=mr@entry=0xaaaae63898d0, addr=65580, data=<optimized out>, op=MO_32, 
+attrs=attrs@entry=...) at ../softmmu/memory.c:1534
+#14 0x0000aaaabd6d0574 in flatview_write_continue (fv=fv@entry=0xffff5c02da00, 
+addr=addr@entry=275146407980, attrs=attrs@entry=..., 
+ptr=ptr@entry=0xffff8aa8c028, len=len@entry=4,
+    addr1=<optimized out>, l=<optimized out>, mr=mr@entry=0xaaaae63898d0) at 
+/usr/src/debug/qemu-6.2.0-226.aarch64/include/qemu/host-utils.h:165
+#15 0x0000aaaabd6d4584 in flatview_write (len=4, buf=0xffff8aa8c028, attrs=..., 
+addr=275146407980, fv=0xffff5c02da00) at ../softmmu/physmem.c:3375
+#16 address_space_write (as=<optimized out>, addr=275146407980, attrs=..., 
+buf=buf@entry=0xffff8aa8c028, len=4) at ../softmmu/physmem.c:3467
+#17 0x0000aaaabd6d462c in address_space_rw (as=<optimized out>, addr=<optimized 
+out>, attrs=..., attrs@entry=..., buf=buf@entry=0xffff8aa8c028, len=<optimized 
+out>, is_write=<optimized out>)
+    at ../softmmu/physmem.c:3477
+#18 0x0000aaaabd7cf6e8 in kvm_cpu_exec (cpu=cpu@entry=0xaaaae625dfd0) at 
+../accel/kvm/kvm-all.c:2970
+#19 0x0000aaaabd7d09bc in kvm_vcpu_thread_fn (arg=arg@entry=0xaaaae625dfd0) at 
+../accel/kvm/kvm-accel-ops.c:49
+#20 0x0000aaaabd94ccd8 in qemu_thread_start (args=<optimized out>) at 
+../util/qemu-thread-posix.c:559
+
+
+By printing more info from the coredump file, I found that the addr of
+fds_old[146] and fds_new[146] is the same, but fds_old[146] belonged to a
+live-attached virtio-scsi device while fds_new[146] was owned by another
+live-attached virtio-net device.
+The reason for the addr conflict was then found in the VM's console log. Just
+before qemu aborted, the guest kernel crashed and kdump.service booted the
+dump-capture kernel, which re-allocated addresses for the devices.
+Because those virtio devices were live-attached after VM creation, different
+addrs may be assigned to them in the dump-capture kernel:
+
+the initial kernel booting log:
+[    1.663297] pci 0000:00:02.1: BAR 14: assigned [mem 0x11900000-0x11afffff]
+[    1.664560] pci 0000:00:02.1: BAR 15: assigned [mem 
+0x8001800000-0x80019fffff 64bit pref]
+
+the dump-capture kernel booting log:
+[    1.845211] pci 0000:00:02.0: BAR 14: assigned [mem 0x11900000-0x11bfffff]
+[    1.846542] pci 0000:00:02.0: BAR 15: assigned [mem 
+0x8001800000-0x8001afffff 64bit pref]
+
+
+I think directly aborting the qemu process may not be the best choice in this
+case, because it interrupts the work of kdump.service, so it fails to generate
+a memory dump of the crashed guest kernel.
+Perhaps, IMO, the error could simply be ignored in this case and kdump could be
+left to reboot the system after the memory dump finishes, but I failed to find
+a suitable place in the code to make that judgment.
+
+Any solution for this problem? I hope I can get some help here.
+
+Hao
+
diff --git a/results/classifier/009/other/21247035 b/results/classifier/009/other/21247035
new file mode 100644
index 000000000..ae43e9314
--- /dev/null
+++ b/results/classifier/009/other/21247035
@@ -0,0 +1,1331 @@
+other: 0.640
+permissions: 0.541
+device: 0.525
+KVM: 0.514
+debug: 0.468
+performance: 0.427
+graphic: 0.426
+files: 0.375
+semantic: 0.374
+PID: 0.370
+vnc: 0.367
+boot: 0.345
+network: 0.322
+socket: 0.322
+
+[Qemu-devel] [BUG] I/O thread segfault for QEMU on s390x
+
+Hi,
+I have been noticing some segfaults for QEMU on s390x, and I have been
+hitting this issue quite reliably (at least once in 10 runs of a test
+case). The qemu version is 2.11.50, and I have systemd-created coredumps
+when this happens.
+
+Here is a back trace of the segfaulting thread:
+
+
+#0  0x000003ffafed202c in swapcontext () from /lib64/libc.so.6
+#1  0x000002aa355c02ee in qemu_coroutine_new () at
+util/coroutine-ucontext.c:164
+#2  0x000002aa355bec34 in qemu_coroutine_create
+(address@hidden <blk_aio_read_entry>,
+address@hidden) at util/qemu-coroutine.c:76
+#3  0x000002aa35510262 in blk_aio_prwv (blk=0x2aa65fbefa0,
+offset=<optimized out>, bytes=<optimized out>, qiov=0x3ffa002a9c0,
+address@hidden <blk_aio_read_entry>, flags=0,
+cb=0x2aa35340a50 <virtio_blk_rw_complete>, opaque=0x3ffa002a960) at
+block/block-backend.c:1299
+#4  0x000002aa35510376 in blk_aio_preadv (blk=<optimized out>,
+offset=<optimized out>, qiov=<optimized out>, flags=<optimized out>,
+cb=<optimized out>, opaque=0x3ffa002a960) at block/block-backend.c:1392
+#5  0x000002aa3534114e in submit_requests (niov=<optimized out>,
+num_reqs=<optimized out>, start=<optimized out>, mrb=<optimized out>,
+blk=<optimized out>) at
+/usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:372
+#6  virtio_blk_submit_multireq (blk=<optimized out>,
+address@hidden) at
+/usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:402
+#7  0x000002aa353422e0 in virtio_blk_handle_vq (s=0x2aa6611e7d8,
+vq=0x3ffb0f5f010) at /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:620
+#8  0x000002aa3536655a in virtio_queue_notify_aio_vq
+(address@hidden) at
+/usr/src/debug/qemu-2.11.50/hw/virtio/virtio.c:1515
+#9  0x000002aa35366cd6 in virtio_queue_notify_aio_vq (vq=0x3ffb0f5f010)
+at /usr/src/debug/qemu-2.11.50/hw/virtio/virtio.c:1511
+#10 virtio_queue_host_notifier_aio_poll (opaque=0x3ffb0f5f078) at
+/usr/src/debug/qemu-2.11.50/hw/virtio/virtio.c:2409
+#11 0x000002aa355a8ba4 in run_poll_handlers_once
+(address@hidden) at util/aio-posix.c:497
+#12 0x000002aa355a9b74 in run_poll_handlers (max_ns=<optimized out>,
+ctx=0x2aa65f99310) at util/aio-posix.c:534
+#13 try_poll_mode (blocking=true, ctx=0x2aa65f99310) at util/aio-posix.c:562
+#14 aio_poll (ctx=0x2aa65f99310, address@hidden) at
+util/aio-posix.c:602
+#15 0x000002aa353d2d0a in iothread_run (opaque=0x2aa65f990f0) at
+iothread.c:60
+#16 0x000003ffb0f07e82 in start_thread () from /lib64/libpthread.so.0
+#17 0x000003ffaff91596 in thread_start () from /lib64/libc.so.6
+I don't have much knowledge about I/O threads and the block layer code
+in QEMU, so I would like to report this issue to the community.
+I believe this is very similar to the bug that I reported upstream a couple
+of days ago
+(https://lists.gnu.org/archive/html/qemu-devel/2018-02/msg04452.html).
+Any help would be greatly appreciated.
+
+Thanks
+Farhan
+
+On Thu, Mar 1, 2018 at 10:33 PM, Farhan Ali <address@hidden> wrote:
+> Hi,
+>
+> I have been noticing some segfaults for QEMU on s390x, and I have been
+> hitting this issue quite reliably (at least once in 10 runs of a test case).
+> The qemu version is 2.11.50, and I have systemd-created coredumps
+> when this happens.
+Can you describe the test case or suggest how to reproduce it for us?
+
+Fam
+
+On 03/02/2018 01:13 AM, Fam Zheng wrote:
+> On Thu, Mar 1, 2018 at 10:33 PM, Farhan Ali <address@hidden> wrote:
+> > Hi,
+> >
+> > I have been noticing some segfaults for QEMU on s390x, and I have been
+> > hitting this issue quite reliably (at least once in 10 runs of a test case).
+> > The qemu version is 2.11.50, and I have systemd-created coredumps
+> > when this happens.
+>
+> Can you describe the test case or suggest how to reproduce it for us?
+>
+> Fam
+
+The test case is with a single guest, running a memory-intensive
+workload. The guest has 8 vcpus and 4G of memory.
+Here is the qemu command line, if that helps:
+
+/usr/bin/qemu-kvm -name guest=sles,debug-threads=on \
+-S -object
+secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-sles/master-key.aes
+\
+-machine s390-ccw-virtio-2.12,accel=kvm,usb=off,dump-guest-core=off \
+-m 4096 -realtime mlock=off -smp 8,sockets=8,cores=1,threads=1 \
+-object iothread,id=iothread1 -object iothread,id=iothread2 -uuid
+b83a596b-3a1a-4ac9-9f3e-d9a4032ee52c \
+-display none -no-user-config -nodefaults -chardev
+socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-sles/monitor.sock,server,nowait
+-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
+-no-shutdown \
+-boot strict=on -drive
+file=/dev/mapper/360050763998b0883980000002400002b,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
+-device
+virtio-blk-ccw,iothread=iothread1,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
+-drive
+file=/dev/mapper/360050763998b0883980000002800002f,format=raw,if=none,id=drive-virtio-disk1,cache=none,aio=native
+-device
+virtio-blk-ccw,iothread=iothread2,scsi=off,devno=fe.0.0002,drive=drive-virtio-disk1,id=virtio-disk1
+-netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 -device
+virtio-net-ccw,netdev=hostnet0,id=net0,mac=02:38:a6:36:e8:1f,devno=fe.0.0000
+-chardev pty,id=charconsole0 -device
+sclpconsole,chardev=charconsole0,id=console0 -device
+virtio-balloon-ccw,id=balloon0,devno=fe.3.ffba -msg timestamp=on
+Please let me know if I need to provide any other information.
+
+Thanks
+Farhan
+
+On Thu, Mar 01, 2018 at 09:33:35AM -0500, Farhan Ali wrote:
+> Hi,
+>
+> I have been noticing some segfaults for QEMU on s390x, and I have been
+> hitting this issue quite reliably (at least once in 10 runs of a test case).
+> The qemu version is 2.11.50, and I have systemd-created coredumps
+> when this happens.
+>
+> Here is a back trace of the segfaulting thread:
+The backtrace looks normal.
+
+Please post the QEMU command-line and the details of the segfault (which
+memory access faulted?).
+
+> #0  0x000003ffafed202c in swapcontext () from /lib64/libc.so.6
+> #1  0x000002aa355c02ee in qemu_coroutine_new () at
+>    util/coroutine-ucontext.c:164
+> #2  0x000002aa355bec34 in qemu_coroutine_create
+>    (address@hidden <blk_aio_read_entry>,
+>    address@hidden) at util/qemu-coroutine.c:76
+> #3  0x000002aa35510262 in blk_aio_prwv (blk=0x2aa65fbefa0, offset=<optimized
+>    out>, bytes=<optimized out>, qiov=0x3ffa002a9c0,
+>    address@hidden <blk_aio_read_entry>, flags=0,
+>    cb=0x2aa35340a50 <virtio_blk_rw_complete>, opaque=0x3ffa002a960) at
+>    block/block-backend.c:1299
+> #4  0x000002aa35510376 in blk_aio_preadv (blk=<optimized out>,
+>    offset=<optimized out>, qiov=<optimized out>, flags=<optimized out>,
+>    cb=<optimized out>, opaque=0x3ffa002a960) at block/block-backend.c:1392
+> #5  0x000002aa3534114e in submit_requests (niov=<optimized out>,
+>    num_reqs=<optimized out>, start=<optimized out>, mrb=<optimized out>,
+>    blk=<optimized out>) at
+>    /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:372
+> #6  virtio_blk_submit_multireq (blk=<optimized out>,
+>    address@hidden) at
+>    /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:402
+> #7  0x000002aa353422e0 in virtio_blk_handle_vq (s=0x2aa6611e7d8,
+>    vq=0x3ffb0f5f010) at /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:620
+> #8  0x000002aa3536655a in virtio_queue_notify_aio_vq
+>    (address@hidden) at
+>    /usr/src/debug/qemu-2.11.50/hw/virtio/virtio.c:1515
+> #9  0x000002aa35366cd6 in virtio_queue_notify_aio_vq (vq=0x3ffb0f5f010) at
+>    /usr/src/debug/qemu-2.11.50/hw/virtio/virtio.c:1511
+> #10 virtio_queue_host_notifier_aio_poll (opaque=0x3ffb0f5f078) at
+>    /usr/src/debug/qemu-2.11.50/hw/virtio/virtio.c:2409
+> #11 0x000002aa355a8ba4 in run_poll_handlers_once
+>    (address@hidden) at util/aio-posix.c:497
+> #12 0x000002aa355a9b74 in run_poll_handlers (max_ns=<optimized out>,
+>    ctx=0x2aa65f99310) at util/aio-posix.c:534
+> #13 try_poll_mode (blocking=true, ctx=0x2aa65f99310) at util/aio-posix.c:562
+> #14 aio_poll (ctx=0x2aa65f99310, address@hidden) at
+>    util/aio-posix.c:602
+> #15 0x000002aa353d2d0a in iothread_run (opaque=0x2aa65f990f0) at
+>    iothread.c:60
+> #16 0x000003ffb0f07e82 in start_thread () from /lib64/libpthread.so.0
+> #17 0x000003ffaff91596 in thread_start () from /lib64/libc.so.6
+>
+> I don't have much knowledge about I/O threads and the block layer code in
+> QEMU, so I would like to report this issue to the community.
+> I believe this is very similar to the bug that I reported upstream a couple
+> of days ago
+> (https://lists.gnu.org/archive/html/qemu-devel/2018-02/msg04452.html).
+>
+> Any help would be greatly appreciated.
+>
+> Thanks
+> Farhan
+
+On 03/02/2018 04:23 AM, Stefan Hajnoczi wrote:
+> On Thu, Mar 01, 2018 at 09:33:35AM -0500, Farhan Ali wrote:
+> > Hi,
+> >
+> > I have been noticing some segfaults for QEMU on s390x, and I have been
+> > hitting this issue quite reliably (at least once in 10 runs of a test case).
+> > The qemu version is 2.11.50, and I have systemd-created coredumps
+> > when this happens.
+> >
+> > Here is a back trace of the segfaulting thread:
+>
+> The backtrace looks normal.
+>
+> Please post the QEMU command-line and the details of the segfault (which
+> memory access faulted?).
+
+I was able to create another crash today and here is the qemu command line:
+
+/usr/bin/qemu-kvm -name guest=sles,debug-threads=on \
+-S -object
+secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-sles/master-key.aes
+\
+-machine s390-ccw-virtio-2.12,accel=kvm,usb=off,dump-guest-core=off \
+-m 4096 -realtime mlock=off -smp 8,sockets=8,cores=1,threads=1 \
+-object iothread,id=iothread1 -object iothread,id=iothread2 -uuid
+b83a596b-3a1a-4ac9-9f3e-d9a4032ee52c \
+-display none -no-user-config -nodefaults -chardev
+socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-sles/monitor.sock,server,nowait
+-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
+-no-shutdown \
+-boot strict=on -drive
+file=/dev/mapper/360050763998b0883980000002400002b,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
+-device
+virtio-blk-ccw,iothread=iothread1,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
+-drive
+file=/dev/mapper/360050763998b0883980000002800002f,format=raw,if=none,id=drive-virtio-disk1,cache=none,aio=native
+-device
+virtio-blk-ccw,iothread=iothread2,scsi=off,devno=fe.0.0002,drive=drive-virtio-disk1,id=virtio-disk1
+-netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 -device
+virtio-net-ccw,netdev=hostnet0,id=net0,mac=02:38:a6:36:e8:1f,devno=fe.0.0000
+-chardev pty,id=charconsole0 -device
+sclpconsole,chardev=charconsole0,id=console0 -device
+virtio-balloon-ccw,id=balloon0,devno=fe.3.ffba -msg timestamp=on
+This is the latest back trace of the segfaulting thread, and it seems to
+segfault in swapcontext.
+Program terminated with signal SIGSEGV, Segmentation fault.
+#0  0x000003ff8595202c in swapcontext () from /lib64/libc.so.6
+
+
+This is the remaining back trace:
+
+#0  0x000003ff8595202c in swapcontext () from /lib64/libc.so.6
+#1  0x000002aa33b45566 in qemu_coroutine_new () at
+util/coroutine-ucontext.c:164
+#2  0x000002aa33b43eac in qemu_coroutine_create
+(address@hidden <blk_aio_write_entry>,
+address@hidden) at util/qemu-coroutine.c:76
+#3  0x000002aa33a954da in blk_aio_prwv (blk=0x2aa4f0efda0,
+offset=<optimized out>, bytes=<optimized out>, qiov=0x3ff74019080,
+address@hidden <blk_aio_write_entry>, flags=0,
+cb=0x2aa338c62e8 <virtio_blk_rw_complete>, opaque=0x3ff74019020) at
+block/block-backend.c:1299
+#4  0x000002aa33a9563e in blk_aio_pwritev (blk=<optimized out>,
+offset=<optimized out>, qiov=<optimized out>, flags=<optimized out>,
+cb=<optimized out>, opaque=0x3ff74019020) at block/block-backend.c:1400
+#5  0x000002aa338c6a38 in submit_requests (niov=<optimized out>,
+num_reqs=1, start=<optimized out>, mrb=0x3ff831fe6e0, blk=<optimized
+out>) at /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:369
+#6  virtio_blk_submit_multireq (blk=<optimized out>,
+address@hidden) at
+/usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:426
+#7  0x000002aa338c7b78 in virtio_blk_handle_vq (s=0x2aa4f2507c8,
+vq=0x3ff869df010) at /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:620
+#8  0x000002aa338ebdf2 in virtio_queue_notify_aio_vq (vq=0x3ff869df010)
+at /usr/src/debug/qemu-2.11.50/hw/virtio/virtio.c:1515
+#9  0x000002aa33b2df46 in aio_dispatch_handlers
+(address@hidden) at util/aio-posix.c:406
+#10 0x000002aa33b2eb50 in aio_poll (ctx=0x2aa4f0ca050,
+address@hidden) at util/aio-posix.c:692
+#11 0x000002aa33957f6a in iothread_run (opaque=0x2aa4f0c9630) at
+iothread.c:60
+#12 0x000003ff86987e82 in start_thread () from /lib64/libpthread.so.0
+#13 0x000003ff85a11596 in thread_start () from /lib64/libc.so.6
+Backtrace stopped: previous frame identical to this frame (corrupt stack?)
+
+On Fri, Mar 02, 2018 at 10:30:57AM -0500, Farhan Ali wrote:
+> On 03/02/2018 04:23 AM, Stefan Hajnoczi wrote:
+> > On Thu, Mar 01, 2018 at 09:33:35AM -0500, Farhan Ali wrote:
+> > > Hi,
+> > >
+> > > I have been noticing some segfaults for QEMU on s390x, and I have been
+> > > hitting this issue quite reliably (at least once in 10 runs of a test
+> > > case).
+> > > The qemu version is 2.11.50, and I have systemd-created coredumps
+> > > when this happens.
+> > >
+> > > Here is a back trace of the segfaulting thread:
+> >
+> > The backtrace looks normal.
+> >
+> > Please post the QEMU command-line and the details of the segfault (which
+> > memory access faulted?).
+>
+> I was able to create another crash today and here is the qemu command line:
+>
+> /usr/bin/qemu-kvm -name guest=sles,debug-threads=on \
+> -S -object
+> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-sles/master-key.aes
+> \
+> -machine s390-ccw-virtio-2.12,accel=kvm,usb=off,dump-guest-core=off \
+> -m 4096 -realtime mlock=off -smp 8,sockets=8,cores=1,threads=1 \
+> -object iothread,id=iothread1 -object iothread,id=iothread2 -uuid
+> b83a596b-3a1a-4ac9-9f3e-d9a4032ee52c \
+> -display none -no-user-config -nodefaults -chardev
+> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-sles/monitor.sock,server,nowait
+> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown \
+> -boot strict=on -drive
+> file=/dev/mapper/360050763998b0883980000002400002b,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
+> -device
+> virtio-blk-ccw,iothread=iothread1,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
+> -drive
+> file=/dev/mapper/360050763998b0883980000002800002f,format=raw,if=none,id=drive-virtio-disk1,cache=none,aio=native
+> -device
+> virtio-blk-ccw,iothread=iothread2,scsi=off,devno=fe.0.0002,drive=drive-virtio-disk1,id=virtio-disk1
+> -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=26 -device
+> virtio-net-ccw,netdev=hostnet0,id=net0,mac=02:38:a6:36:e8:1f,devno=fe.0.0000
+> -chardev pty,id=charconsole0 -device
+> sclpconsole,chardev=charconsole0,id=console0 -device
+> virtio-balloon-ccw,id=balloon0,devno=fe.3.ffba -msg timestamp=on
+>
+> This is the latest back trace of the segfaulting thread, and it seems to
+> segfault in swapcontext.
+>
+> Program terminated with signal SIGSEGV, Segmentation fault.
+> #0  0x000003ff8595202c in swapcontext () from /lib64/libc.so.6
+Please include the following gdb output:
+
+  (gdb) disas swapcontext
+  (gdb) i r
+
+That way it's possible to see which instruction faulted and which
+registers were being accessed.
+
+> This is the remaining back trace:
+>
+> #0  0x000003ff8595202c in swapcontext () from /lib64/libc.so.6
+> #1  0x000002aa33b45566 in qemu_coroutine_new () at
+>    util/coroutine-ucontext.c:164
+> #2  0x000002aa33b43eac in qemu_coroutine_create
+>    (address@hidden <blk_aio_write_entry>,
+>    address@hidden) at util/qemu-coroutine.c:76
+> #3  0x000002aa33a954da in blk_aio_prwv (blk=0x2aa4f0efda0, offset=<optimized
+>    out>, bytes=<optimized out>, qiov=0x3ff74019080,
+>    address@hidden <blk_aio_write_entry>, flags=0,
+>    cb=0x2aa338c62e8 <virtio_blk_rw_complete>, opaque=0x3ff74019020) at
+>    block/block-backend.c:1299
+> #4  0x000002aa33a9563e in blk_aio_pwritev (blk=<optimized out>,
+>    offset=<optimized out>, qiov=<optimized out>, flags=<optimized out>,
+>    cb=<optimized out>, opaque=0x3ff74019020) at block/block-backend.c:1400
+> #5  0x000002aa338c6a38 in submit_requests (niov=<optimized out>, num_reqs=1,
+>    start=<optimized out>, mrb=0x3ff831fe6e0, blk=<optimized out>) at
+>    /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:369
+> #6  virtio_blk_submit_multireq (blk=<optimized out>,
+>    address@hidden) at
+>    /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:426
+> #7  0x000002aa338c7b78 in virtio_blk_handle_vq (s=0x2aa4f2507c8,
+>    vq=0x3ff869df010) at /usr/src/debug/qemu-2.11.50/hw/block/virtio-blk.c:620
+> #8  0x000002aa338ebdf2 in virtio_queue_notify_aio_vq (vq=0x3ff869df010) at
+>    /usr/src/debug/qemu-2.11.50/hw/virtio/virtio.c:1515
+> #9  0x000002aa33b2df46 in aio_dispatch_handlers
+>    (address@hidden) at util/aio-posix.c:406
+> #10 0x000002aa33b2eb50 in aio_poll (ctx=0x2aa4f0ca050,
+>    address@hidden) at util/aio-posix.c:692
+> #11 0x000002aa33957f6a in iothread_run (opaque=0x2aa4f0c9630) at
+>    iothread.c:60
+> #12 0x000003ff86987e82 in start_thread () from /lib64/libpthread.so.0
+> #13 0x000003ff85a11596 in thread_start () from /lib64/libc.so.6
+> Backtrace stopped: previous frame identical to this frame (corrupt stack?)
+signature.asc
+Description:
+PGP signature
+
+On 03/05/2018 06:03 AM, Stefan Hajnoczi wrote:
+Please include the following gdb output:
+
+   (gdb) disas swapcontext
+   (gdb) i r
+
+That way it's possible to see which instruction faulted and which
+registers were being accessed.
+Here is the disas output for swapcontext; this is on a coredump with
+debugging symbols enabled for qemu, so the addresses differ slightly from
+the previous dump.
+(gdb) disas swapcontext
+Dump of assembler code for function swapcontext:
+   0x000003ff90751fb8 <+0>:       lgr     %r1,%r2
+   0x000003ff90751fbc <+4>:       lgr     %r0,%r3
+   0x000003ff90751fc0 <+8>:       stfpc   248(%r1)
+   0x000003ff90751fc4 <+12>:      std     %f0,256(%r1)
+   0x000003ff90751fc8 <+16>:      std     %f1,264(%r1)
+   0x000003ff90751fcc <+20>:      std     %f2,272(%r1)
+   0x000003ff90751fd0 <+24>:      std     %f3,280(%r1)
+   0x000003ff90751fd4 <+28>:      std     %f4,288(%r1)
+   0x000003ff90751fd8 <+32>:      std     %f5,296(%r1)
+   0x000003ff90751fdc <+36>:      std     %f6,304(%r1)
+   0x000003ff90751fe0 <+40>:      std     %f7,312(%r1)
+   0x000003ff90751fe4 <+44>:      std     %f8,320(%r1)
+   0x000003ff90751fe8 <+48>:      std     %f9,328(%r1)
+   0x000003ff90751fec <+52>:      std     %f10,336(%r1)
+   0x000003ff90751ff0 <+56>:      std     %f11,344(%r1)
+   0x000003ff90751ff4 <+60>:      std     %f12,352(%r1)
+   0x000003ff90751ff8 <+64>:      std     %f13,360(%r1)
+   0x000003ff90751ffc <+68>:      std     %f14,368(%r1)
+   0x000003ff90752000 <+72>:      std     %f15,376(%r1)
+   0x000003ff90752004 <+76>:      slgr    %r2,%r2
+   0x000003ff90752008 <+80>:      stam    %a0,%a15,184(%r1)
+   0x000003ff9075200c <+84>:      stmg    %r0,%r15,56(%r1)
+   0x000003ff90752012 <+90>:      la      %r2,2
+   0x000003ff90752016 <+94>:      lgr     %r5,%r0
+   0x000003ff9075201a <+98>:      la      %r3,384(%r5)
+   0x000003ff9075201e <+102>:     la      %r4,384(%r1)
+   0x000003ff90752022 <+106>:     lghi    %r5,8
+   0x000003ff90752026 <+110>:     svc     175
+   0x000003ff90752028 <+112>:     lgr     %r5,%r0
+=> 0x000003ff9075202c <+116>:  lfpc    248(%r5)
+   0x000003ff90752030 <+120>:     ld      %f0,256(%r5)
+   0x000003ff90752034 <+124>:     ld      %f1,264(%r5)
+   0x000003ff90752038 <+128>:     ld      %f2,272(%r5)
+   0x000003ff9075203c <+132>:     ld      %f3,280(%r5)
+   0x000003ff90752040 <+136>:     ld      %f4,288(%r5)
+   0x000003ff90752044 <+140>:     ld      %f5,296(%r5)
+   0x000003ff90752048 <+144>:     ld      %f6,304(%r5)
+   0x000003ff9075204c <+148>:     ld      %f7,312(%r5)
+   0x000003ff90752050 <+152>:     ld      %f8,320(%r5)
+   0x000003ff90752054 <+156>:     ld      %f9,328(%r5)
+   0x000003ff90752058 <+160>:     ld      %f10,336(%r5)
+   0x000003ff9075205c <+164>:     ld      %f11,344(%r5)
+   0x000003ff90752060 <+168>:     ld      %f12,352(%r5)
+   0x000003ff90752064 <+172>:     ld      %f13,360(%r5)
+   0x000003ff90752068 <+176>:     ld      %f14,368(%r5)
+   0x000003ff9075206c <+180>:     ld      %f15,376(%r5)
+   0x000003ff90752070 <+184>:     lam     %a2,%a15,192(%r5)
+   0x000003ff90752074 <+188>:     lmg     %r0,%r15,56(%r5)
+   0x000003ff9075207a <+194>:     br      %r14
+End of assembler dump.
+
+(gdb) i r
+r0             0x0      0
+r1             0x3ff8fe7de40    4396165881408
+r2             0x0      0
+r3             0x3ff8fe7e1c0    4396165882304
+r4             0x3ff8fe7dfc0    4396165881792
+r5             0x0      0
+r6             0xffffffff88004880       18446744071696304256
+r7             0x3ff880009e0    4396033247712
+r8             0x27ff89000      10736930816
+r9             0x3ff88001460    4396033250400
+r10            0x1000   4096
+r11            0x1261be0        19274720
+r12            0x3ff88001e00    4396033252864
+r13            0x14d0bc0        21826496
+r14            0x1312ac8        19999432
+r15            0x3ff8fe7dc80    4396165880960
+pc             0x3ff9075202c    0x3ff9075202c <swapcontext+116>
+cc             0x2      2
+
+On 03/05/2018 07:45 PM, Farhan Ali wrote:
+>
+>
+>
+On 03/05/2018 06:03 AM, Stefan Hajnoczi wrote:
+>
+> Please include the following gdb output:
+>
+>
+>
+>    (gdb) disas swapcontext
+>
+>    (gdb) i r
+>
+>
+>
+> That way it's possible to see which instruction faulted and which
+>
+> registers were being accessed.
+>
+>
+>
+here is the disas out for swapcontext, this is on a coredump with debugging
+>
+symbols enabled for qemu. So the addresses from the previous dump is a little
+>
+different.
+>
+>
+>
+(gdb) disas swapcontext
+>
+Dump of assembler code for function swapcontext:
+>
+   0x000003ff90751fb8 <+0>:    lgr    %r1,%r2
+>
+   0x000003ff90751fbc <+4>:    lgr    %r0,%r3
+>
+   0x000003ff90751fc0 <+8>:    stfpc    248(%r1)
+>
+   0x000003ff90751fc4 <+12>:    std    %f0,256(%r1)
+>
+   0x000003ff90751fc8 <+16>:    std    %f1,264(%r1)
+>
+   0x000003ff90751fcc <+20>:    std    %f2,272(%r1)
+>
+   0x000003ff90751fd0 <+24>:    std    %f3,280(%r1)
+>
+   0x000003ff90751fd4 <+28>:    std    %f4,288(%r1)
+>
+   0x000003ff90751fd8 <+32>:    std    %f5,296(%r1)
+>
+   0x000003ff90751fdc <+36>:    std    %f6,304(%r1)
+>
+   0x000003ff90751fe0 <+40>:    std    %f7,312(%r1)
+>
+   0x000003ff90751fe4 <+44>:    std    %f8,320(%r1)
+>
+   0x000003ff90751fe8 <+48>:    std    %f9,328(%r1)
+>
+   0x000003ff90751fec <+52>:    std    %f10,336(%r1)
+>
+   0x000003ff90751ff0 <+56>:    std    %f11,344(%r1)
+>
+   0x000003ff90751ff4 <+60>:    std    %f12,352(%r1)
+>
+   0x000003ff90751ff8 <+64>:    std    %f13,360(%r1)
+>
+   0x000003ff90751ffc <+68>:    std    %f14,368(%r1)
+>
+   0x000003ff90752000 <+72>:    std    %f15,376(%r1)
+>
+   0x000003ff90752004 <+76>:    slgr    %r2,%r2
+>
+   0x000003ff90752008 <+80>:    stam    %a0,%a15,184(%r1)
+>
+   0x000003ff9075200c <+84>:    stmg    %r0,%r15,56(%r1)
+>
+   0x000003ff90752012 <+90>:    la    %r2,2
+>
+   0x000003ff90752016 <+94>:    lgr    %r5,%r0
+>
+   0x000003ff9075201a <+98>:    la    %r3,384(%r5)
+>
+   0x000003ff9075201e <+102>:    la    %r4,384(%r1)
+>
+   0x000003ff90752022 <+106>:    lghi    %r5,8
+>
+   0x000003ff90752026 <+110>:    svc    175
+sys_rt_sigprocmask. r0 should not be changed by the system call.
+
+>
+   0x000003ff90752028 <+112>:    lgr    %r5,%r0
+>
+=> 0x000003ff9075202c <+116>:    lfpc    248(%r5)
+so r5 is zero and it was loaded from r0. r0 was loaded from r3 (which is the 
+2nd parameter to this
+function). Now this is odd.
+
+>
+   0x000003ff90752030 <+120>:    ld    %f0,256(%r5)
+>
+   0x000003ff90752034 <+124>:    ld    %f1,264(%r5)
+>
+   0x000003ff90752038 <+128>:    ld    %f2,272(%r5)
+>
+   0x000003ff9075203c <+132>:    ld    %f3,280(%r5)
+>
+   0x000003ff90752040 <+136>:    ld    %f4,288(%r5)
+>
+   0x000003ff90752044 <+140>:    ld    %f5,296(%r5)
+>
+   0x000003ff90752048 <+144>:    ld    %f6,304(%r5)
+>
+   0x000003ff9075204c <+148>:    ld    %f7,312(%r5)
+>
+   0x000003ff90752050 <+152>:    ld    %f8,320(%r5)
+>
+   0x000003ff90752054 <+156>:    ld    %f9,328(%r5)
+>
+   0x000003ff90752058 <+160>:    ld    %f10,336(%r5)
+>
+   0x000003ff9075205c <+164>:    ld    %f11,344(%r5)
+>
+   0x000003ff90752060 <+168>:    ld    %f12,352(%r5)
+>
+   0x000003ff90752064 <+172>:    ld    %f13,360(%r5)
+>
+   0x000003ff90752068 <+176>:    ld    %f14,368(%r5)
+>
+   0x000003ff9075206c <+180>:    ld    %f15,376(%r5)
+>
+   0x000003ff90752070 <+184>:    lam    %a2,%a15,192(%r5)
+>
+   0x000003ff90752074 <+188>:    lmg    %r0,%r15,56(%r5)
+>
+   0x000003ff9075207a <+194>:    br    %r14
+>
+End of assembler dump.
+>
+>
+(gdb) i r
+>
+r0             0x0    0
+>
+r1             0x3ff8fe7de40    4396165881408
+>
+r2             0x0    0
+>
+r3             0x3ff8fe7e1c0    4396165882304
+>
+r4             0x3ff8fe7dfc0    4396165881792
+>
+r5             0x0    0
+>
+r6             0xffffffff88004880    18446744071696304256
+>
+r7             0x3ff880009e0    4396033247712
+>
+r8             0x27ff89000    10736930816
+>
+r9             0x3ff88001460    4396033250400
+>
+r10            0x1000    4096
+>
+r11            0x1261be0    19274720
+>
+r12            0x3ff88001e00    4396033252864
+>
+r13            0x14d0bc0    21826496
+>
+r14            0x1312ac8    19999432
+>
+r15            0x3ff8fe7dc80    4396165880960
+>
+pc             0x3ff9075202c    0x3ff9075202c <swapcontext+116>
+>
+cc             0x2    2
+
+On 5 March 2018 at 18:54, Christian Borntraeger <address@hidden> wrote:
+>
+>
+>
+On 03/05/2018 07:45 PM, Farhan Ali wrote:
+>
+>    0x000003ff90752026 <+110>:    svc    175
+>
+>
+sys_rt_sigprocmask. r0 should not be changed by the system call.
+>
+>
+>    0x000003ff90752028 <+112>:    lgr    %r5,%r0
+>
+> => 0x000003ff9075202c <+116>:    lfpc    248(%r5)
+>
+>
+so r5 is zero and it was loaded from r0. r0 was loaded from r3 (which is the
+>
+2nd parameter to this
+>
+function). Now this is odd.
+...particularly given that the only place we call swapcontext()
+the second parameter is always the address of a local variable
+and can't be 0...
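+The call site is qemu_coroutine_new() in util/coroutine-ucontext.c, as the
+earlier backtrace shows; roughly sketched (not the exact code):
+
+    Coroutine *qemu_coroutine_new(void)
+    {
+        CoroutineUContext *co;
+        ucontext_t old_uc, uc;    /* both locals on the calling thread's stack */
+        sigjmp_buf old_env;
+
+        getcontext(&uc);
+        /* ... allocate co, point uc.uc_stack at the new coroutine stack ... */
+        makecontext(&uc, (void (*)(void))coroutine_trampoline, 2, ...);
+
+        if (!sigsetjmp(old_env, 0)) {
+            /* 2nd argument is &uc, the address of a local -- never NULL */
+            swapcontext(&old_uc, &uc);
+        }
+        return &co->base;
+    }
+
+so the zero must show up while swapcontext itself is running, i.e. %r0 is
+clobbered across the svc, rather than a NULL ucp being passed in.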
+
+thanks
+-- PMM
+
+Do you happen to run with a recent host kernel that has 
+
+commit 7041d28115e91f2144f811ffe8a195c696b1e1d0
+    s390: scrub registers on kernel entry and KVM exit
+
+
+
+
+
+Can you run with this on top
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 13a133a6015c..d6dc0e5e8f74 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -426,13 +426,13 @@ ENTRY(system_call)
+        UPDATE_VTIME %r8,%r9,__LC_SYNC_ENTER_TIMER
+        BPENTER __TI_flags(%r12),_TIF_ISOLATE_BP
+        stmg    %r0,%r7,__PT_R0(%r11)
+-       # clear user controlled register to prevent speculative use
+-       xgr     %r0,%r0
+        mvc     __PT_R8(64,%r11),__LC_SAVE_AREA_SYNC
+        mvc     __PT_PSW(16,%r11),__LC_SVC_OLD_PSW
+        mvc     __PT_INT_CODE(4,%r11),__LC_SVC_ILC
+        stg     %r14,__PT_FLAGS(%r11)
+ .Lsysc_do_svc:
++       # clear user controlled register to prevent speculative use
++       xgr     %r0,%r0
+        # load address of system call table
+        lg      %r10,__THREAD_sysc_table(%r13,%r12)
+        llgh    %r8,__PT_INT_CODE+2(%r11)
+
+
+To me it looks like the critical section cleanup (interrupt during system
+call entry) might save the registers again into ptregs, but we have already
+zeroed out r0. This patch moves the clearing of r0 after sysc_do_svc, which
+should fix the critical section cleanup.
+
+Adding Martin and Heiko. Will spin a patch.
+
+
+On 03/05/2018 07:54 PM, Christian Borntraeger wrote:
+>
+>
+>
+On 03/05/2018 07:45 PM, Farhan Ali wrote:
+>
+>
+>
+>
+>
+> On 03/05/2018 06:03 AM, Stefan Hajnoczi wrote:
+>
+>> Please include the following gdb output:
+>
+>>
+>
+>>    (gdb) disas swapcontext
+>
+>>    (gdb) i r
+>
+>>
+>
+>> That way it's possible to see which instruction faulted and which
+>
+>> registers were being accessed.
+>
+>
+>
+>
+>
+> here is the disas out for swapcontext, this is on a coredump with debugging
+>
+> symbols enabled for qemu. So the addresses from the previous dump is a
+>
+> little different.
+>
+>
+>
+>
+>
+> (gdb) disas swapcontext
+>
+> Dump of assembler code for function swapcontext:
+>
+>    0x000003ff90751fb8 <+0>:    lgr    %r1,%r2
+>
+>    0x000003ff90751fbc <+4>:    lgr    %r0,%r3
+>
+>    0x000003ff90751fc0 <+8>:    stfpc    248(%r1)
+>
+>    0x000003ff90751fc4 <+12>:    std    %f0,256(%r1)
+>
+>    0x000003ff90751fc8 <+16>:    std    %f1,264(%r1)
+>
+>    0x000003ff90751fcc <+20>:    std    %f2,272(%r1)
+>
+>    0x000003ff90751fd0 <+24>:    std    %f3,280(%r1)
+>
+>    0x000003ff90751fd4 <+28>:    std    %f4,288(%r1)
+>
+>    0x000003ff90751fd8 <+32>:    std    %f5,296(%r1)
+>
+>    0x000003ff90751fdc <+36>:    std    %f6,304(%r1)
+>
+>    0x000003ff90751fe0 <+40>:    std    %f7,312(%r1)
+>
+>    0x000003ff90751fe4 <+44>:    std    %f8,320(%r1)
+>
+>    0x000003ff90751fe8 <+48>:    std    %f9,328(%r1)
+>
+>    0x000003ff90751fec <+52>:    std    %f10,336(%r1)
+>
+>    0x000003ff90751ff0 <+56>:    std    %f11,344(%r1)
+>
+>    0x000003ff90751ff4 <+60>:    std    %f12,352(%r1)
+>
+>    0x000003ff90751ff8 <+64>:    std    %f13,360(%r1)
+>
+>    0x000003ff90751ffc <+68>:    std    %f14,368(%r1)
+>
+>    0x000003ff90752000 <+72>:    std    %f15,376(%r1)
+>
+>    0x000003ff90752004 <+76>:    slgr    %r2,%r2
+>
+>    0x000003ff90752008 <+80>:    stam    %a0,%a15,184(%r1)
+>
+>    0x000003ff9075200c <+84>:    stmg    %r0,%r15,56(%r1)
+>
+>    0x000003ff90752012 <+90>:    la    %r2,2
+>
+>    0x000003ff90752016 <+94>:    lgr    %r5,%r0
+>
+>    0x000003ff9075201a <+98>:    la    %r3,384(%r5)
+>
+>    0x000003ff9075201e <+102>:    la    %r4,384(%r1)
+>
+>    0x000003ff90752022 <+106>:    lghi    %r5,8
+>
+>    0x000003ff90752026 <+110>:    svc    175
+>
+>
+sys_rt_sigprocmask. r0 should not be changed by the system call.
+>
+>
+>    0x000003ff90752028 <+112>:    lgr    %r5,%r0
+>
+> => 0x000003ff9075202c <+116>:    lfpc    248(%r5)
+>
+>
+so r5 is zero and it was loaded from r0. r0 was loaded from r3 (which is the
+>
+2nd parameter to this
+>
+function). Now this is odd.
+>
+>
+>    0x000003ff90752030 <+120>:    ld    %f0,256(%r5)
+>
+>    0x000003ff90752034 <+124>:    ld    %f1,264(%r5)
+>
+>    0x000003ff90752038 <+128>:    ld    %f2,272(%r5)
+>
+>    0x000003ff9075203c <+132>:    ld    %f3,280(%r5)
+>
+>    0x000003ff90752040 <+136>:    ld    %f4,288(%r5)
+>
+>    0x000003ff90752044 <+140>:    ld    %f5,296(%r5)
+>
+>    0x000003ff90752048 <+144>:    ld    %f6,304(%r5)
+>
+>    0x000003ff9075204c <+148>:    ld    %f7,312(%r5)
+>
+>    0x000003ff90752050 <+152>:    ld    %f8,320(%r5)
+>
+>    0x000003ff90752054 <+156>:    ld    %f9,328(%r5)
+>
+>    0x000003ff90752058 <+160>:    ld    %f10,336(%r5)
+>
+>    0x000003ff9075205c <+164>:    ld    %f11,344(%r5)
+>
+>    0x000003ff90752060 <+168>:    ld    %f12,352(%r5)
+>
+>    0x000003ff90752064 <+172>:    ld    %f13,360(%r5)
+>
+>    0x000003ff90752068 <+176>:    ld    %f14,368(%r5)
+>
+>    0x000003ff9075206c <+180>:    ld    %f15,376(%r5)
+>
+>    0x000003ff90752070 <+184>:    lam    %a2,%a15,192(%r5)
+>
+>    0x000003ff90752074 <+188>:    lmg    %r0,%r15,56(%r5)
+>
+>    0x000003ff9075207a <+194>:    br    %r14
+>
+> End of assembler dump.
+>
+>
+>
+> (gdb) i r
+>
+> r0             0x0    0
+>
+> r1             0x3ff8fe7de40    4396165881408
+>
+> r2             0x0    0
+>
+> r3             0x3ff8fe7e1c0    4396165882304
+>
+> r4             0x3ff8fe7dfc0    4396165881792
+>
+> r5             0x0    0
+>
+> r6             0xffffffff88004880    18446744071696304256
+>
+> r7             0x3ff880009e0    4396033247712
+>
+> r8             0x27ff89000    10736930816
+>
+> r9             0x3ff88001460    4396033250400
+>
+> r10            0x1000    4096
+>
+> r11            0x1261be0    19274720
+>
+> r12            0x3ff88001e00    4396033252864
+>
+> r13            0x14d0bc0    21826496
+>
+> r14            0x1312ac8    19999432
+>
+> r15            0x3ff8fe7dc80    4396165880960
+>
+> pc             0x3ff9075202c    0x3ff9075202c <swapcontext+116>
+>
+> cc             0x2    2
+
+On 03/05/2018 02:08 PM, Christian Borntraeger wrote:
+Do you happen to run with a recent host kernel that has
+
+commit 7041d28115e91f2144f811ffe8a195c696b1e1d0
+     s390: scrub registers on kernel entry and KVM exit
+Yes.
+Can you run with this on top
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 13a133a6015c..d6dc0e5e8f74 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -426,13 +426,13 @@ ENTRY(system_call)
+         UPDATE_VTIME %r8,%r9,__LC_SYNC_ENTER_TIMER
+         BPENTER __TI_flags(%r12),_TIF_ISOLATE_BP
+         stmg    %r0,%r7,__PT_R0(%r11)
+-       # clear user controlled register to prevent speculative use
+-       xgr     %r0,%r0
+         mvc     __PT_R8(64,%r11),__LC_SAVE_AREA_SYNC
+         mvc     __PT_PSW(16,%r11),__LC_SVC_OLD_PSW
+         mvc     __PT_INT_CODE(4,%r11),__LC_SVC_ILC
+         stg     %r14,__PT_FLAGS(%r11)
+  .Lsysc_do_svc:
++       # clear user controlled register to prevent speculative use
++       xgr     %r0,%r0
+         # load address of system call table
+         lg      %r10,__THREAD_sysc_table(%r13,%r12)
+         llgh    %r8,__PT_INT_CODE+2(%r11)
+
+
+To me it looks like that the critical section cleanup (interrupt during system 
+call entry) might
+save the registers again into ptregs but we have already zeroed out r0.
+This patch moves the clearing of r0 after sysc_do_svc, which should fix the 
+critical
+section cleanup.
+Okay I will run with this.
+Adding Martin and Heiko. Will spin a patch.
+
+
+On 03/05/2018 07:54 PM, Christian Borntraeger wrote:
+On 03/05/2018 07:45 PM, Farhan Ali wrote:
+On 03/05/2018 06:03 AM, Stefan Hajnoczi wrote:
+Please include the following gdb output:
+
+    (gdb) disas swapcontext
+    (gdb) i r
+
+That way it's possible to see which instruction faulted and which
+registers were being accessed.
+here is the disas out for swapcontext, this is on a coredump with debugging 
+symbols enabled for qemu. So the addresses from the previous dump is a little 
+different.
+
+
+(gdb) disas swapcontext
+Dump of assembler code for function swapcontext:
+    0x000003ff90751fb8 <+0>:    lgr    %r1,%r2
+    0x000003ff90751fbc <+4>:    lgr    %r0,%r3
+    0x000003ff90751fc0 <+8>:    stfpc    248(%r1)
+    0x000003ff90751fc4 <+12>:    std    %f0,256(%r1)
+    0x000003ff90751fc8 <+16>:    std    %f1,264(%r1)
+    0x000003ff90751fcc <+20>:    std    %f2,272(%r1)
+    0x000003ff90751fd0 <+24>:    std    %f3,280(%r1)
+    0x000003ff90751fd4 <+28>:    std    %f4,288(%r1)
+    0x000003ff90751fd8 <+32>:    std    %f5,296(%r1)
+    0x000003ff90751fdc <+36>:    std    %f6,304(%r1)
+    0x000003ff90751fe0 <+40>:    std    %f7,312(%r1)
+    0x000003ff90751fe4 <+44>:    std    %f8,320(%r1)
+    0x000003ff90751fe8 <+48>:    std    %f9,328(%r1)
+    0x000003ff90751fec <+52>:    std    %f10,336(%r1)
+    0x000003ff90751ff0 <+56>:    std    %f11,344(%r1)
+    0x000003ff90751ff4 <+60>:    std    %f12,352(%r1)
+    0x000003ff90751ff8 <+64>:    std    %f13,360(%r1)
+    0x000003ff90751ffc <+68>:    std    %f14,368(%r1)
+    0x000003ff90752000 <+72>:    std    %f15,376(%r1)
+    0x000003ff90752004 <+76>:    slgr    %r2,%r2
+    0x000003ff90752008 <+80>:    stam    %a0,%a15,184(%r1)
+    0x000003ff9075200c <+84>:    stmg    %r0,%r15,56(%r1)
+    0x000003ff90752012 <+90>:    la    %r2,2
+    0x000003ff90752016 <+94>:    lgr    %r5,%r0
+    0x000003ff9075201a <+98>:    la    %r3,384(%r5)
+    0x000003ff9075201e <+102>:    la    %r4,384(%r1)
+    0x000003ff90752022 <+106>:    lghi    %r5,8
+    0x000003ff90752026 <+110>:    svc    175
+sys_rt_sigprocmask. r0 should not be changed by the system call.
+   0x000003ff90752028 <+112>:    lgr    %r5,%r0
+=> 0x000003ff9075202c <+116>:    lfpc    248(%r5)
+so r5 is zero and it was loaded from r0. r0 was loaded from r3 (which is the 
+2nd parameter to this
+function). Now this is odd.
+   0x000003ff90752030 <+120>:    ld    %f0,256(%r5)
+    0x000003ff90752034 <+124>:    ld    %f1,264(%r5)
+    0x000003ff90752038 <+128>:    ld    %f2,272(%r5)
+    0x000003ff9075203c <+132>:    ld    %f3,280(%r5)
+    0x000003ff90752040 <+136>:    ld    %f4,288(%r5)
+    0x000003ff90752044 <+140>:    ld    %f5,296(%r5)
+    0x000003ff90752048 <+144>:    ld    %f6,304(%r5)
+    0x000003ff9075204c <+148>:    ld    %f7,312(%r5)
+    0x000003ff90752050 <+152>:    ld    %f8,320(%r5)
+    0x000003ff90752054 <+156>:    ld    %f9,328(%r5)
+    0x000003ff90752058 <+160>:    ld    %f10,336(%r5)
+    0x000003ff9075205c <+164>:    ld    %f11,344(%r5)
+    0x000003ff90752060 <+168>:    ld    %f12,352(%r5)
+    0x000003ff90752064 <+172>:    ld    %f13,360(%r5)
+    0x000003ff90752068 <+176>:    ld    %f14,368(%r5)
+    0x000003ff9075206c <+180>:    ld    %f15,376(%r5)
+    0x000003ff90752070 <+184>:    lam    %a2,%a15,192(%r5)
+    0x000003ff90752074 <+188>:    lmg    %r0,%r15,56(%r5)
+    0x000003ff9075207a <+194>:    br    %r14
+End of assembler dump.
+
+(gdb) i r
+r0             0x0    0
+r1             0x3ff8fe7de40    4396165881408
+r2             0x0    0
+r3             0x3ff8fe7e1c0    4396165882304
+r4             0x3ff8fe7dfc0    4396165881792
+r5             0x0    0
+r6             0xffffffff88004880    18446744071696304256
+r7             0x3ff880009e0    4396033247712
+r8             0x27ff89000    10736930816
+r9             0x3ff88001460    4396033250400
+r10            0x1000    4096
+r11            0x1261be0    19274720
+r12            0x3ff88001e00    4396033252864
+r13            0x14d0bc0    21826496
+r14            0x1312ac8    19999432
+r15            0x3ff8fe7dc80    4396165880960
+pc             0x3ff9075202c    0x3ff9075202c <swapcontext+116>
+cc             0x2    2
+
+On Mon, 5 Mar 2018 20:08:45 +0100
+Christian Borntraeger <address@hidden> wrote:
+
+>
+Do you happen to run with a recent host kernel that has
+>
+>
+commit 7041d28115e91f2144f811ffe8a195c696b1e1d0
+>
+s390: scrub registers on kernel entry and KVM exit
+>
+>
+Can you run with this on top
+>
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+>
+index 13a133a6015c..d6dc0e5e8f74 100644
+>
+--- a/arch/s390/kernel/entry.S
+>
++++ b/arch/s390/kernel/entry.S
+>
+@@ -426,13 +426,13 @@ ENTRY(system_call)
+>
+UPDATE_VTIME %r8,%r9,__LC_SYNC_ENTER_TIMER
+>
+BPENTER __TI_flags(%r12),_TIF_ISOLATE_BP
+>
+stmg    %r0,%r7,__PT_R0(%r11)
+>
+-       # clear user controlled register to prevent speculative use
+>
+-       xgr     %r0,%r0
+>
+mvc     __PT_R8(64,%r11),__LC_SAVE_AREA_SYNC
+>
+mvc     __PT_PSW(16,%r11),__LC_SVC_OLD_PSW
+>
+mvc     __PT_INT_CODE(4,%r11),__LC_SVC_ILC
+>
+stg     %r14,__PT_FLAGS(%r11)
+>
+.Lsysc_do_svc:
+>
++       # clear user controlled register to prevent speculative use
+>
++       xgr     %r0,%r0
+>
+# load address of system call table
+>
+lg      %r10,__THREAD_sysc_table(%r13,%r12)
+>
+llgh    %r8,__PT_INT_CODE+2(%r11)
+>
+>
+>
+To me it looks like that the critical section cleanup (interrupt during
+>
+system call entry) might
+>
+save the registers again into ptregs but we have already zeroed out r0.
+>
+This patch moves the clearing of r0 after sysc_do_svc, which should fix the
+>
+critical
+>
+section cleanup.
+>
+>
+Adding Martin and Heiko. Will spin a patch.
+Argh, yes. Thanks Christian, this is it. I have been searching for the bug
+for days now. The point is that if the system call handler is interrupted
+after the xgr but before .Lsysc_do_svc the code at .Lcleanup_system_call 
+repeats the stmg for %r0-%r7 but now %r0 is already zero.
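+Roughly, as a paraphrased timeline (not the literal entry.S flow):
+
+    system_call:
+        stmg  %r0,%r7,__PT_R0(%r11)   # user %r0..%r7 saved into pt_regs
+        xgr   %r0,%r0                 # %r0 scrubbed (pre-patch position)
+        ...                           # <-- interrupt arrives in this window
+    .Lcleanup_system_call:            # cleanup re-runs the register save,
+        stmg  %r0,%r7,__PT_R0(%r11)   # but %r0 is already 0 by now
+
+On the way back to user space the zeroed pt_regs value is restored, so
+QEMU's swapcontext sees its copy of the ucp in %r0 come back as 0 after
+the svc.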
+
+Please commit a patch for this and I'll queue it up immediately.
+
+-- 
+blue skies,
+   Martin.
+
+"Reality continues to ruin my life." - Calvin.
+
+On 03/06/2018 01:34 AM, Martin Schwidefsky wrote:
+On Mon, 5 Mar 2018 20:08:45 +0100
+Christian Borntraeger <address@hidden> wrote:
+Do you happen to run with a recent host kernel that has
+
+commit 7041d28115e91f2144f811ffe8a195c696b1e1d0
+     s390: scrub registers on kernel entry and KVM exit
+
+Can you run with this on top
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 13a133a6015c..d6dc0e5e8f74 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -426,13 +426,13 @@ ENTRY(system_call)
+         UPDATE_VTIME %r8,%r9,__LC_SYNC_ENTER_TIMER
+         BPENTER __TI_flags(%r12),_TIF_ISOLATE_BP
+         stmg    %r0,%r7,__PT_R0(%r11)
+-       # clear user controlled register to prevent speculative use
+-       xgr     %r0,%r0
+         mvc     __PT_R8(64,%r11),__LC_SAVE_AREA_SYNC
+         mvc     __PT_PSW(16,%r11),__LC_SVC_OLD_PSW
+         mvc     __PT_INT_CODE(4,%r11),__LC_SVC_ILC
+         stg     %r14,__PT_FLAGS(%r11)
+  .Lsysc_do_svc:
++       # clear user controlled register to prevent speculative use
++       xgr     %r0,%r0
+         # load address of system call table
+         lg      %r10,__THREAD_sysc_table(%r13,%r12)
+         llgh    %r8,__PT_INT_CODE+2(%r11)
+
+
+To me it looks like that the critical section cleanup (interrupt during system 
+call entry) might
+save the registers again into ptregs but we have already zeroed out r0.
+This patch moves the clearing of r0 after sysc_do_svc, which should fix the 
+critical
+section cleanup.
+
+Adding Martin and Heiko. Will spin a patch.
+Argh, yes. Thanks Chrisitan, this is it. I have been searching for the bug
+for days now. The point is that if the system call handler is interrupted
+after the xgr but before .Lsysc_do_svc the code at .Lcleanup_system_call
+repeats the stmg for %r0-%r7 but now %r0 is already zero.
+
+Please commit a patch for this and I'll will queue it up immediately.
+This patch does fix the QEMU crash. I haven't seen the crash after
+running the test case for more than a day. Thanks to everyone for taking
+a look at this problem :)
+Thanks
+Farhan
+
diff --git a/results/classifier/009/other/23270873 b/results/classifier/009/other/23270873
new file mode 100644
index 000000000..0140ac0a4
--- /dev/null
+++ b/results/classifier/009/other/23270873
@@ -0,0 +1,702 @@
+other: 0.839
+boot: 0.830
+vnc: 0.820
+device: 0.810
+KVM: 0.803
+permissions: 0.802
+debug: 0.788
+network: 0.768
+graphic: 0.764
+socket: 0.758
+semantic: 0.752
+performance: 0.744
+PID: 0.731
+files: 0.730
+
+[Qemu-devel] [BUG?] aio_get_linux_aio: Assertion `ctx->linux_aio' failed
+
+Hi,
+
+I am seeing some strange QEMU assertion failures for qemu on s390x,
+which prevent a guest from starting.
+
+Git bisecting points to the following commit as the source of the error.
+
+commit ed6e2161715c527330f936d44af4c547f25f687e
+Author: Nishanth Aravamudan <address@hidden>
+Date:   Fri Jun 22 12:37:00 2018 -0700
+
+    linux-aio: properly bubble up errors from initialization
+
+    laio_init() can fail for a couple of reasons, which will lead to a NULL
+    pointer dereference in laio_attach_aio_context().
+
+    To solve this, add a aio_setup_linux_aio() function which is called
+    early in raw_open_common. If this fails, propagate the error up. The
+    signature of aio_get_linux_aio() was not modified, because it seems
+    preferable to return the actual errno from the possible failing
+    initialization calls.
+
+    Additionally, when the AioContext changes, we need to associate a
+    LinuxAioState with the new AioContext. Use the bdrv_attach_aio_context
+    callback and call the new aio_setup_linux_aio(), which will allocate a
+new AioContext if needed, and return errors on failures. If it
+fails for
+any reason, fallback to threaded AIO with an error message, as the
+    device is already in-use by the guest.
+
+    Add an assert that aio_get_linux_aio() cannot return NULL.
+
+    Signed-off-by: Nishanth Aravamudan <address@hidden>
+    Message-id: address@hidden
+    Signed-off-by: Stefan Hajnoczi <address@hidden>
+Not sure what is causing this assertion to fail. Here is the qemu
+command line of the guest, from qemu log, which throws this error:
+LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
+QEMU_AUDIO_DRV=none /usr/local/bin/qemu-system-s390x -name
+guest=rt_vm1,debug-threads=on -S -object
+secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-21-rt_vm1/master-key.aes
+-machine s390-ccw-virtio-2.12,accel=kvm,usb=off,dump-guest-core=off -m
+1024 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -object
+iothread,id=iothread1 -uuid 0cde16cd-091d-41bd-9ac2-5243df5c9a0d
+-display none -no-user-config -nodefaults -chardev
+socket,id=charmonitor,fd=28,server,nowait -mon
+chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
+-boot strict=on -drive
+file=/dev/mapper/360050763998b0883980000002a000031,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
+-device
+virtio-blk-ccw,iothread=iothread1,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
+-netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
+virtio-net-ccw,netdev=hostnet0,id=net0,mac=02:3a:c8:67:95:84,devno=fe.0.0000
+-netdev tap,fd=32,id=hostnet1,vhost=on,vhostfd=33 -device
+virtio-net-ccw,netdev=hostnet1,id=net1,mac=52:54:00:2a:e5:08,devno=fe.0.0002
+-chardev pty,id=charconsole0 -device
+sclpconsole,chardev=charconsole0,id=console0 -device
+virtio-balloon-ccw,id=balloon0,devno=fe.3.ffba -sandbox
+on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
+-msg timestamp=on
+2018-07-17 15:48:42.252+0000: Domain id=21 is tainted: high-privileges
+2018-07-17T15:48:42.279380Z qemu-system-s390x: -chardev
+pty,id=charconsole0: char device redirected to /dev/pts/3 (label
+charconsole0)
+qemu-system-s390x: util/async.c:339: aio_get_linux_aio: Assertion
+`ctx->linux_aio' failed.
+2018-07-17 15:48:43.309+0000: shutting down, reason=failed
+
+
+Any help debugging this would be greatly appreciated.
+
+Thank you
+Farhan
+
+On 17.07.2018 [13:25:53 -0400], Farhan Ali wrote:
+>
+Hi,
+>
+>
+I am seeing some strange QEMU assertion failures for qemu on s390x,
+>
+which prevents a guest from starting.
+>
+>
+Git bisecting points to the following commit as the source of the error.
+>
+>
+commit ed6e2161715c527330f936d44af4c547f25f687e
+>
+Author: Nishanth Aravamudan <address@hidden>
+>
+Date:   Fri Jun 22 12:37:00 2018 -0700
+>
+>
+linux-aio: properly bubble up errors from initialization
+>
+>
+laio_init() can fail for a couple of reasons, which will lead to a NULL
+>
+pointer dereference in laio_attach_aio_context().
+>
+>
+To solve this, add a aio_setup_linux_aio() function which is called
+>
+early in raw_open_common. If this fails, propagate the error up. The
+>
+signature of aio_get_linux_aio() was not modified, because it seems
+>
+preferable to return the actual errno from the possible failing
+>
+initialization calls.
+>
+>
+Additionally, when the AioContext changes, we need to associate a
+>
+LinuxAioState with the new AioContext. Use the bdrv_attach_aio_context
+>
+callback and call the new aio_setup_linux_aio(), which will allocate a
+>
+new AioContext if needed, and return errors on failures. If it fails for
+>
+any reason, fallback to threaded AIO with an error message, as the
+>
+device is already in-use by the guest.
+>
+>
+Add an assert that aio_get_linux_aio() cannot return NULL.
+>
+>
+Signed-off-by: Nishanth Aravamudan <address@hidden>
+>
+Message-id: address@hidden
+>
+Signed-off-by: Stefan Hajnoczi <address@hidden>
+>
+>
+>
+Not sure what is causing this assertion to fail. Here is the qemu command
+>
+line of the guest, from qemu log, which throws this error:
+>
+>
+>
+LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
+>
+QEMU_AUDIO_DRV=none /usr/local/bin/qemu-system-s390x -name
+>
+guest=rt_vm1,debug-threads=on -S -object
+>
+secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-21-rt_vm1/master-key.aes
+>
+-machine s390-ccw-virtio-2.12,accel=kvm,usb=off,dump-guest-core=off -m 1024
+>
+-realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -object
+>
+iothread,id=iothread1 -uuid 0cde16cd-091d-41bd-9ac2-5243df5c9a0d -display
+>
+none -no-user-config -nodefaults -chardev
+>
+socket,id=charmonitor,fd=28,server,nowait -mon
+>
+chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot
+>
+strict=on -drive
+>
+file=/dev/mapper/360050763998b0883980000002a000031,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
+>
+-device
+>
+virtio-blk-ccw,iothread=iothread1,scsi=off,devno=fe.0.0001,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
+>
+-netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=31 -device
+>
+virtio-net-ccw,netdev=hostnet0,id=net0,mac=02:3a:c8:67:95:84,devno=fe.0.0000
+>
+-netdev tap,fd=32,id=hostnet1,vhost=on,vhostfd=33 -device
+>
+virtio-net-ccw,netdev=hostnet1,id=net1,mac=52:54:00:2a:e5:08,devno=fe.0.0002
+>
+-chardev pty,id=charconsole0 -device
+>
+sclpconsole,chardev=charconsole0,id=console0 -device
+>
+virtio-balloon-ccw,id=balloon0,devno=fe.3.ffba -sandbox
+>
+on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg
+>
+timestamp=on
+>
+>
+>
+>
+2018-07-17 15:48:42.252+0000: Domain id=21 is tainted: high-privileges
+>
+2018-07-17T15:48:42.279380Z qemu-system-s390x: -chardev pty,id=charconsole0:
+>
+char device redirected to /dev/pts/3 (label charconsole0)
+>
+qemu-system-s390x: util/async.c:339: aio_get_linux_aio: Assertion
+>
+`ctx->linux_aio' failed.
+>
+2018-07-17 15:48:43.309+0000: shutting down, reason=failed
+>
+>
+>
+Any help debugging this would be greatly appreciated.
+iiuc, this possibly implies AIO was not actually used previously on this
+guest (it might have silently been falling back to threaded IO?). I
+don't have access to s390x, but would it be possible to run qemu under
+gdb and see if aio_setup_linux_aio is being called at all (I think it
+might not be, but I'm not sure why), and if so, if it's for the context
+in question?
+
+If it's not being called first, could you see what callpath is calling
+aio_get_linux_aio when this assertion trips?
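+
+Something along these lines should show both, assuming a binary with debug
+symbols (the breakpoint names are the functions mentioned above):
+
+    (gdb) break aio_setup_linux_aio
+    (gdb) break aio_get_linux_aio
+    (gdb) continue          # or 'run' if starting qemu directly under gdb
+    (gdb) bt                # at each stop: who calls it, and from where
+    (gdb) print ctx         # which AioContext it is being called for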
+
+Thanks!
+-Nish
+
+On 07/17/2018 04:52 PM, Nishanth Aravamudan wrote:
+iiuc, this possibly implies AIO was not actually used previously on this
+guest (it might have silently been falling back to threaded IO?). I
+don't have access to s390x, but would it be possible to run qemu under
+gdb and see if aio_setup_linux_aio is being called at all (I think it
+might not be, but I'm not sure why), and if so, if it's for the context
+in question?
+
+If it's not being called first, could you see what callpath is calling
+aio_get_linux_aio when this assertion trips?
+
+Thanks!
+-Nish
+Hi Nishant,
+From the coredump of the guest this is the call trace that calls
+aio_get_linux_aio:
+Stack trace of thread 145158:
+#0  0x000003ff94dbe274 raise (libc.so.6)
+#1  0x000003ff94da39a8 abort (libc.so.6)
+#2  0x000003ff94db62ce __assert_fail_base (libc.so.6)
+#3  0x000003ff94db634c __assert_fail (libc.so.6)
+#4  0x000002aa20db067a aio_get_linux_aio (qemu-system-s390x)
+#5  0x000002aa20d229a8 raw_aio_plug (qemu-system-s390x)
+#6  0x000002aa20d309ee bdrv_io_plug (qemu-system-s390x)
+#7  0x000002aa20b5a8ea virtio_blk_handle_vq (qemu-system-s390x)
+#8  0x000002aa20db2f6e aio_dispatch_handlers (qemu-system-s390x)
+#9  0x000002aa20db3c34 aio_poll (qemu-system-s390x)
+#10 0x000002aa20be32a2 iothread_run (qemu-system-s390x)
+#11 0x000003ff94f879a8 start_thread (libpthread.so.0)
+#12 0x000003ff94e797ee thread_start (libc.so.6)
+
+
+Thanks for taking a look and responding.
+
+Thanks
+Farhan
+
+On 07/18/2018 09:42 AM, Farhan Ali wrote:
+On 07/17/2018 04:52 PM, Nishanth Aravamudan wrote:
+iiuc, this possibly implies AIO was not actually used previously on this
+guest (it might have silently been falling back to threaded IO?). I
+don't have access to s390x, but would it be possible to run qemu under
+gdb and see if aio_setup_linux_aio is being called at all (I think it
+might not be, but I'm not sure why), and if so, if it's for the context
+in question?
+
+If it's not being called first, could you see what callpath is calling
+aio_get_linux_aio when this assertion trips?
+
+Thanks!
+-Nish
+Hi Nishant,
+From the coredump of the guest this is the call trace that calls
+aio_get_linux_aio:
+Stack trace of thread 145158:
+#0  0x000003ff94dbe274 raise (libc.so.6)
+#1  0x000003ff94da39a8 abort (libc.so.6)
+#2  0x000003ff94db62ce __assert_fail_base (libc.so.6)
+#3  0x000003ff94db634c __assert_fail (libc.so.6)
+#4  0x000002aa20db067a aio_get_linux_aio (qemu-system-s390x)
+#5  0x000002aa20d229a8 raw_aio_plug (qemu-system-s390x)
+#6  0x000002aa20d309ee bdrv_io_plug (qemu-system-s390x)
+#7  0x000002aa20b5a8ea virtio_blk_handle_vq (qemu-system-s390x)
+#8  0x000002aa20db2f6e aio_dispatch_handlers (qemu-system-s390x)
+#9  0x000002aa20db3c34 aio_poll (qemu-system-s390x)
+#10 0x000002aa20be32a2 iothread_run (qemu-system-s390x)
+#11 0x000003ff94f879a8 start_thread (libpthread.so.0)
+#12 0x000003ff94e797ee thread_start (libc.so.6)
+
+
+Thanks for taking a look and responding.
+
+Thanks
+Farhan
+Trying to debug a little further: the block device in this case is a
+"host device". Looking at your commit carefully, you use the
+bdrv_attach_aio_context callback to set up a Linux AioContext.
+For some reason the "host device" struct (BlockDriver bdrv_host_device
+in block/file-posix.c) does not have bdrv_attach_aio_context defined.
+So a simple change of adding the callback to the struct solves the issue
+and the guest starts fine:
+diff --git a/block/file-posix.c b/block/file-posix.c
+index 28824aa..b8d59fb 100644
+--- a/block/file-posix.c
++++ b/block/file-posix.c
+@@ -3135,6 +3135,7 @@ static BlockDriver bdrv_host_device = {
+     .bdrv_refresh_limits = raw_refresh_limits,
+     .bdrv_io_plug = raw_aio_plug,
+     .bdrv_io_unplug = raw_aio_unplug,
++    .bdrv_attach_aio_context = raw_aio_attach_aio_context,
+
+     .bdrv_co_truncate       = raw_co_truncate,
+     .bdrv_getlength    = raw_getlength,
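+
+For reference, the assertion that trips is in util/async.c and looks roughly
+like this (paraphrased):
+
+    LinuxAioState *aio_get_linux_aio(AioContext *ctx)
+    {
+        /* only non-NULL once aio_setup_linux_aio() has run for this
+         * AioContext, i.e. at open time or via bdrv_attach_aio_context */
+        assert(ctx->linux_aio);
+        return ctx->linux_aio;
+    }
+
+so any BlockDriver that accepts aio=native but lacks the
+.bdrv_attach_aio_context callback will hit it as soon as its device is
+handed to an iothread.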
+I am not too familiar with block device code in QEMU, so not sure if
+this is the right fix or if there are some underlying problems.
+Thanks
+Farhan
+
+On 18.07.2018 [11:10:27 -0400], Farhan Ali wrote:
+>
+>
+>
+On 07/18/2018 09:42 AM, Farhan Ali wrote:
+>
+>
+>
+>
+>
+> On 07/17/2018 04:52 PM, Nishanth Aravamudan wrote:
+>
+> > iiuc, this possibly implies AIO was not actually used previously on this
+>
+> > guest (it might have silently been falling back to threaded IO?). I
+>
+> > don't have access to s390x, but would it be possible to run qemu under
+>
+> > gdb and see if aio_setup_linux_aio is being called at all (I think it
+>
+> > might not be, but I'm not sure why), and if so, if it's for the context
+>
+> > in question?
+>
+> >
+>
+> > If it's not being called first, could you see what callpath is calling
+>
+> > aio_get_linux_aio when this assertion trips?
+>
+> >
+>
+> > Thanks!
+>
+> > -Nish
+>
+>
+>
+>
+>
+> Hi Nishant,
+>
+>
+>
+>  From the coredump of the guest this is the call trace that calls
+>
+> aio_get_linux_aio:
+>
+>
+>
+>
+>
+> Stack trace of thread 145158:
+>
+> #0  0x000003ff94dbe274 raise (libc.so.6)
+>
+> #1  0x000003ff94da39a8 abort (libc.so.6)
+>
+> #2  0x000003ff94db62ce __assert_fail_base (libc.so.6)
+>
+> #3  0x000003ff94db634c __assert_fail (libc.so.6)
+>
+> #4  0x000002aa20db067a aio_get_linux_aio (qemu-system-s390x)
+>
+> #5  0x000002aa20d229a8 raw_aio_plug (qemu-system-s390x)
+>
+> #6  0x000002aa20d309ee bdrv_io_plug (qemu-system-s390x)
+>
+> #7  0x000002aa20b5a8ea virtio_blk_handle_vq (qemu-system-s390x)
+>
+> #8  0x000002aa20db2f6e aio_dispatch_handlers (qemu-system-s390x)
+>
+> #9  0x000002aa20db3c34 aio_poll (qemu-system-s390x)
+>
+> #10 0x000002aa20be32a2 iothread_run (qemu-system-s390x)
+>
+> #11 0x000003ff94f879a8 start_thread (libpthread.so.0)
+>
+> #12 0x000003ff94e797ee thread_start (libc.so.6)
+>
+>
+>
+>
+>
+> Thanks for taking a look and responding.
+>
+>
+>
+> Thanks
+>
+> Farhan
+>
+>
+>
+>
+>
+>
+>
+>
+Trying to debug a little further, the block device in this case is a "host
+>
+device". And looking at your commit carefully you use the
+>
+bdrv_attach_aio_context callback to setup a Linux AioContext.
+>
+>
+For some reason the "host device" struct (BlockDriver bdrv_host_device in
+>
+block/file-posix.c) does not have a bdrv_attach_aio_context defined.
+>
+So a simple change of adding the callback to the struct solves the issue and
+>
+the guest starts fine.
+>
+>
+>
+diff --git a/block/file-posix.c b/block/file-posix.c
+>
+index 28824aa..b8d59fb 100644
+>
+--- a/block/file-posix.c
+>
++++ b/block/file-posix.c
+>
+@@ -3135,6 +3135,7 @@ static BlockDriver bdrv_host_device = {
+>
+.bdrv_refresh_limits = raw_refresh_limits,
+>
+.bdrv_io_plug = raw_aio_plug,
+>
+.bdrv_io_unplug = raw_aio_unplug,
+>
++    .bdrv_attach_aio_context = raw_aio_attach_aio_context,
+>
+>
+.bdrv_co_truncate       = raw_co_truncate,
+>
+.bdrv_getlength    = raw_getlength,
+>
+>
+>
+>
+I am not too familiar with block device code in QEMU, so not sure if
+>
+this is the right fix or if there are some underlying problems.
+Oh, this is quite embarrassing! I only added the bdrv_attach_aio_context
+callback for the file-backed device. Your fix is definitely correct for
+the host device. Let me make sure there weren't any others missed and I will
+send out a properly formatted patch. Thank you for the quick testing and
+turnaround!
+
+-Nish
+
+On 07/18/2018 08:52 PM, Nishanth Aravamudan wrote:
+>
+On 18.07.2018 [11:10:27 -0400], Farhan Ali wrote:
+>
+>
+>
+>
+>
+> On 07/18/2018 09:42 AM, Farhan Ali wrote:
+>
+>>
+>
+>>
+>
+>> On 07/17/2018 04:52 PM, Nishanth Aravamudan wrote:
+>
+>>> iiuc, this possibly implies AIO was not actually used previously on this
+>
+>>> guest (it might have silently been falling back to threaded IO?). I
+>
+>>> don't have access to s390x, but would it be possible to run qemu under
+>
+>>> gdb and see if aio_setup_linux_aio is being called at all (I think it
+>
+>>> might not be, but I'm not sure why), and if so, if it's for the context
+>
+>>> in question?
+>
+>>>
+>
+>>> If it's not being called first, could you see what callpath is calling
+>
+>>> aio_get_linux_aio when this assertion trips?
+>
+>>>
+>
+>>> Thanks!
+>
+>>> -Nish
+>
+>>
+>
+>>
+>
+>> Hi Nishant,
+>
+>>
+>
+>>  From the coredump of the guest this is the call trace that calls
+>
+>> aio_get_linux_aio:
+>
+>>
+>
+>>
+>
+>> Stack trace of thread 145158:
+>
+>> #0  0x000003ff94dbe274 raise (libc.so.6)
+>
+>> #1  0x000003ff94da39a8 abort (libc.so.6)
+>
+>> #2  0x000003ff94db62ce __assert_fail_base (libc.so.6)
+>
+>> #3  0x000003ff94db634c __assert_fail (libc.so.6)
+>
+>> #4  0x000002aa20db067a aio_get_linux_aio (qemu-system-s390x)
+>
+>> #5  0x000002aa20d229a8 raw_aio_plug (qemu-system-s390x)
+>
+>> #6  0x000002aa20d309ee bdrv_io_plug (qemu-system-s390x)
+>
+>> #7  0x000002aa20b5a8ea virtio_blk_handle_vq (qemu-system-s390x)
+>
+>> #8  0x000002aa20db2f6e aio_dispatch_handlers (qemu-system-s390x)
+>
+>> #9  0x000002aa20db3c34 aio_poll (qemu-system-s390x)
+>
+>> #10 0x000002aa20be32a2 iothread_run (qemu-system-s390x)
+>
+>> #11 0x000003ff94f879a8 start_thread (libpthread.so.0)
+>
+>> #12 0x000003ff94e797ee thread_start (libc.so.6)
+>
+>>
+>
+>>
+>
+>> Thanks for taking a look and responding.
+>
+>>
+>
+>> Thanks
+>
+>> Farhan
+>
+>>
+>
+>>
+>
+>>
+>
+>
+>
+> Trying to debug a little further, the block device in this case is a "host
+>
+> device". And looking at your commit carefully you use the
+>
+> bdrv_attach_aio_context callback to setup a Linux AioContext.
+>
+>
+>
+> For some reason the "host device" struct (BlockDriver bdrv_host_device in
+>
+> block/file-posix.c) does not have a bdrv_attach_aio_context defined.
+>
+> So a simple change of adding the callback to the struct solves the issue and
+>
+> the guest starts fine.
+>
+>
+>
+>
+>
+> diff --git a/block/file-posix.c b/block/file-posix.c
+>
+> index 28824aa..b8d59fb 100644
+>
+> --- a/block/file-posix.c
+>
+> +++ b/block/file-posix.c
+>
+> @@ -3135,6 +3135,7 @@ static BlockDriver bdrv_host_device = {
+>
+>      .bdrv_refresh_limits = raw_refresh_limits,
+>
+>      .bdrv_io_plug = raw_aio_plug,
+>
+>      .bdrv_io_unplug = raw_aio_unplug,
+>
+> +    .bdrv_attach_aio_context = raw_aio_attach_aio_context,
+>
+>
+>
+>      .bdrv_co_truncate       = raw_co_truncate,
+>
+>      .bdrv_getlength    = raw_getlength,
+>
+>
+>
+>
+>
+>
+>
+> I am not too familiar with block device code in QEMU, so not sure if
+>
+> this is the right fix or if there are some underlying problems.
+>
+>
+Oh this is quite embarassing! I only added the bdrv_attach_aio_context
+>
+callback for the file-backed device. Your fix is definitely corect for
+>
+host device. Let me make sure there weren't any others missed and I will
+>
+send out a properly formatted patch. Thank you for the quick testing and
+>
+turnaround!
+Farhan, can you respin your patch with proper sign-off and patch description?
+Adding qemu-block.
+
+Hi Christian,
+
+On 19.07.2018 [08:55:20 +0200], Christian Borntraeger wrote:
+>
+>
+>
+On 07/18/2018 08:52 PM, Nishanth Aravamudan wrote:
+>
+> On 18.07.2018 [11:10:27 -0400], Farhan Ali wrote:
+>
+>>
+>
+>>
+>
+>> On 07/18/2018 09:42 AM, Farhan Ali wrote:
+<snip>
+
+>
+>> I am not too familiar with block device code in QEMU, so not sure if
+>
+>> this is the right fix or if there are some underlying problems.
+>
+>
+>
+> Oh this is quite embarassing! I only added the bdrv_attach_aio_context
+>
+> callback for the file-backed device. Your fix is definitely corect for
+>
+> host device. Let me make sure there weren't any others missed and I will
+>
+> send out a properly formatted patch. Thank you for the quick testing and
+>
+> turnaround!
+>
+>
+Farhan, can you respin your patch with proper sign-off and patch description?
+>
+Adding qemu-block.
+I sent it yesterday, sorry I didn't cc everyone from this e-mail:
+http://lists.nongnu.org/archive/html/qemu-block/2018-07/msg00516.html
+Thanks,
+Nish
+
diff --git a/results/classifier/009/other/25842545 b/results/classifier/009/other/25842545
new file mode 100644
index 000000000..103fe3722
--- /dev/null
+++ b/results/classifier/009/other/25842545
@@ -0,0 +1,212 @@
+other: 0.912
+KVM: 0.867
+vnc: 0.862
+device: 0.847
+debug: 0.836
+performance: 0.831
+semantic: 0.829
+PID: 0.829
+boot: 0.824
+graphic: 0.822
+permissions: 0.817
+socket: 0.808
+files: 0.806
+network: 0.796
+
+[Qemu-devel] [Bug?] Guest pause because VMPTRLD failed in KVM
+
+Hello,
+
+  We encountered a problem where a guest paused because the KVM kernel module
+reported that VMPTRLD failed.
+
+The related information is as follows:
+
+1) Qemu command:
+   /usr/bin/qemu-kvm -name omu1 -S -machine pc-i440fx-2.3,accel=kvm,usb=off -cpu
+host -m 15625 -realtime mlock=off -smp 8,sockets=1,cores=8,threads=1 -uuid
+a2aacfff-6583-48b4-b6a4-e6830e519931 -no-user-config -nodefaults -chardev
+socket,id=charmonitor,path=/var/lib/libvirt/qemu/omu1.monitor,server,nowait
+-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
+-boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
+virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
+file=/home/env/guest1.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native
+  -device
+virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0
+  -drive
+file=/home/env/guest_300G.img,if=none,id=drive-virtio-disk1,format=raw,cache=none,aio=native
+  -device
+virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1
+  -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26 -device
+virtio-net-pci,netdev=hostnet0,id=net0,mac=00:00:80:05:00:00,bus=pci.0,addr=0x3
+-netdev tap,fd=27,id=hostnet1,vhost=on,vhostfd=28 -device
+virtio-net-pci,netdev=hostnet1,id=net1,mac=00:00:80:05:00:01,bus=pci.0,addr=0x4
+-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
+-device usb-tablet,id=input0 -vnc 0.0.0.0:0 -device
+cirrus-vga,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -device
+virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on
+
+   2) Qemu log:
+   KVM: entry failed, hardware error 0x4
+   RAX=00000000ffffffed RBX=ffff8803fa2d7fd8 RCX=0100000000000000
+RDX=0000000000000000
+   RSI=0000000000000000 RDI=0000000000000046 RBP=ffff8803fa2d7e90
+RSP=ffff8803fa2efe90
+   R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000
+R11=000000000000b69a
+   R12=0000000000000001 R13=ffffffff81a25b40 R14=0000000000000000
+R15=ffff8803fa2d7fd8
+   RIP=ffffffff81053e16 RFL=00000286 [--S--P-] CPL=0 II=0 A20=1 SMM=0 HLT=0
+   ES =0000 0000000000000000 ffffffff 00c00000
+   CS =0010 0000000000000000 ffffffff 00a09b00 DPL=0 CS64 [-RA]
+   SS =0018 0000000000000000 ffffffff 00c09300 DPL=0 DS   [-WA]
+   DS =0000 0000000000000000 ffffffff 00c00000
+   FS =0000 0000000000000000 ffffffff 00c00000
+   GS =0000 ffff88040f540000 ffffffff 00c00000
+   LDT=0000 0000000000000000 ffffffff 00c00000
+   TR =0040 ffff88040f550a40 00002087 00008b00 DPL=0 TSS64-busy
+   GDT=     ffff88040f549000 0000007f
+   IDT=     ffffffffff529000 00000fff
+   CR0=80050033 CR2=00007f81ca0c5000 CR3=00000003f5081000 CR4=000407e0
+   DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000
+DR3=0000000000000000
+   DR6=00000000ffff0ff0 DR7=0000000000000400
+   EFER=0000000000000d01
+   Code=?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? <??> ?? ??
+?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ?? ??
+
+   3) Demsg
+   [347315.028339] kvm: vmptrld ffff8817ec5f0000/17ec5f0000 failed
+   klogd 1.4.1, ---------- state change ----------
+   [347315.039506] kvm: vmptrld ffff8817ec5f0000/17ec5f0000 failed
+   [347315.051728] kvm: vmptrld ffff8817ec5f0000/17ec5f0000 failed
+   [347315.057472] vmwrite error: reg 6c0a value ffff88307e66e480 (err
+2120672384)
+   [347315.064567] Pid: 69523, comm: qemu-kvm Tainted: GF           X
+3.0.93-0.8-default #1
+   [347315.064569] Call Trace:
+   [347315.064587]  [<ffffffff810049d5>] dump_trace+0x75/0x300
+   [347315.064595]  [<ffffffff8145e3e3>] dump_stack+0x69/0x6f
+   [347315.064617]  [<ffffffffa03738de>] vmx_vcpu_load+0x11e/0x1d0 [kvm_intel]
+   [347315.064647]  [<ffffffffa029a204>] kvm_arch_vcpu_load+0x44/0x1d0 [kvm]
+   [347315.064669]  [<ffffffff81054ee1>] finish_task_switch+0x81/0xe0
+   [347315.064676]  [<ffffffff8145f0b4>] thread_return+0x3b/0x2a7
+   [347315.064687]  [<ffffffffa028d9b5>] kvm_vcpu_block+0x65/0xa0 [kvm]
+   [347315.064703]  [<ffffffffa02a16d1>] __vcpu_run+0xd1/0x260 [kvm]
+   [347315.064732]  [<ffffffffa02a2418>] kvm_arch_vcpu_ioctl_run+0x68/0x1a0 
+[kvm]
+   [347315.064759]  [<ffffffffa028ecee>] kvm_vcpu_ioctl+0x38e/0x580 [kvm]
+   [347315.064771]  [<ffffffff8116bdfb>] do_vfs_ioctl+0x8b/0x3b0
+   [347315.064776]  [<ffffffff8116c1c1>] sys_ioctl+0xa1/0xb0
+   [347315.064783]  [<ffffffff81469272>] system_call_fastpath+0x16/0x1b
+   [347315.064797]  [<00007fee51969ce7>] 0x7fee51969ce6
+   [347315.064799] vmwrite error: reg 6c0c value ffff88307e664000 (err
+2120630272)
+   [347315.064802] Pid: 69523, comm: qemu-kvm Tainted: GF           X
+3.0.93-0.8-default #1
+   [347315.064803] Call Trace:
+   [347315.064807]  [<ffffffff810049d5>] dump_trace+0x75/0x300
+   [347315.064811]  [<ffffffff8145e3e3>] dump_stack+0x69/0x6f
+   [347315.064817]  [<ffffffffa03738ec>] vmx_vcpu_load+0x12c/0x1d0 [kvm_intel]
+   [347315.064832]  [<ffffffffa029a204>] kvm_arch_vcpu_load+0x44/0x1d0 [kvm]
+   [347315.064851]  [<ffffffff81054ee1>] finish_task_switch+0x81/0xe0
+   [347315.064855]  [<ffffffff8145f0b4>] thread_return+0x3b/0x2a7
+   [347315.064865]  [<ffffffffa028d9b5>] kvm_vcpu_block+0x65/0xa0 [kvm]
+   [347315.064880]  [<ffffffffa02a16d1>] __vcpu_run+0xd1/0x260 [kvm]
+   [347315.064907]  [<ffffffffa02a2418>] kvm_arch_vcpu_ioctl_run+0x68/0x1a0 
+[kvm]
+   [347315.064933]  [<ffffffffa028ecee>] kvm_vcpu_ioctl+0x38e/0x580 [kvm]
+   [347315.064943]  [<ffffffff8116bdfb>] do_vfs_ioctl+0x8b/0x3b0
+   [347315.064947]  [<ffffffff8116c1c1>] sys_ioctl+0xa1/0xb0
+   [347315.064951]  [<ffffffff81469272>] system_call_fastpath+0x16/0x1b
+   [347315.064957]  [<00007fee51969ce7>] 0x7fee51969ce6
+   [347315.064959] vmwrite error: reg 6c10 value 0 (err 0)
+
+   4) The issue can't be reproduced. I searched the Intel VMX spec for the
+reasons VMPTRLD can fail:
+   The instruction fails if its operand is not properly aligned, sets
+unsupported physical-address bits, or is equal to the VMXON
+   pointer. In addition, the instruction fails if the 32 bits in memory
+referenced by the operand do not match the VMCS
+   revision identifier supported by this processor.
+
+   But I can't find any clues in the KVM source code. It seems each of these
+   error conditions is impossible in theory. :(
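+
+   For what it's worth, the check that prints the dmesg line above sits in
+   vmcs_load() in arch/x86/kvm/vmx.c and (roughly paraphrased) only tests
+   the condition code the CPU sets after VMPTRLD:
+
+   static void vmcs_load(struct vmcs *vmcs)
+   {
+           u64 phys_addr = __pa(vmcs);
+           u8 error;
+
+           asm volatile (__ex(ASM_VMX_VMPTRLD_RAX) "; setna %0"
+                         : "=qm"(error) : "a"(&phys_addr), "m"(phys_addr)
+                         : "cc", "memory");
+           if (error)
+                   printk(KERN_ERR "kvm: vmptrld %p/%llx failed\n",
+                          vmcs, phys_addr);
+   }
+
+   So the software side only reports what the hardware signalled; it gives
+   no hint about which of the above failure conditions was actually hit.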
+
+Any suggestions will be appreciated! Paolo?
+
+-- 
+Regards,
+-Gonglei
+
+On 10/11/2016 15:10, gong lei wrote:
+>
+4) The isssue can't be reporduced. I search the Intel VMX sepc about
+>
+reaseons
+>
+of vmptrld failure:
+>
+The instruction fails if its operand is not properly aligned, sets
+>
+unsupported physical-address bits, or is equal to the VMXON
+>
+pointer. In addition, the instruction fails if the 32 bits in memory
+>
+referenced by the operand do not match the VMCS
+>
+revision identifier supported by this processor.
+>
+>
+But I can't find any cues from the KVM source code. It seems each
+>
+error conditions is impossible in theory. :(
+Yes, it should not happen. :(
+
+If it's not reproducible, it's really hard to say what it was, except a
+random memory corruption elsewhere or even a bit flip (!).
+
+Paolo
+
+On 2016/11/17 20:39, Paolo Bonzini wrote:
+>
+>
+On 10/11/2016 15:10, gong lei wrote:
+>
+>     4) The isssue can't be reporduced. I search the Intel VMX sepc about
+>
+> reaseons
+>
+> of vmptrld failure:
+>
+>     The instruction fails if its operand is not properly aligned, sets
+>
+> unsupported physical-address bits, or is equal to the VMXON
+>
+>     pointer. In addition, the instruction fails if the 32 bits in memory
+>
+> referenced by the operand do not match the VMCS
+>
+>     revision identifier supported by this processor.
+>
+>
+>
+>     But I can't find any cues from the KVM source code. It seems each
+>
+>     error conditions is impossible in theory. :(
+>
+Yes, it should not happen. :(
+>
+>
+If it's not reproducible, it's really hard to say what it was, except a
+>
+random memory corruption elsewhere or even a bit flip (!).
+>
+>
+Paolo
+Thanks for your reply, Paolo :)
+
+-- 
+Regards,
+-Gonglei
+
diff --git a/results/classifier/009/other/25892827 b/results/classifier/009/other/25892827
new file mode 100644
index 000000000..bccf4d818
--- /dev/null
+++ b/results/classifier/009/other/25892827
@@ -0,0 +1,1087 @@
+other: 0.892
+permissions: 0.881
+KVM: 0.872
+debug: 0.868
+vnc: 0.846
+boot: 0.839
+network: 0.839
+device: 0.839
+graphic: 0.832
+semantic: 0.825
+socket: 0.822
+performance: 0.819
+files: 0.804
+PID: 0.792
+
+[Qemu-devel] [BUG/RFC] Two cpus are not brought up normally in SLES11 sp3 VM after reboot
+
+Hi,
+
+Recently we encountered a problem in our project: 2 CPUs in a VM are not brought
+up normally after a reboot.
+
+Our host is using KVM kmod 3.6 and QEMU 2.1.
+The SLES 11 sp3 VM is configured with 8 vcpus,
+and its cpu model is configured with 'host-passthrough'.
+
+After the VM first started up, everything seemed to be OK,
+but then the VM panicked and rebooted.
+After the reboot, only 6 cpus were brought up in the VM; cpu1 and cpu7 are not online.
+
+This is the only message we can get from VM:
+VM dmesg shows:
+[    0.069867] Booting Node   0, Processors  #1
+[    5.060042] CPU1: Stuck ??
+[    5.060499]  #2
+[    5.088322] kvm-clock: cpu 2, msr 6:3fc90901, secondary cpu clock
+[    5.088335] KVM setup async PF for cpu 2
+[    5.092967] NMI watchdog enabled, takes one hw-pmu counter.
+[    5.094405]  #3
+[    5.108324] kvm-clock: cpu 3, msr 6:3fcd0901, secondary cpu clock
+[    5.108333] KVM setup async PF for cpu 3
+[    5.113553] NMI watchdog enabled, takes one hw-pmu counter.
+[    5.114970]  #4
+[    5.128325] kvm-clock: cpu 4, msr 6:3fd10901, secondary cpu clock
+[    5.128336] KVM setup async PF for cpu 4
+[    5.134576] NMI watchdog enabled, takes one hw-pmu counter.
+[    5.135998]  #5
+[    5.152324] kvm-clock: cpu 5, msr 6:3fd50901, secondary cpu clock
+[    5.152334] KVM setup async PF for cpu 5
+[    5.154764] NMI watchdog enabled, takes one hw-pmu counter.
+[    5.156467]  #6
+[    5.172327] kvm-clock: cpu 6, msr 6:3fd90901, secondary cpu clock
+[    5.172341] KVM setup async PF for cpu 6
+[    5.180738] NMI watchdog enabled, takes one hw-pmu counter.
+[    5.182173]  #7 Ok.
+[   10.170815] CPU7: Stuck ??
+[   10.171648] Brought up 6 CPUs
+[   10.172394] Total of 6 processors activated (28799.97 BogoMIPS).
+
+From the host, we found that the QEMU vcpu1 and vcpu7 threads were not consuming
+any CPU (they should be in an idle state).
+All of the vCPUs' stacks on the host look like this:
+
+[<ffffffffa07089b5>] kvm_vcpu_block+0x65/0xa0 [kvm]
+[<ffffffffa071c7c1>] __vcpu_run+0xd1/0x260 [kvm]
+[<ffffffffa071d508>] kvm_arch_vcpu_ioctl_run+0x68/0x1a0 [kvm]
+[<ffffffffa0709cee>] kvm_vcpu_ioctl+0x38e/0x580 [kvm]
+[<ffffffff8116be8b>] do_vfs_ioctl+0x8b/0x3b0
+[<ffffffff8116c251>] sys_ioctl+0xa1/0xb0
+[<ffffffff81468092>] system_call_fastpath+0x16/0x1b
+[<00002ab9fe1f99a7>] 0x2ab9fe1f99a7
+[<ffffffffffffffff>] 0xffffffffffffffff
+
+We looked into the kernel code that could lead to the above 'Stuck' warning,
+and found that the only possibility is that something is wrong with the
+emulation of the 'cpuid' instruction in KVM/QEMU.
+But since we can't reproduce this problem, we are not quite sure.
+Is it possible that the cpuid emulation in KVM/QEMU has some bug?
+
+Has anyone come across this problem before? Or any idea?
+
+Thanks,
+zhanghailiang
+
+On 06/07/2015 09:54, zhanghailiang wrote:
+> From the host, we found that the QEMU vcpu1 and vcpu7 threads were not
+> consuming any CPU (they should be in an idle state).
+> All of the vCPUs' stacks on the host look like this:
+>
+> [<ffffffffa07089b5>] kvm_vcpu_block+0x65/0xa0 [kvm]
+> [<ffffffffa071c7c1>] __vcpu_run+0xd1/0x260 [kvm]
+> [<ffffffffa071d508>] kvm_arch_vcpu_ioctl_run+0x68/0x1a0 [kvm]
+> [<ffffffffa0709cee>] kvm_vcpu_ioctl+0x38e/0x580 [kvm]
+> [<ffffffff8116be8b>] do_vfs_ioctl+0x8b/0x3b0
+> [<ffffffff8116c251>] sys_ioctl+0xa1/0xb0
+> [<ffffffff81468092>] system_call_fastpath+0x16/0x1b
+> [<00002ab9fe1f99a7>] 0x2ab9fe1f99a7
+> [<ffffffffffffffff>] 0xffffffffffffffff
+>
+> We looked into the kernel code that could lead to the above 'Stuck' warning,
+> and found that the only possibility is that something is wrong with the
+> emulation of the 'cpuid' instruction in KVM/QEMU.
+> But since we can't reproduce this problem, we are not quite sure.
+> Is it possible that the cpuid emulation in KVM/QEMU has some bug?
+Can you explain the relationship to the cpuid emulation?  What do the
+traces say about vcpus 1 and 7?
+
+Paolo
+
+On 2015/7/6 16:45, Paolo Bonzini wrote:
+On 06/07/2015 09:54, zhanghailiang wrote:
+From host, we found that QEMU vcpu1 thread and vcpu7 thread were not
+consuming any cpu (Should be in idle state),
+All of VCPUs' stacks in host is like bellow:
+
+[<ffffffffa07089b5>] kvm_vcpu_block+0x65/0xa0 [kvm]
+[<ffffffffa071c7c1>] __vcpu_run+0xd1/0x260 [kvm]
+[<ffffffffa071d508>] kvm_arch_vcpu_ioctl_run+0x68/0x1a0 [kvm]
+[<ffffffffa0709cee>] kvm_vcpu_ioctl+0x38e/0x580 [kvm]
+[<ffffffff8116be8b>] do_vfs_ioctl+0x8b/0x3b0
+[<ffffffff8116c251>] sys_ioctl+0xa1/0xb0
+[<ffffffff81468092>] system_call_fastpath+0x16/0x1b
+[<00002ab9fe1f99a7>] 0x2ab9fe1f99a7
+[<ffffffffffffffff>] 0xffffffffffffffff
+
+We looked into the kernel codes that could leading to the above 'Stuck'
+warning,
+and found that the only possible is the emulation of 'cpuid' instruct in
+kvm/qemu has something wrong.
+But since we can’t reproduce this problem, we are not quite sure.
+Is there any possible that the cupid emulation in kvm/qemu has some bug ?
+Can you explain the relationship to the cpuid emulation?  What do the
+traces say about vcpus 1 and 7?
+OK, we searched the VM's kernel code for the 'Stuck' message, and it is located
+in do_boot_cpu(). It runs in BSP context; the call chain is:
+the BSP executes start_kernel() -> smp_init() -> smp_boot_cpus() -> do_boot_cpu()
+-> wakeup_secondary_via_INIT() to trigger the APs.
+It will wait 5s for the APs to start up; if some AP does not start up normally,
+it will print 'CPU%d Stuck' or 'CPU%d: Not responding'.
+
+If it prints 'Stuck', it means the AP has received the SIPI interrupt and has
+begun to execute the code at 'ENTRY(trampoline_data)' (trampoline_64.S), but got
+stuck somewhere before smp_callin() (smpboot.c).
+The following is the startup flow of the BSP and the APs.
+BSP:
+start_kernel()
+  ->smp_init()
+     ->smp_boot_cpus()
+       ->do_boot_cpu()
+           ->start_ip = trampoline_address(); //set the address that AP will go 
+to execute
+           ->wakeup_secondary_cpu_via_init(); // kick the secondary CPU
+           ->for (timeout = 0; timeout < 50000; timeout++)
+               if (cpumask_test_cpu(cpu, cpu_callin_mask)) break;// check if AP 
+startup or not
+
+APs:
+ENTRY(trampoline_data) (trampoline_64.S)
+      ->ENTRY(secondary_startup_64) (head_64.S)
+         ->start_secondary() (smpboot.c)
+            ->cpu_init();
+            ->smp_callin();
+                ->cpumask_set_cpu(cpuid, cpu_callin_mask); -> Note: once the AP
+gets here, the BSP will not print the error message.
+
+From the above call chain, we can be sure that the AP got stuck between
+trampoline_data and the cpumask_set_cpu() in smp_callin(). We looked through
+this code path carefully, and the only thing we found that could block the
+process is a 'hlt' instruction.
+It is located in trampoline_data():
+
+ENTRY(trampoline_data)
+        ...
+
+        call    verify_cpu              # Verify the cpu supports long mode
+        testl   %eax, %eax              # Check for return code
+        jnz     no_longmode
+
+        ...
+
+no_longmode:
+        hlt
+        jmp no_longmode
+
+In verify_cpu(),
+the only sensitive instruction we can find that could cause a VM exit from
+non-root mode is 'cpuid'.
+This is why we suspect that the cpuid emulation in KVM/QEMU is wrong and leads
+to the failure in verify_cpu.
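+
+(For readers not familiar with what verify_cpu actually tests: the long-mode
+check boils down to a cpuid query. Below is a minimal userspace sketch of that
+query, added purely as an illustration of the cpuid usage under suspicion; it
+is not the kernel's verify_cpu code.)
+
+    #include <stdio.h>
+    #include <cpuid.h>   /* GCC/clang helper around the cpuid instruction */
+
+    int main(void)
+    {
+        unsigned int eax, ebx, ecx, edx;
+
+        /* Leaf 0x80000001, EDX bit 29 is the "Long Mode" (64-bit) capability
+         * bit. __get_cpuid() returns 0 if the extended leaf is unsupported. */
+        if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
+            printf("extended cpuid leaf not supported\n");
+            return 1;
+        }
+        printf("long mode %ssupported\n", (edx & (1u << 29)) ? "" : "NOT ");
+        return 0;
+    }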
+
+From the messages in the VM, we know that something is wrong with vcpu1 and vcpu7:
+[    5.060042] CPU1: Stuck ??
+[   10.170815] CPU7: Stuck ??
+[   10.171648] Brought up 6 CPUs
+
+Besides, the following is the CPU information obtained from the host.
+80FF72F5-FF6D-E411-A8C8-000000821800:/home/fsp/hrg # virsh qemu-monitor-command 
+instance-0000000
+* CPU #0: pc=0x00007f64160c683d thread_id=68570
+  CPU #1: pc=0xffffffff810301f1 (halted) thread_id=68573
+  CPU #2: pc=0xffffffff810301e2 (halted) thread_id=68575
+  CPU #3: pc=0xffffffff810301e2 (halted) thread_id=68576
+  CPU #4: pc=0xffffffff810301e2 (halted) thread_id=68577
+  CPU #5: pc=0xffffffff810301e2 (halted) thread_id=68578
+  CPU #6: pc=0xffffffff810301e2 (halted) thread_id=68583
+  CPU #7: pc=0xffffffff810301f1 (halted) thread_id=68584
+
+Oh, I also forgot to mention in the above message that we have bound each vCPU
+to a different physical CPU on the host.
+
+Thanks,
+zhanghailiang
+
+On 06/07/2015 11:59, zhanghailiang wrote:
+> Besides, the following is the CPU information obtained from the host.
+> 80FF72F5-FF6D-E411-A8C8-000000821800:/home/fsp/hrg # virsh
+> qemu-monitor-command instance-0000000
+> * CPU #0: pc=0x00007f64160c683d thread_id=68570
+>   CPU #1: pc=0xffffffff810301f1 (halted) thread_id=68573
+>   CPU #2: pc=0xffffffff810301e2 (halted) thread_id=68575
+>   CPU #3: pc=0xffffffff810301e2 (halted) thread_id=68576
+>   CPU #4: pc=0xffffffff810301e2 (halted) thread_id=68577
+>   CPU #5: pc=0xffffffff810301e2 (halted) thread_id=68578
+>   CPU #6: pc=0xffffffff810301e2 (halted) thread_id=68583
+>   CPU #7: pc=0xffffffff810301f1 (halted) thread_id=68584
+>
+> Oh, I also forgot to mention in the above message that we have bound
+> each vCPU to a different physical CPU on the host.
+Can you capture a trace on the host (trace-cmd record -e kvm) and send
+it privately?  Please note which CPUs get stuck, since I guess it's not
+always 1 and 7.
+
+Paolo
+
+On Mon, 6 Jul 2015 17:59:10 +0800
+zhanghailiang <address@hidden> wrote:
+
+> On 2015/7/6 16:45, Paolo Bonzini wrote:
+>> On 06/07/2015 09:54, zhanghailiang wrote:
+>>> From the host, we found that the QEMU vcpu1 and vcpu7 threads were not
+>>> consuming any CPU (they should be in an idle state).
+>>> All of the vCPUs' stacks on the host look like this:
+>>>
+>>> [<ffffffffa07089b5>] kvm_vcpu_block+0x65/0xa0 [kvm]
+>>> [<ffffffffa071c7c1>] __vcpu_run+0xd1/0x260 [kvm]
+>>> [<ffffffffa071d508>] kvm_arch_vcpu_ioctl_run+0x68/0x1a0 [kvm]
+>>> [<ffffffffa0709cee>] kvm_vcpu_ioctl+0x38e/0x580 [kvm]
+>>> [<ffffffff8116be8b>] do_vfs_ioctl+0x8b/0x3b0
+>>> [<ffffffff8116c251>] sys_ioctl+0xa1/0xb0
+>>> [<ffffffff81468092>] system_call_fastpath+0x16/0x1b
+>>> [<00002ab9fe1f99a7>] 0x2ab9fe1f99a7
+>>> [<ffffffffffffffff>] 0xffffffffffffffff
+>>>
+>>> We looked into the kernel code that could lead to the above 'Stuck'
+>>> warning,
+in current upstream there isn't any printk(...Stuck...) left since that code path
+has been reworked.
+I've often seen this on an over-committed host during guest CPU up/down torture
+tests.
+Could you update the guest kernel to upstream and see if the issue reproduces?
+
+>>> and found that the only possibility is that something is wrong with the
+>>> emulation of the 'cpuid' instruction in KVM/QEMU.
+>>> But since we can't reproduce this problem, we are not quite sure.
+>>> Is it possible that the cpuid emulation in KVM/QEMU has some bug?
+>>
+>> Can you explain the relationship to the cpuid emulation?  What do the
+>> traces say about vcpus 1 and 7?
+>
+> [... full quote of the earlier do_boot_cpu()/verify_cpu analysis and CPU
+> listing trimmed; see above ...]
+>
+> Thanks,
+> zhanghailiang
+
+On 2015/7/7 19:23, Igor Mammedov wrote:
+On Mon, 6 Jul 2015 17:59:10 +0800
+zhanghailiang <address@hidden> wrote:
+On 2015/7/6 16:45, Paolo Bonzini wrote:
+On 06/07/2015 09:54, zhanghailiang wrote:
+From host, we found that QEMU vcpu1 thread and vcpu7 thread were not
+consuming any cpu (Should be in idle state),
+All of VCPUs' stacks in host is like bellow:
+
+[<ffffffffa07089b5>] kvm_vcpu_block+0x65/0xa0 [kvm]
+[<ffffffffa071c7c1>] __vcpu_run+0xd1/0x260 [kvm]
+[<ffffffffa071d508>] kvm_arch_vcpu_ioctl_run+0x68/0x1a0 [kvm]
+[<ffffffffa0709cee>] kvm_vcpu_ioctl+0x38e/0x580 [kvm]
+[<ffffffff8116be8b>] do_vfs_ioctl+0x8b/0x3b0
+[<ffffffff8116c251>] sys_ioctl+0xa1/0xb0
+[<ffffffff81468092>] system_call_fastpath+0x16/0x1b
+[<00002ab9fe1f99a7>] 0x2ab9fe1f99a7
+[<ffffffffffffffff>] 0xffffffffffffffff
+
+We looked into the kernel codes that could leading to the above 'Stuck'
+warning,
+in current upstream there isn't any printk(...Stuck...) left since that code 
+path
+has been reworked.
+I've often seen this on over-committed host during guest CPUs up/down torture 
+test.
+Could you update guest kernel to upstream and see if issue reproduces?
+Hmm, Unfortunately, it is very hard to reproduce, and we are still trying to 
+reproduce it.
+
+For your test case, is it a kernel bug?
+Or is there any related patch that could solve your test problem which has been
+merged into upstream?
+
+Thanks,
+zhanghailiang
+and found that the only possible is the emulation of 'cpuid' instruct in
+kvm/qemu has something wrong.
+But since we can’t reproduce this problem, we are not quite sure.
+Is there any possible that the cupid emulation in kvm/qemu has some bug ?
+Can you explain the relationship to the cpuid emulation?  What do the
+traces say about vcpus 1 and 7?
+OK, we searched the VM's kernel codes with the 'Stuck' message, and  it is 
+located in
+do_boot_cpu(). It's in BSP context, the call process is:
+BSP executes start_kernel() -> smp_init() -> smp_boot_cpus() -> do_boot_cpu() 
+-> wakeup_secondary_via_INIT() to trigger APs.
+It will wait 5s for APs to startup, if some AP not startup normally, it will 
+print 'CPU%d Stuck' or 'CPU%d: Not responding'.
+
+If it prints 'Stuck', it means the AP has received the SIPI interrupt and 
+begins to execute the code
+'ENTRY(trampoline_data)' (trampoline_64.S) , but be stuck in some places before 
+smp_callin()(smpboot.c).
+The follow is the starup process of BSP and AP.
+BSP:
+start_kernel()
+    ->smp_init()
+       ->smp_boot_cpus()
+         ->do_boot_cpu()
+             ->start_ip = trampoline_address(); //set the address that AP will 
+go to execute
+             ->wakeup_secondary_cpu_via_init(); // kick the secondary CPU
+             ->for (timeout = 0; timeout < 50000; timeout++)
+                 if (cpumask_test_cpu(cpu, cpu_callin_mask)) break;// check if 
+AP startup or not
+
+APs:
+ENTRY(trampoline_data) (trampoline_64.S)
+        ->ENTRY(secondary_startup_64) (head_64.S)
+           ->start_secondary() (smpboot.c)
+              ->cpu_init();
+              ->smp_callin();
+                  ->cpumask_set_cpu(cpuid, cpu_callin_mask); ->Note: if AP 
+comes here, the BSP will not prints the error message.
+
+  From above call process, we can be sure that, the AP has been stuck between 
+trampoline_data and the cpumask_set_cpu() in
+smp_callin(), we look through these codes path carefully, and only found a 
+'hlt' instruct that could block the process.
+It is located in trampoline_data():
+
+ENTRY(trampoline_data)
+          ...
+
+        call    verify_cpu              # Verify the cpu supports long mode
+        testl   %eax, %eax              # Check for return code
+        jnz     no_longmode
+
+          ...
+
+no_longmode:
+        hlt
+        jmp no_longmode
+
+For the process verify_cpu(),
+we can only find the 'cpuid' sensitive instruct that could lead VM exit from 
+No-root mode.
+This is why we doubt if cpuid emulation is wrong in KVM/QEMU that leading to 
+the fail in verify_cpu.
+
+  From the message in VM, we know vcpu1 and vcpu7 is something wrong.
+[    5.060042] CPU1: Stuck ??
+[   10.170815] CPU7: Stuck ??
+[   10.171648] Brought up 6 CPUs
+
+Besides, the follow is the cpus message got from host.
+80FF72F5-FF6D-E411-A8C8-000000821800:/home/fsp/hrg # virsh qemu-monitor-command 
+instance-0000000
+* CPU #0: pc=0x00007f64160c683d thread_id=68570
+    CPU #1: pc=0xffffffff810301f1 (halted) thread_id=68573
+    CPU #2: pc=0xffffffff810301e2 (halted) thread_id=68575
+    CPU #3: pc=0xffffffff810301e2 (halted) thread_id=68576
+    CPU #4: pc=0xffffffff810301e2 (halted) thread_id=68577
+    CPU #5: pc=0xffffffff810301e2 (halted) thread_id=68578
+    CPU #6: pc=0xffffffff810301e2 (halted) thread_id=68583
+    CPU #7: pc=0xffffffff810301f1 (halted) thread_id=68584
+
+Oh, i also forgot to mention in the above message that, we have bond each vCPU 
+to different physical CPU in
+host.
+
+Thanks,
+zhanghailiang
+
+
+
+
+--
+To unsubscribe from this list: send the line "unsubscribe kvm" in
+the body of a message to address@hidden
+More majordomo info at
+http://vger.kernel.org/majordomo-info.html
+.
+
+On Tue, 7 Jul 2015 19:43:35 +0800
+zhanghailiang <address@hidden> wrote:
+
+> On 2015/7/7 19:23, Igor Mammedov wrote:
+>> On Mon, 6 Jul 2015 17:59:10 +0800
+>> zhanghailiang <address@hidden> wrote:
+>> [... earlier thread quoted in full, trimmed; see above ...]
+>> in current upstream there isn't any printk(...Stuck...) left since that
+>> code path has been reworked.
+>> I've often seen this on an over-committed host during guest CPU up/down
+>> torture tests.
+>> Could you update the guest kernel to upstream and see if the issue
+>> reproduces?
+>
+> Hmm, unfortunately, it is very hard to reproduce, and we are still trying to
+> reproduce it.
+>
+> For your test case, is it a kernel bug?
+> Or is there any related patch that could solve your test problem which has
+> been merged into upstream?
+I don't remember all the prerequisite patches but you should be able to find
+http://marc.info/?l=linux-kernel&m=140326703108009&w=2
+"x86/smpboot: Initialize secondary CPU only if master CPU will wait for it"
+and then look for dependencies.
+
+> [... rest of the earlier thread quoted in full, trimmed ...]
+
+On 2015/7/7 20:21, Igor Mammedov wrote:
+On Tue, 7 Jul 2015 19:43:35 +0800
+zhanghailiang <address@hidden> wrote:
+On 2015/7/7 19:23, Igor Mammedov wrote:
+On Mon, 6 Jul 2015 17:59:10 +0800
+zhanghailiang <address@hidden> wrote:
+On 2015/7/6 16:45, Paolo Bonzini wrote:
+On 06/07/2015 09:54, zhanghailiang wrote:
+From host, we found that QEMU vcpu1 thread and vcpu7 thread were not
+consuming any cpu (Should be in idle state),
+All of VCPUs' stacks in host is like bellow:
+
+[<ffffffffa07089b5>] kvm_vcpu_block+0x65/0xa0 [kvm]
+[<ffffffffa071c7c1>] __vcpu_run+0xd1/0x260 [kvm]
+[<ffffffffa071d508>] kvm_arch_vcpu_ioctl_run+0x68/0x1a0 [kvm]
+[<ffffffffa0709cee>] kvm_vcpu_ioctl+0x38e/0x580 [kvm]
+[<ffffffff8116be8b>] do_vfs_ioctl+0x8b/0x3b0
+[<ffffffff8116c251>] sys_ioctl+0xa1/0xb0
+[<ffffffff81468092>] system_call_fastpath+0x16/0x1b
+[<00002ab9fe1f99a7>] 0x2ab9fe1f99a7
+[<ffffffffffffffff>] 0xffffffffffffffff
+
+We looked into the kernel codes that could leading to the above 'Stuck'
+warning,
+in current upstream there isn't any printk(...Stuck...) left since that code 
+path
+has been reworked.
+I've often seen this on over-committed host during guest CPUs up/down torture 
+test.
+Could you update guest kernel to upstream and see if issue reproduces?
+Hmm, Unfortunately, it is very hard to reproduce, and we are still trying to 
+reproduce it.
+
+For your test case, is it a kernel bug?
+Or is there any related patch could solve your test problem been merged into
+upstream ?
+I don't remember all prerequisite patches but you should be able to find
+http://marc.info/?l=linux-kernel&m=140326703108009&w=2
+"x86/smpboot: Initialize secondary CPU only if master CPU will wait for it"
+and then look for dependencies.
+Er, we have investigated this patch, and it is not related to our problem, :)
+
+Thanks.
+Thanks,
+zhanghailiang
+and found that the only possible is the emulation of 'cpuid' instruct in
+kvm/qemu has something wrong.
+But since we can’t reproduce this problem, we are not quite sure.
+Is there any possible that the cupid emulation in kvm/qemu has some bug ?
+Can you explain the relationship to the cpuid emulation?  What do the
+traces say about vcpus 1 and 7?
+OK, we searched the VM's kernel codes with the 'Stuck' message, and  it is 
+located in
+do_boot_cpu(). It's in BSP context, the call process is:
+BSP executes start_kernel() -> smp_init() -> smp_boot_cpus() -> do_boot_cpu() 
+-> wakeup_secondary_via_INIT() to trigger APs.
+It will wait 5s for APs to startup, if some AP not startup normally, it will 
+print 'CPU%d Stuck' or 'CPU%d: Not responding'.
+
+If it prints 'Stuck', it means the AP has received the SIPI interrupt and 
+begins to execute the code
+'ENTRY(trampoline_data)' (trampoline_64.S) , but be stuck in some places before 
+smp_callin()(smpboot.c).
+The follow is the starup process of BSP and AP.
+BSP:
+start_kernel()
+     ->smp_init()
+        ->smp_boot_cpus()
+          ->do_boot_cpu()
+              ->start_ip = trampoline_address(); //set the address that AP will 
+go to execute
+              ->wakeup_secondary_cpu_via_init(); // kick the secondary CPU
+              ->for (timeout = 0; timeout < 50000; timeout++)
+                  if (cpumask_test_cpu(cpu, cpu_callin_mask)) break;// check if 
+AP startup or not
+
+APs:
+ENTRY(trampoline_data) (trampoline_64.S)
+         ->ENTRY(secondary_startup_64) (head_64.S)
+            ->start_secondary() (smpboot.c)
+               ->cpu_init();
+               ->smp_callin();
+                   ->cpumask_set_cpu(cpuid, cpu_callin_mask); ->Note: if AP 
+comes here, the BSP will not prints the error message.
+
+   From above call process, we can be sure that, the AP has been stuck between 
+trampoline_data and the cpumask_set_cpu() in
+smp_callin(), we look through these codes path carefully, and only found a 
+'hlt' instruct that could block the process.
+It is located in trampoline_data():
+
+ENTRY(trampoline_data)
+           ...
+
+        call    verify_cpu              # Verify the cpu supports long mode
+        testl   %eax, %eax              # Check for return code
+        jnz     no_longmode
+
+           ...
+
+no_longmode:
+        hlt
+        jmp no_longmode
+
+For the process verify_cpu(),
+we can only find the 'cpuid' sensitive instruct that could lead VM exit from 
+No-root mode.
+This is why we doubt if cpuid emulation is wrong in KVM/QEMU that leading to 
+the fail in verify_cpu.
+
+   From the message in VM, we know vcpu1 and vcpu7 is something wrong.
+[    5.060042] CPU1: Stuck ??
+[   10.170815] CPU7: Stuck ??
+[   10.171648] Brought up 6 CPUs
+
+Besides, the follow is the cpus message got from host.
+80FF72F5-FF6D-E411-A8C8-000000821800:/home/fsp/hrg # virsh qemu-monitor-command 
+instance-0000000
+* CPU #0: pc=0x00007f64160c683d thread_id=68570
+     CPU #1: pc=0xffffffff810301f1 (halted) thread_id=68573
+     CPU #2: pc=0xffffffff810301e2 (halted) thread_id=68575
+     CPU #3: pc=0xffffffff810301e2 (halted) thread_id=68576
+     CPU #4: pc=0xffffffff810301e2 (halted) thread_id=68577
+     CPU #5: pc=0xffffffff810301e2 (halted) thread_id=68578
+     CPU #6: pc=0xffffffff810301e2 (halted) thread_id=68583
+     CPU #7: pc=0xffffffff810301f1 (halted) thread_id=68584
+
+Oh, i also forgot to mention in the above message that, we have bond each vCPU 
+to different physical CPU in
+host.
+
+Thanks,
+zhanghailiang
+
+
+
+
+--
+To unsubscribe from this list: send the line "unsubscribe kvm" in
+the body of a message to address@hidden
+More majordomo info at
+http://vger.kernel.org/majordomo-info.html
+.
+.
+
diff --git a/results/classifier/009/other/32484936 b/results/classifier/009/other/32484936
new file mode 100644
index 000000000..f849e1e5e
--- /dev/null
+++ b/results/classifier/009/other/32484936
@@ -0,0 +1,233 @@
+other: 0.856
+PID: 0.839
+semantic: 0.832
+vnc: 0.830
+device: 0.830
+socket: 0.829
+permissions: 0.826
+debug: 0.825
+files: 0.816
+performance: 0.815
+graphic: 0.813
+network: 0.811
+boot: 0.810
+KVM: 0.793
+
+[Qemu-devel] [Snapshot Bug?]Qcow2 meta data corruption
+
+Hi all,
+A problem with the qcow2 image files happened in several of my VMs and I could not figure it out,
+so I have to ask for some help.
+Here is the thing:
+At first, I found some data corruption in one VM, so I ran qemu-img check on all my VMs.
+Parts of the check report:
+3-Leaked cluster 2926229 refcount=1 reference=0
+4-Leaked cluster 3021181 refcount=1 reference=0
+5-Leaked cluster 3021182 refcount=1 reference=0
+6-Leaked cluster 3021183 refcount=1 reference=0
+7-Leaked cluster 3021184 refcount=1 reference=0
+8-ERROR cluster 3102547 refcount=3 reference=4
+9-ERROR cluster 3111536 refcount=3 reference=4
+10-ERROR cluster 3113369 refcount=3 reference=4
+11-ERROR cluster 3235590 refcount=10 reference=11
+12-ERROR cluster 3235591 refcount=10 reference=11
+423-Warning: cluster offset=0xc000c00020000 is after the end of the image file, can't properly check refcounts.
+424-Warning: cluster offset=0xc000c000c0000 is after the end of the image file, can't properly check refcounts.
+425-Warning: cluster offset=0xc0001000c0000 is after the end of the image file, can't properly check refcounts.
+426-Warning: cluster offset=0xc000c000c0000 is after the end of the image file, can't properly check refcounts.
+427-Warning: cluster offset=0xc000c000c0000 is after the end of the image file, can't properly check refcounts.
+428-Warning: cluster offset=0xc000c000c0000 is after the end of the image file, can't properly check refcounts.
+429-Warning: cluster offset=0xc000c000c0000 is after the end of the image file, can't properly check refcounts.
+430-Warning: cluster offset=0xc000c00010000 is after the end of the image file, can't properly check refcounts.
+After a further look, I found two L2 entries pointing to the same cluster, and that was found in several qcow2 image files of different VMs.
+Like this:
+table entry conflict (with our qcow2 check tool):
+a table offset : 0x00000093f7080000 level : 2, l1 table entry 100, l2 table entry 7
+b table offset : 0x00000093f7080000 level : 2, l1 table entry 5, l2 table entry 7
+table entry conflict :
+a table offset : 0x00000000a01e0000 level : 2, l1 table entry 100, l2 table entry 19
+b table offset : 0x00000000a01e0000 level : 2, l1 table entry 5, l2 table entry 19
+table entry conflict :
+a table offset : 0x00000000a01d0000 level : 2, l1 table entry 100, l2 table entry 18
+b table offset : 0x00000000a01d0000 level : 2, l1 table entry 5, l2 table entry 18
+table entry conflict :
+a table offset : 0x00000000a01c0000 level : 2, l1 table entry 100, l2 table entry 17
+b table offset : 0x00000000a01c0000 level : 2, l1 table entry 5, l2 table entry 17
+table entry conflict :
+a table offset : 0x00000000a01b0000 level : 2, l1 table entry 100, l2 table entry 16
+b table offset : 0x00000000a01b0000 level : 2, l1 table entry 5, l2 table entry 16
+I think the problem is related to snapshot creation and deletion, but I can't reproduce it.
+Can anyone give a hint about how this could happen?
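+(To make it clearer what our check tool is flagging, here is a toy sketch of the
+duplicate-reference check. It works on already-extracted (L1 index, L2 index,
+host cluster offset) tuples instead of parsing the image, so it is an
+illustration only, not the actual tool:)
+
+    #include <stdio.h>
+    #include <stdint.h>
+
+    /* One guest-to-host mapping extracted from the image's L1/L2 tables. */
+    struct l2_ref {
+        int l1_index;          /* which L1 entry the L2 table belongs to */
+        int l2_index;          /* slot inside that L2 table */
+        uint64_t host_offset;  /* host cluster offset the entry points to */
+    };
+
+    /* Flag every host cluster referenced by more than one entry. Whether a
+     * shared cluster is legitimate depends on its refcount (internal
+     * snapshots share clusters on purpose). */
+    static void report_conflicts(const struct l2_ref *refs, int n)
+    {
+        for (int i = 0; i < n; i++) {
+            for (int j = i + 1; j < n; j++) {
+                if (refs[i].host_offset == refs[j].host_offset) {
+                    printf("table entry conflict: offset 0x%016llx "
+                           "(L1 %d/L2 %d vs L1 %d/L2 %d)\n",
+                           (unsigned long long)refs[i].host_offset,
+                           refs[i].l1_index, refs[i].l2_index,
+                           refs[j].l1_index, refs[j].l2_index);
+                }
+            }
+        }
+    }
+
+    int main(void)
+    {
+        struct l2_ref refs[] = {
+            { 100, 7, 0x00000093f7080000ULL },
+            {   5, 7, 0x00000093f7080000ULL },  /* same cluster: conflict */
+            {   5, 8, 0x00000093f7090000ULL },
+        };
+        report_conflicts(refs, 3);
+        return 0;
+    }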
+QEMU version 2.0.1; I downloaded the source code and built and installed it myself.
+Qemu parameters:
+/usr/bin/kvm -chardev socket,id=qmp,path=/var/run/qemu-server/5855899639838.qmp,server,nowait -mon chardev=qmp,mode=control -vnc :0,websocket,to=200 -enable-kvm -pidfile /var/run/qemu-server/5855899639838.pid -daemonize -name yfMailSvr-200.200.0.14 -smp sockets=1,cores=4 -cpu core2duo,hv_spinlocks=0xffff,hv_relaxed,hv_time,hv_vapic,+sse4.1,+sse4.2,+x2apic,+erms,+smep,+fsgsbase,+f16c,+dca,+pcid,+pdcm,+xtpr,+ht,+ss,+acpi,+ds -nodefaults -vga cirrus -k en-us -boot menu=on,splash-time=8000 -m 8192 -usb -drive if=none,id=drive-ide0,media=cdrom,aio=native -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0 -drive file=/sf/data/36b82a720d3a278001ba904e80c20c13e_ecf4bbbf3e94/images/host-ecf4bbbf3e94/784f3f08532a/yfMailSvr-200.200.0.14.vm/vm-disk-1.qcow2,if=none,id=drive-virtio1,cache=none,aio=native -device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb -drive file=/sf/data/36b82a720d3a278001ba904e80c20c13e_ecf4bbbf3e94/images/host-ecf4bbbf3e94/784f3f08532a/yfMailSvr-200.200.0.14.vm/vm-disk-2.qcow2,if=none,id=drive-virtio2,cache=none,aio=native -device virtio-blk-pci,drive=drive-virtio2,id=virtio2,bus=pci.0,addr=0xc,bootindex=101 -netdev type=tap,id=net0,ifname=585589963983800,script=/sf/etc/kvm/vtp-bridge,vhost=on,vhostforce=on -device virtio-net-pci,romfile=,mac=FE:FC:FE:F0:AB:BA,netdev=net0,bus=pci.0,addr=0x12,id=net0 -rtc driftfix=slew,clock=rt,base=localtime -global kvm-pit.lost_tick_policy=discard -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1
+Thanks
+Sangfor VT.
+leijian
+
+
+Am 03.04.2015 um 12:04 hat leijian geschrieben:
+> Hi all,
+>
+> A problem with the qcow2 image files happened in several of my VMs and I
+> could not figure it out, so I have to ask for some help.
+> [...]
+> I think the problem is related to snapshot creation and deletion, but I
+> can't reproduce it.
+> Can anyone give a hint about how this could happen?
+How did you create/delete your snapshots?
+
+More specifically, did you take care to never access your image from
+more than one process (except if both are read-only)? It happens
+occasionally that people use 'qemu-img snapshot' while the VM is
+running. This is wrong and can corrupt the image.
+
+Kevin
+
+On 04/07/2015 03:33 AM, Kevin Wolf wrote:
+> More specifically, did you take care to never access your image from
+> more than one process (except if both are read-only)? It happens
+> occasionally that people use 'qemu-img snapshot' while the VM is
+> running. This is wrong and can corrupt the image.
+Since this has been done by more than one person, I'm wondering if there
+is something we can do in the qcow2 format itself to make it harder for
+the casual user to cause corruption.  Maybe if we declare some bit or
+extension header for an image open for writing, which other readers can
+use as a warning ("this image is being actively modified; reading it may
+fail"), and other writers can use to deny access ("another process is
+already modifying this image"), where a writer should set that bit
+before writing anything else in the file, then clear it on exit.  Of
+course, you'd need a way to override the bit to actively clear it to
+recover from the case of a writer dying unexpectedly without resetting
+it normally.  And it won't help the case of a reader opening the file
+first, followed by a writer, where the reader could still get thrown off
+track.
+
+Or maybe we could document in the qcow2 format that all readers and
+writers should attempt to obtain the appropriate flock() permissions [or
+other appropriate advisory locking scheme] over the file header, so that
+cooperating processes that both use advisory locking will know when the
+file is in use by another process.
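+
+(A minimal sketch of that flock()-based advisory locking idea follows — an
+illustration of the suggestion, not an existing QEMU or qcow2 mechanism; the
+helper name is made up:)
+
+    #include <stdio.h>
+    #include <fcntl.h>
+    #include <unistd.h>
+    #include <sys/file.h>
+
+    /* Try to take an exclusive advisory lock on an image before writing it.
+     * Returns the open fd on success, or -1 if another cooperating process
+     * already holds the lock (readers could take LOCK_SH instead). */
+    static int open_image_locked(const char *path)
+    {
+        int fd = open(path, O_RDWR);
+        if (fd < 0) {
+            return -1;
+        }
+        if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
+            fprintf(stderr, "%s is in use by another process\n", path);
+            close(fd);
+            return -1;
+        }
+        return fd;  /* the lock is released when the fd is closed */
+    }
+
+    int main(int argc, char **argv)
+    {
+        int fd = open_image_locked(argc > 1 ? argv[1] : "disk.qcow2");
+        if (fd >= 0) {
+            puts("got exclusive lock");
+            close(fd);
+        }
+        return 0;
+    }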
+
+-- 
+Eric Blake   eblake redhat com    +1-919-301-3266
+Libvirt virtualization library
+http://libvirt.org
+signature.asc
+Description:
+OpenPGP digital signature
+
+
+I created/deleted the snapshots using the QMP commands "snapshot_blkdev_internal"/"snapshot_delete_blkdev_internal", and to avoid the case you mentioned above, I have added flock() locking in qemu_open().
+Here is what happens when qemu-img snapshot is run against a running VM:
+Diskfile:/sf/data/36c81f660e38b3b001b183da50b477d89_f8bc123b3e74/images/host-f8bc123b3e74/4a8d8728fcdc/Devried30030.vm/vm-disk-1.qcow2 is used! errno=Resource temporarily unavailable
+Could the two entries end up pointing to the same cluster because the refcount of an in-use cluster unexpectedly dropped to 0 and the cluster was then allocated again?
+If the image was not accessed from more than one process, are there any other scenarios I can test for?
+Thanks
+leijian
+From: Eric Blake
+Date: 2015-04-07 23:27
+To: Kevin Wolf; leijian
+CC: qemu-devel; stefanha
+Subject: Re: [Qemu-devel] [Snapshot Bug?]Qcow2 meta data corruption
+On 04/07/2015 03:33 AM, Kevin Wolf wrote:
+> More specifically, did you take care to never access your image from
+> more than one process (except if both are read-only)? It happens
+> occasionally that people use 'qemu-img snapshot' while the VM is
+> running. This is wrong and can corrupt the image.
+Since this has been done by more than one person, I'm wondering if there
+is something we can do in the qcow2 format itself to make it harder for
+the casual user to cause corruption.  Maybe if we declare some bit or
+extension header for an image open for writing, which other readers can
+use as a warning ("this image is being actively modified; reading it may
+fail"), and other writers can use to deny access ("another process is
+already modifying this image"), where a writer should set that bit
+before writing anything else in the file, then clear it on exit.  Of
+course, you'd need a way to override the bit to actively clear it to
+recover from the case of a writer dying unexpectedly without resetting
+it normally.  And it won't help the case of a reader opening the file
+first, followed by a writer, where the reader could still get thrown off
+track.
+Or maybe we could document in the qcow2 format that all readers and
+writers should attempt to obtain the appropriate flock() permissions [or
+other appropriate advisory locking scheme] over the file header, so that
+cooperating processes that both use advisory locking will know when the
+file is in use by another process.
+--
+Eric Blake   eblake redhat com    +1-919-301-3266
+Libvirt virtualization library http://libvirt.org
+
diff --git a/results/classifier/009/other/35170175 b/results/classifier/009/other/35170175
new file mode 100644
index 000000000..50cc6a327
--- /dev/null
+++ b/results/classifier/009/other/35170175
@@ -0,0 +1,531 @@
+other: 0.933
+permissions: 0.870
+graphic: 0.844
+debug: 0.830
+performance: 0.818
+semantic: 0.798
+device: 0.787
+boot: 0.719
+KVM: 0.709
+files: 0.706
+PID: 0.699
+socket: 0.681
+network: 0.666
+vnc: 0.633
+
+[Qemu-devel] [BUG] QEMU crashes with dpdk virtio pmd
+
+QEMU crashes, with this pre-condition:
+the VM XML is configured with multiqueue, and the VM's virtio-net driver
+supports multi-queue.
+
+reproduce steps:
+i. start dpdk testpmd in VM with the virtio nic
+ii. stop testpmd
+iii. reboot the VM
+
+This commit "f9d6dbf0  remove virtio queues if the guest doesn't support 
+multiqueue" is introduced.
+
+Qemu version: QEMU emulator version 2.9.50 (v2.9.0-137-g32c7e0a)
+VM DPDK version:  DPDK-1.6.1
+
+Call Trace:
+#0  0x00007f60881fe5d7 in raise () from /usr/lib64/libc.so.6
+#1  0x00007f60881ffcc8 in abort () from /usr/lib64/libc.so.6
+#2  0x00007f608823e2f7 in __libc_message () from /usr/lib64/libc.so.6
+#3  0x00007f60882456d3 in _int_free () from /usr/lib64/libc.so.6
+#4  0x00007f608900158f in g_free () from /usr/lib64/libglib-2.0.so.0
+#5  0x00007f6088fea32c in iter_remove_or_steal () from 
+/usr/lib64/libglib-2.0.so.0
+#6  0x00007f608edc0986 in object_property_del_all (obj=0x7f6091e74800) at 
+qom/object.c:410
+#7  object_finalize (data=0x7f6091e74800) at qom/object.c:467
+#8  object_unref (address@hidden) at qom/object.c:903
+#9  0x00007f608eaf1fd3 in phys_section_destroy (mr=0x7f6091e74800) at 
+git/qemu/exec.c:1154
+#10 phys_sections_free (map=0x7f6090b72bb0) at git/qemu/exec.c:1163
+#11 address_space_dispatch_free (d=0x7f6090b72b90) at git/qemu/exec.c:2514
+#12 0x00007f608ee91ace in call_rcu_thread (opaque=<optimized out>) at 
+util/rcu.c:272
+#13 0x00007f6089b0ddc5 in start_thread () from /usr/lib64/libpthread.so.0
+#14 0x00007f60882bf71d in clone () from /usr/lib64/libc.so.6
+
+Call Trace:
+#0  0x00007fdccaeb9790 in ?? ()
+#1  0x00007fdcd82d09fc in object_property_del_all (obj=0x7fdcdb8acf60) at 
+qom/object.c:405
+#2  object_finalize (data=0x7fdcdb8acf60) at qom/object.c:467
+#3  object_unref (address@hidden) at qom/object.c:903
+#4  0x00007fdcd8001fd3 in phys_section_destroy (mr=0x7fdcdb8acf60) at 
+git/qemu/exec.c:1154
+#5  phys_sections_free (map=0x7fdcdc86aa00) at git/qemu/exec.c:1163
+#6  address_space_dispatch_free (d=0x7fdcdc86a9e0) at git/qemu/exec.c:2514
+#7  0x00007fdcd83a1ace in call_rcu_thread (opaque=<optimized out>) at 
+util/rcu.c:272
+#8  0x00007fdcd301ddc5 in start_thread () from /usr/lib64/libpthread.so.0
+#9  0x00007fdcd17cf71d in clone () from /usr/lib64/libc.so.6
+
+The q->tx_bh is freed in virtio_net_del_queue() when the virtio queues are
+removed because the guest doesn't support multiqueue. But it might still be
+referenced by others (e.g. virtio_net_set_status()), so it needs to be set to
+NULL.
+
+diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
+index 7d091c9..98bd683 100644
+--- a/hw/net/virtio-net.c
++++ b/hw/net/virtio-net.c
+@@ -1522,9 +1522,12 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
+     if (q->tx_timer) {
+         timer_del(q->tx_timer);
+         timer_free(q->tx_timer);
++        q->tx_timer = NULL;
+     } else {
+         qemu_bh_delete(q->tx_bh);
++        q->tx_bh = NULL;
+     }
++    q->tx_waiting = 0;
+     virtio_del_queue(vdev, index * 2 + 1);
+ }
+
+
+On 2017-04-25 19:37, wangyunjian wrote:
+The q->tx_bh will free in virtio_net_del_queue() function, when remove virtio 
+queues
+if the guest doesn't support multiqueue. But it might be still referenced by 
+others (eg . virtio_net_set_status()),
+which need so set NULL.
+
+diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
+index 7d091c9..98bd683 100644
+--- a/hw/net/virtio-net.c
++++ b/hw/net/virtio-net.c
+@@ -1522,9 +1522,12 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
+      if (q->tx_timer) {
+          timer_del(q->tx_timer);
+          timer_free(q->tx_timer);
++        q->tx_timer = NULL;
+      } else {
+          qemu_bh_delete(q->tx_bh);
++        q->tx_bh = NULL;
+      }
++    q->tx_waiting = 0;
+      virtio_del_queue(vdev, index * 2 + 1);
+  }
+Thanks a lot for the fix.
+
+Two questions:
+- If virtio_net_set_status() is the only function that may access tx_bh,
+it looks like setting tx_waiting to zero is sufficient?
+- Can you post a formal patch for this?
+
+Thanks
+
+CCing Paolo and Stefan, since it has a relationship with bh in QEMU.
+
+> -----Original Message-----
+> From: Jason Wang [mailto:address@hidden]
+>
+> On 2017-04-25 19:37, wangyunjian wrote:
+>> The q->tx_bh will free in virtio_net_del_queue() function, when remove virtio queues
+>> if the guest doesn't support multiqueue. But it might be still referenced by
+>> others (eg . virtio_net_set_status()),
+>> which need so set NULL.
+>>
+>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
+>> index 7d091c9..98bd683 100644
+>> --- a/hw/net/virtio-net.c
+>> +++ b/hw/net/virtio-net.c
+>> @@ -1522,9 +1522,12 @@ static void virtio_net_del_queue(VirtIONet *n, int index)
+>>       if (q->tx_timer) {
+>>           timer_del(q->tx_timer);
+>>           timer_free(q->tx_timer);
+>> +        q->tx_timer = NULL;
+>>       } else {
+>>           qemu_bh_delete(q->tx_bh);
+>> +        q->tx_bh = NULL;
+>>       }
+>> +    q->tx_waiting = 0;
+>>       virtio_del_queue(vdev, index * 2 + 1);
+>>   }
+>
+> Thanks a lot for the fix.
+>
+> Two questions:
+>
+> - If virtio_net_set_status() is the only function that may access tx_bh,
+> it looks like setting tx_waiting to zero is sufficient?
+Currently yes, but we cannot guarantee that it works for all scenarios, so we
+set tx_bh and tx_timer to NULL to avoid possibly accessing a wild pointer,
+which is the common practice for bh usage in QEMU.
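+
+(To illustrate the pattern being discussed — a deleted bottom half left behind
+as a dangling pointer versus clearing it — here is a small standalone sketch.
+The structures and names are invented for illustration; this is not QEMU code:)
+
+    #include <stdio.h>
+    #include <stdlib.h>
+
+    /* Stand-in for a scheduled bottom-half object. */
+    struct bh {
+        void (*fn)(void *opaque);
+        void *opaque;
+    };
+
+    struct queue {
+        struct bh *tx_bh;
+        int tx_waiting;
+    };
+
+    /* Mirrors the shape of the fix: after deleting the bh, clear the pointer
+     * and the "work pending" flag so a later status change cannot touch a
+     * freed object. */
+    static void del_queue(struct queue *q)
+    {
+        free(q->tx_bh);
+        q->tx_bh = NULL;    /* without this, q->tx_bh dangles */
+        q->tx_waiting = 0;  /* nothing can be pending on a deleted queue */
+    }
+
+    static void set_status(struct queue *q)
+    {
+        /* Guarding on both the flag and the pointer avoids a use-after-free. */
+        if (q->tx_waiting && q->tx_bh) {
+            q->tx_bh->fn(q->tx_bh->opaque);
+        }
+    }
+
+    int main(void)
+    {
+        struct queue q = { calloc(1, sizeof(struct bh)), 1 };
+        del_queue(&q);
+        set_status(&q);  /* safe: does nothing instead of touching freed memory */
+        printf("ok\n");
+        return 0;
+    }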
+
+I have another question about the root cause of this issue.
+
+The trace below is the path that sets tx_waiting to one in
+virtio_net_handle_tx_bh():
+
+Breakpoint 1, virtio_net_handle_tx_bh (vdev=0x0, vq=0x7f335ad13900) at 
+/data/wyj/git/qemu/hw/net/virtio-net.c:1398
+1398    {
+(gdb) bt
+#0  virtio_net_handle_tx_bh (vdev=0x0, vq=0x7f335ad13900) at 
+/data/wyj/git/qemu/hw/net/virtio-net.c:1398
+#1  0x00007f3357bddf9c in virtio_bus_set_host_notifier (bus=<optimized out>, 
+address@hidden, address@hidden) at hw/virtio/virtio-bus.c:297
+#2  0x00007f3357a0055d in vhost_dev_disable_notifiers (address@hidden, 
+address@hidden) at /data/wyj/git/qemu/hw/virtio/vhost.c:1422
+#3  0x00007f33579e3373 in vhost_net_stop_one (net=0x7f335ad84dc0, 
+dev=0x7f335c6f5f90) at /data/wyj/git/qemu/hw/net/vhost_net.c:289
+#4  0x00007f33579e385b in vhost_net_stop (address@hidden, ncs=<optimized out>, 
+address@hidden) at /data/wyj/git/qemu/hw/net/vhost_net.c:367
+#5  0x00007f33579e15de in virtio_net_vhost_status (status=<optimized out>, 
+n=0x7f335c6f5f90) at /data/wyj/git/qemu/hw/net/virtio-net.c:176
+#6  virtio_net_set_status (vdev=0x7f335c6f5f90, status=0 '\000') at 
+/data/wyj/git/qemu/hw/net/virtio-net.c:250
+#7  0x00007f33579f8dc6 in virtio_set_status (address@hidden, address@hidden 
+'\000') at /data/wyj/git/qemu/hw/virtio/virtio.c:1146
+#8  0x00007f3357bdd3cc in virtio_ioport_write (val=0, addr=18, 
+opaque=0x7f335c6eda80) at hw/virtio/virtio-pci.c:387
+#9  virtio_pci_config_write (opaque=0x7f335c6eda80, addr=18, val=0, 
+size=<optimized out>) at hw/virtio/virtio-pci.c:511
+#10 0x00007f33579b2155 in memory_region_write_accessor (mr=0x7f335c6ee470, 
+addr=18, value=<optimized out>, size=1, shift=<optimized out>, mask=<optimized 
+out>, attrs=...) at /data/wyj/git/qemu/memory.c:526
+#11 0x00007f33579af2e9 in access_with_adjusted_size (address@hidden, 
+address@hidden, address@hidden, access_size_min=<optimized out>, 
+access_size_max=<optimized out>, address@hidden
+    0x7f33579b20f0 <memory_region_write_accessor>, address@hidden, 
+address@hidden) at /data/wyj/git/qemu/memory.c:592
+#12 0x00007f33579b2e15 in memory_region_dispatch_write (address@hidden, 
+address@hidden, data=0, address@hidden, address@hidden) at 
+/data/wyj/git/qemu/memory.c:1319
+#13 0x00007f335796cd93 in address_space_write_continue (mr=0x7f335c6ee470, l=1, 
+addr1=18, len=1, buf=0x7f335773d000 "", attrs=..., addr=49170, 
+as=0x7f3358317060 <address_space_io>) at /data/wyj/git/qemu/exec.c:2834
+#14 address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., 
+buf=<optimized out>, len=<optimized out>) at /data/wyj/git/qemu/exec.c:2879
+#15 0x00007f335796d3ad in address_space_rw (as=<optimized out>, address@hidden, 
+attrs=..., address@hidden, buf=<optimized out>, address@hidden, address@hidden) 
+at /data/wyj/git/qemu/exec.c:2981
+#16 0x00007f33579ae226 in kvm_handle_io (count=1, size=1, direction=<optimized 
+out>, data=<optimized out>, attrs=..., port=49170) at 
+/data/wyj/git/qemu/kvm-all.c:1803
+#17 kvm_cpu_exec (address@hidden) at /data/wyj/git/qemu/kvm-all.c:2032
+#18 0x00007f335799b632 in qemu_kvm_cpu_thread_fn (arg=0x7f335ae82070) at 
+/data/wyj/git/qemu/cpus.c:1118
+#19 0x00007f3352983dc5 in start_thread () from /usr/lib64/libpthread.so.0
+#20 0x00007f335113571d in clone () from /usr/lib64/libc.so.6
+
+It calls qemu_bh_schedule(q->tx_bh) at the bottom of virtio_net_handle_tx_bh(),
+but I don't know why virtio_net_tx_bh() is never invoked, which is why
+q->tx_waiting is still non-zero.
+[PS: we added logs in virtio_net_tx_bh() to verify that.]
+
+Some other information: 
+
+It won't crash if we don't use vhost-net.
+
+
+Thanks,
+-Gonglei
+
+> - Can you post a formal patch for this?
+>
+> Thanks
+>
+> > From: wangyunjian
+> > Sent: Monday, April 24, 2017 6:10 PM
+> > To: address@hidden; Michael S. Tsirkin <address@hidden>; 'Jason Wang' <address@hidden>
+> > Cc: wangyunjian <address@hidden>; caihe <address@hidden>
+> > Subject: [Qemu-devel][BUG] QEMU crashes with dpdk virtio pmd
+> >
+> > Qemu crashes, with pre-condition:
+> > vm xml config with multiqueue, and the vm's driver virtio-net support
+> > multi-queue
+> >
+> > reproduce steps:
+> > i. start dpdk testpmd in VM with the virtio nic
+> > ii. stop testpmd
+> > iii. reboot the VM
+> >
+> > This commit "f9d6dbf0  remove virtio queues if the guest doesn't support
+> > multiqueue" is introduced.
+> >
+> > Qemu version: QEMU emulator version 2.9.50 (v2.9.0-137-g32c7e0a)
+> > VM DPDK version:  DPDK-1.6.1
+> >
+> > Call Trace:
+> > #0  0x00007f60881fe5d7 in raise () from /usr/lib64/libc.so.6
+> > #1  0x00007f60881ffcc8 in abort () from /usr/lib64/libc.so.6
+> > #2  0x00007f608823e2f7 in __libc_message () from /usr/lib64/libc.so.6
+> > #3  0x00007f60882456d3 in _int_free () from /usr/lib64/libc.so.6
+> > #4  0x00007f608900158f in g_free () from /usr/lib64/libglib-2.0.so.0
+> > #5  0x00007f6088fea32c in iter_remove_or_steal () from /usr/lib64/libglib-2.0.so.0
+> > #6  0x00007f608edc0986 in object_property_del_all (obj=0x7f6091e74800) at qom/object.c:410
+> > #7  object_finalize (data=0x7f6091e74800) at qom/object.c:467
+> > #8  object_unref (address@hidden) at qom/object.c:903
+> > #9  0x00007f608eaf1fd3 in phys_section_destroy (mr=0x7f6091e74800) at git/qemu/exec.c:1154
+> > #10 phys_sections_free (map=0x7f6090b72bb0) at git/qemu/exec.c:1163
+> > #11 address_space_dispatch_free (d=0x7f6090b72b90) at git/qemu/exec.c:2514
+> > #12 0x00007f608ee91ace in call_rcu_thread (opaque=<optimized out>) at util/rcu.c:272
+> > #13 0x00007f6089b0ddc5 in start_thread () from /usr/lib64/libpthread.so.0
+> > #14 0x00007f60882bf71d in clone () from /usr/lib64/libc.so.6
+> >
+> > Call Trace:
+> > #0  0x00007fdccaeb9790 in ?? ()
+> > #1  0x00007fdcd82d09fc in object_property_del_all (obj=0x7fdcdb8acf60) at qom/object.c:405
+> > #2  object_finalize (data=0x7fdcdb8acf60) at qom/object.c:467
+> > #3  object_unref (address@hidden) at qom/object.c:903
+> > #4  0x00007fdcd8001fd3 in phys_section_destroy (mr=0x7fdcdb8acf60) at git/qemu/exec.c:1154
+> > #5  phys_sections_free (map=0x7fdcdc86aa00) at git/qemu/exec.c:1163
+> > #6  address_space_dispatch_free (d=0x7fdcdc86a9e0) at git/qemu/exec.c:2514
+> > #7  0x00007fdcd83a1ace in call_rcu_thread (opaque=<optimized out>) at util/rcu.c:272
+> > #8  0x00007fdcd301ddc5 in start_thread () from /usr/lib64/libpthread.so.0
+> > #9  0x00007fdcd17cf71d in clone () from /usr/lib64/libc.so.6
+
+On 25/04/2017 14:02, Jason Wang wrote:
+> Thanks a lot for the fix.
+>
+> Two questions:
+>
+> - If virtio_net_set_status() is the only function that may access tx_bh,
+> it looks like setting tx_waiting to zero is sufficient?
+
+I think clearing tx_bh is better anyway, as leaving a dangling pointer
+is not very hygienic.
+
+Paolo
+
+> - Can you post a formal patch for this?
+
diff --git a/results/classifier/009/other/42974450 b/results/classifier/009/other/42974450
new file mode 100644
index 000000000..fe2110a2f
--- /dev/null
+++ b/results/classifier/009/other/42974450
@@ -0,0 +1,439 @@
+other: 0.940
+debug: 0.924
+device: 0.921
+permissions: 0.918
+semantic: 0.917
+performance: 0.914
+PID: 0.913
+boot: 0.909
+network: 0.907
+graphic: 0.906
+KVM: 0.901
+socket: 0.899
+files: 0.877
+vnc: 0.869
+
+[Bug Report] Possible Missing Endianness Conversion
+
+The virtio packed virtqueue support patch[1] converts endianness with these
+lines:
+
+virtio_tswap16s(vdev, &e->off_wrap);
+virtio_tswap16s(vdev, &e->flags);
+
+However, neither of these conversion statements is present in the latest
+QEMU code here[2].
+
+Is this intentional?
+
+[1]:
+https://mail.gnu.org/archive/html/qemu-block/2019-10/msg01492.html
+[2]:
+https://elixir.bootlin.com/qemu/latest/source/hw/virtio/virtio.c#L314
+
+CCing Jason.
+
+On Mon, Jun 24, 2024 at 4:30 PM Xoykie <xoykie@gmail.com> wrote:
+>
+>
+The virtio packed virtqueue support patch[1] suggests converting
+>
+endianness by lines:
+>
+>
+virtio_tswap16s(vdev, &e->off_wrap);
+>
+virtio_tswap16s(vdev, &e->flags);
+>
+>
+Though both of these conversion statements aren't present in the
+>
+latest qemu code here[2]
+>
+>
+Is this intentional?
+Good catch!
+
+It looks like it was removed (maybe by mistake) by commit
+d152cdd6f6 ("virtio: use virtio accessor to access packed event")
+
+Jason can you confirm that?
+
+Thanks,
+Stefano
+
+>
+>
+[1]:
+https://mail.gnu.org/archive/html/qemu-block/2019-10/msg01492.html
+>
+[2]:
+https://elixir.bootlin.com/qemu/latest/source/hw/virtio/virtio.c#L314
+>
+
+On Mon, 24 Jun 2024 at 16:11, Stefano Garzarella <sgarzare@redhat.com> wrote:
+>
+>
+CCing Jason.
+>
+>
+On Mon, Jun 24, 2024 at 4:30 PM Xoykie <xoykie@gmail.com> wrote:
+>
+>
+>
+> The virtio packed virtqueue support patch[1] suggests converting
+>
+> endianness by lines:
+>
+>
+>
+> virtio_tswap16s(vdev, &e->off_wrap);
+>
+> virtio_tswap16s(vdev, &e->flags);
+>
+>
+>
+> Though both of these conversion statements aren't present in the
+>
+> latest qemu code here[2]
+>
+>
+>
+> Is this intentional?
+>
+>
+Good catch!
+>
+>
+It looks like it was removed (maybe by mistake) by commit
+>
+d152cdd6f6 ("virtio: use virtio accessor to access packed event")
+That commit changes from:
+
+-    address_space_read_cached(cache, off_off, &e->off_wrap,
+-                              sizeof(e->off_wrap));
+-    virtio_tswap16s(vdev, &e->off_wrap);
+
+which does a raw 2-byte read and then swaps the bytes
+depending on the host endianness and the value of
+virtio_access_is_big_endian()
+
+to this:
+
++    e->off_wrap = virtio_lduw_phys_cached(vdev, cache, off_off);
+
+virtio_lduw_phys_cached() is a small function which calls
+either lduw_be_phys_cached() or lduw_le_phys_cached()
+depending on the value of virtio_access_is_big_endian().
+(And lduw_be_phys_cached() and lduw_le_phys_cached() do
+the right thing for the host-endianness to do a "load
+a specifically big or little endian 16-bit value".)
+
+Which is to say that because we use a load/store function that's
+explicit about the size of the data type it is accessing, the
+function itself can handle doing the load as big or little
+endian, rather than the calling code having to do a manual swap after
+it has done a load-as-bag-of-bytes. This is generally preferable
+as it's less error-prone.
+
+(Explicit swap-after-loading still has a place where the
+code is doing a load of a whole structure out of the
+guest and then swapping each struct field after the fact,
+because it means we can do a single load-from-guest-memory
+rather than a whole sequence of calls all the way down
+through the memory subsystem.)
+
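+As a rough, self-contained illustration of the two styles (plain C, not
+QEMU's actual helpers; the target_is_big_endian flag stands in for
+virtio_access_is_big_endian()):
+
+    #include <stdint.h>
+    #include <string.h>
+
+    /* style 1: bag-of-bytes load, then a manual swap by the caller */
+    static uint16_t load_then_swap(const uint8_t *p, int host_is_big_endian,
+                                   int target_is_big_endian)
+    {
+        uint16_t v;
+        memcpy(&v, p, sizeof(v));                 /* raw 2-byte read */
+        if (host_is_big_endian != target_is_big_endian) {
+            v = (uint16_t)((v << 8) | (v >> 8));  /* manual byte swap */
+        }
+        return v;
+    }
+
+    /* style 2: a sized accessor that knows the endianness itself,
+     * so the caller never has to think about byte order */
+    static uint16_t load_u16(const uint8_t *p, int target_is_big_endian)
+    {
+        return target_is_big_endian ? (uint16_t)((p[0] << 8) | p[1])
+                                    : (uint16_t)((p[1] << 8) | p[0]);
+    }
+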
+thanks
+-- PMM
+
+On Mon, Jun 24, 2024 at 04:19:52PM GMT, Peter Maydell wrote:
+On Mon, 24 Jun 2024 at 16:11, Stefano Garzarella <sgarzare@redhat.com> wrote:
+CCing Jason.
+
+On Mon, Jun 24, 2024 at 4:30 PM Xoykie <xoykie@gmail.com> wrote:
+>
+> The virtio packed virtqueue support patch[1] suggests converting
+> endianness by lines:
+>
+> virtio_tswap16s(vdev, &e->off_wrap);
+> virtio_tswap16s(vdev, &e->flags);
+>
+> Though both of these conversion statements aren't present in the
+> latest qemu code here[2]
+>
+> Is this intentional?
+
+Good catch!
+
+It looks like it was removed (maybe by mistake) by commit
+d152cdd6f6 ("virtio: use virtio accessor to access packed event")
+That commit changes from:
+
+-    address_space_read_cached(cache, off_off, &e->off_wrap,
+-                              sizeof(e->off_wrap));
+-    virtio_tswap16s(vdev, &e->off_wrap);
+
+which does a byte read of 2 bytes and then swaps the bytes
+depending on the host endianness and the value of
+virtio_access_is_big_endian()
+
+to this:
+
++    e->off_wrap = virtio_lduw_phys_cached(vdev, cache, off_off);
+
+virtio_lduw_phys_cached() is a small function which calls
+either lduw_be_phys_cached() or lduw_le_phys_cached()
+depending on the value of virtio_access_is_big_endian().
+(And lduw_be_phys_cached() and lduw_le_phys_cached() do
+the right thing for the host-endianness to do a "load
+a specifically big or little endian 16-bit value".)
+
+Which is to say that because we use a load/store function that's
+explicit about the size of the data type it is accessing, the
+function itself can handle doing the load as big or little
+endian, rather than the calling code having to do a manual swap after
+it has done a load-as-bag-of-bytes. This is generally preferable
+as it's less error-prone.
+Thanks for the details!
+
+So, should we also remove `virtio_tswap16s(vdev, &e->flags);` ?
+
+I mean:
+diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
+index 893a072c9d..2e5e67bdb9 100644
+--- a/hw/virtio/virtio.c
++++ b/hw/virtio/virtio.c
+@@ -323,7 +323,6 @@ static void vring_packed_event_read(VirtIODevice *vdev,
+     /* Make sure flags is seen before off_wrap */
+     smp_rmb();
+     e->off_wrap = virtio_lduw_phys_cached(vdev, cache, off_off);
+-    virtio_tswap16s(vdev, &e->flags);
+ }
+
+ static void vring_packed_off_wrap_write(VirtIODevice *vdev,
+
+Thanks,
+Stefano
+(Explicit swap-after-loading still has a place where the
+code is doing a load of a whole structure out of the
+guest and then swapping each struct field after the fact,
+because it means we can do a single load-from-guest-memory
+rather than a whole sequence of calls all the way down
+through the memory subsystem.)
+
+thanks
+-- PMM
+
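+For context, once both fields go through the sized accessors the function
+would read roughly like this (an illustrative sketch based on the discussion
+above, not necessarily the exact upstream code):
+
+    static void vring_packed_event_read(VirtIODevice *vdev,
+                                        MemoryRegionCache *cache,
+                                        VRingPackedDescEvent *e)
+    {
+        hwaddr off_off = offsetof(VRingPackedDescEvent, off_wrap);
+        hwaddr off_flags = offsetof(VRingPackedDescEvent, flags);
+
+        /* each load is already target-endian aware; no extra tswap needed */
+        e->flags = virtio_lduw_phys_cached(vdev, cache, off_flags);
+        /* Make sure flags is seen before off_wrap */
+        smp_rmb();
+        e->off_wrap = virtio_lduw_phys_cached(vdev, cache, off_off);
+    }
+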
+On Tue, 25 Jun 2024 at 08:18, Stefano Garzarella <sgarzare@redhat.com> wrote:
+>
+>
+On Mon, Jun 24, 2024 at 04:19:52PM GMT, Peter Maydell wrote:
+>
+>On Mon, 24 Jun 2024 at 16:11, Stefano Garzarella <sgarzare@redhat.com> wrote:
+>
+>>
+>
+>> CCing Jason.
+>
+>>
+>
+>> On Mon, Jun 24, 2024 at 4:30 PM Xoykie <xoykie@gmail.com> wrote:
+>
+>> >
+>
+>> > The virtio packed virtqueue support patch[1] suggests converting
+>
+>> > endianness by lines:
+>
+>> >
+>
+>> > virtio_tswap16s(vdev, &e->off_wrap);
+>
+>> > virtio_tswap16s(vdev, &e->flags);
+>
+>> >
+>
+>> > Though both of these conversion statements aren't present in the
+>
+>> > latest qemu code here[2]
+>
+>> >
+>
+>> > Is this intentional?
+>
+>>
+>
+>> Good catch!
+>
+>>
+>
+>> It looks like it was removed (maybe by mistake) by commit
+>
+>> d152cdd6f6 ("virtio: use virtio accessor to access packed event")
+>
+>
+>
+>That commit changes from:
+>
+>
+>
+>-    address_space_read_cached(cache, off_off, &e->off_wrap,
+>
+>-                              sizeof(e->off_wrap));
+>
+>-    virtio_tswap16s(vdev, &e->off_wrap);
+>
+>
+>
+>which does a byte read of 2 bytes and then swaps the bytes
+>
+>depending on the host endianness and the value of
+>
+>virtio_access_is_big_endian()
+>
+>
+>
+>to this:
+>
+>
+>
+>+    e->off_wrap = virtio_lduw_phys_cached(vdev, cache, off_off);
+>
+>
+>
+>virtio_lduw_phys_cached() is a small function which calls
+>
+>either lduw_be_phys_cached() or lduw_le_phys_cached()
+>
+>depending on the value of virtio_access_is_big_endian().
+>
+>(And lduw_be_phys_cached() and lduw_le_phys_cached() do
+>
+>the right thing for the host-endianness to do a "load
+>
+>a specifically big or little endian 16-bit value".)
+>
+>
+>
+>Which is to say that because we use a load/store function that's
+>
+>explicit about the size of the data type it is accessing, the
+>
+>function itself can handle doing the load as big or little
+>
+>endian, rather than the calling code having to do a manual swap after
+>
+>it has done a load-as-bag-of-bytes. This is generally preferable
+>
+>as it's less error-prone.
+>
+>
+Thanks for the details!
+>
+>
+So, should we also remove `virtio_tswap16s(vdev, &e->flags);` ?
+>
+>
+I mean:
+>
+diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
+>
+index 893a072c9d..2e5e67bdb9 100644
+>
+--- a/hw/virtio/virtio.c
+>
++++ b/hw/virtio/virtio.c
+>
+@@ -323,7 +323,6 @@ static void vring_packed_event_read(VirtIODevice *vdev,
+>
+/* Make sure flags is seen before off_wrap */
+>
+smp_rmb();
+>
+e->off_wrap = virtio_lduw_phys_cached(vdev, cache, off_off);
+>
+-    virtio_tswap16s(vdev, &e->flags);
+>
+}
+That definitely looks like it's probably not correct...
+
+-- PMM
+
+On Fri, Jun 28, 2024 at 03:53:09PM GMT, Peter Maydell wrote:
+On Tue, 25 Jun 2024 at 08:18, Stefano Garzarella <sgarzare@redhat.com> wrote:
+On Mon, Jun 24, 2024 at 04:19:52PM GMT, Peter Maydell wrote:
+>On Mon, 24 Jun 2024 at 16:11, Stefano Garzarella <sgarzare@redhat.com> wrote:
+>>
+>> CCing Jason.
+>>
+>> On Mon, Jun 24, 2024 at 4:30 PM Xoykie <xoykie@gmail.com> wrote:
+>> >
+>> > The virtio packed virtqueue support patch[1] suggests converting
+>> > endianness by lines:
+>> >
+>> > virtio_tswap16s(vdev, &e->off_wrap);
+>> > virtio_tswap16s(vdev, &e->flags);
+>> >
+>> > Though both of these conversion statements aren't present in the
+>> > latest qemu code here[2]
+>> >
+>> > Is this intentional?
+>>
+>> Good catch!
+>>
+>> It looks like it was removed (maybe by mistake) by commit
+>> d152cdd6f6 ("virtio: use virtio accessor to access packed event")
+>
+>That commit changes from:
+>
+>-    address_space_read_cached(cache, off_off, &e->off_wrap,
+>-                              sizeof(e->off_wrap));
+>-    virtio_tswap16s(vdev, &e->off_wrap);
+>
+>which does a byte read of 2 bytes and then swaps the bytes
+>depending on the host endianness and the value of
+>virtio_access_is_big_endian()
+>
+>to this:
+>
+>+    e->off_wrap = virtio_lduw_phys_cached(vdev, cache, off_off);
+>
+>virtio_lduw_phys_cached() is a small function which calls
+>either lduw_be_phys_cached() or lduw_le_phys_cached()
+>depending on the value of virtio_access_is_big_endian().
+>(And lduw_be_phys_cached() and lduw_le_phys_cached() do
+>the right thing for the host-endianness to do a "load
+>a specifically big or little endian 16-bit value".)
+>
+>Which is to say that because we use a load/store function that's
+>explicit about the size of the data type it is accessing, the
+>function itself can handle doing the load as big or little
+>endian, rather than the calling code having to do a manual swap after
+>it has done a load-as-bag-of-bytes. This is generally preferable
+>as it's less error-prone.
+
+Thanks for the details!
+
+So, should we also remove `virtio_tswap16s(vdev, &e->flags);` ?
+
+I mean:
+diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
+index 893a072c9d..2e5e67bdb9 100644
+--- a/hw/virtio/virtio.c
++++ b/hw/virtio/virtio.c
+@@ -323,7 +323,6 @@ static void vring_packed_event_read(VirtIODevice *vdev,
+      /* Make sure flags is seen before off_wrap */
+      smp_rmb();
+      e->off_wrap = virtio_lduw_phys_cached(vdev, cache, off_off);
+-    virtio_tswap16s(vdev, &e->flags);
+  }
+That definitely looks like it's probably not correct...
+Yeah, I just sent that patch:
+https://lore.kernel.org/qemu-devel/20240701075208.19634-1-sgarzare@redhat.com
+We can continue the discussion there.
+
+Thanks,
+Stefano
+
diff --git a/results/classifier/009/other/55367348 b/results/classifier/009/other/55367348
new file mode 100644
index 000000000..571cb425b
--- /dev/null
+++ b/results/classifier/009/other/55367348
@@ -0,0 +1,542 @@
+other: 0.626
+permissions: 0.595
+device: 0.586
+PID: 0.559
+semantic: 0.555
+performance: 0.546
+graphic: 0.532
+network: 0.518
+debug: 0.516
+socket: 0.501
+files: 0.490
+boot: 0.486
+KVM: 0.470
+vnc: 0.465
+
+[Qemu-devel] [Bug] Docs build fails at interop.rst
+
+https://paste.fedoraproject.org/paste/kOPx4jhtUli---TmxSLrlw
+running python3-sphinx-2.0.1-1.fc31.noarch on Fedora release 31
+(Rawhide)
+
+uname -a
+Linux iouring 5.1.0-0.rc6.git3.1.fc31.x86_64 #1 SMP Thu Apr 25 14:25:32
+UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
+
+Reverting commit 90edef80a0852cf8a3d2668898ee40e8970e431
+allows the build to complete.
+
+Regards
+Aarushi Mehta
+
+On 5/20/19 7:30 AM, Aarushi Mehta wrote:
+>
+https://paste.fedoraproject.org/paste/kOPx4jhtUli---TmxSLrlw
+>
+running python3-sphinx-2.0.1-1.fc31.noarch on Fedora release 31
+>
+(Rawhide)
+>
+>
+uname - a
+>
+Linux iouring 5.1.0-0.rc6.git3.1.fc31.x86_64 #1 SMP Thu Apr 25 14:25:32
+>
+UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
+>
+>
+Reverting commmit 90edef80a0852cf8a3d2668898ee40e8970e431
+>
+allows for the build to occur
+>
+>
+Regards
+>
+Aarushi Mehta
+>
+>
+Ah, dang. The blocks aren't strictly conforming json, but the version I
+tested this under didn't seem to care. Your version is much newer. (I
+was using 1.7 as provided by Fedora 29.)
+
+For now, try reverting 9e5b6cb87db66dfb606604fe6cf40e5ddf1ef0e7 instead,
+which should at least turn off the "warnings as errors" option, but I
+don't think that reverting -n will turn off this warning.
+
+I'll try to get ahold of this newer version and see if I can't fix it
+more appropriately.
+
+--js
+
+On 5/20/19 12:37 PM, John Snow wrote:
+>
+>
+>
+On 5/20/19 7:30 AM, Aarushi Mehta wrote:
+>
+>
+https://paste.fedoraproject.org/paste/kOPx4jhtUli---TmxSLrlw
+>
+> running python3-sphinx-2.0.1-1.fc31.noarch on Fedora release 31
+>
+> (Rawhide)
+>
+>
+>
+> uname - a
+>
+> Linux iouring 5.1.0-0.rc6.git3.1.fc31.x86_64 #1 SMP Thu Apr 25 14:25:32
+>
+> UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
+>
+>
+>
+> Reverting commmit 90edef80a0852cf8a3d2668898ee40e8970e431
+>
+> allows for the build to occur
+>
+>
+>
+> Regards
+>
+> Aarushi Mehta
+>
+>
+>
+>
+>
+>
+Ah, dang. The blocks aren't strictly conforming json, but the version I
+>
+tested this under didn't seem to care. Your version is much newer. (I
+>
+was using 1.7 as provided by Fedora 29.)
+>
+>
+For now, try reverting 9e5b6cb87db66dfb606604fe6cf40e5ddf1ef0e7 instead,
+>
+which should at least turn off the "warnings as errors" option, but I
+>
+don't think that reverting -n will turn off this warning.
+>
+>
+I'll try to get ahold of this newer version and see if I can't fix it
+>
+more appropriately.
+>
+>
+--js
+>
+...Sigh, okay.
+
+So, I am still not actually sure what changed from pygments 2.2 and
+sphinx 1.7 to pygments 2.4 and sphinx 2.0.1, but it appears as if Sphinx
+by default always tries to do add a filter to the pygments lexer that
+raises an error on highlighting failure, instead of the default behavior
+which is to just highlight those errors in the output. There is no
+option to Sphinx that I am aware of to retain this lexing behavior.
+(Effectively, it's strict or nothing.)
+
+This approach, apparently, is broken in Sphinx 1.7/Pygments 2.2, so the
+build works with our malformed json.
+
+There are a few options:
+
+1. Update conf.py to ignore these warnings (and all future lexing
+errors), and settle for the fact that there will be no QMP highlighting
+wherever we use the directionality indicators ('->', '<-').
+
+2. Update bitmaps.rst to remove the directionality indicators.
+
+3. Update bitmaps.rst to format the QMP blocks as raw text instead of JSON.
+
+4. Update bitmaps.rst to remove the "json" specification from the code
+block. This will cause sphinx to "guess" the formatting, and the
+pygments guesser will decide it's Python3.
+
+This will parse well enough, but will mis-highlight 'true' and 'false'
+which are not python keywords. This approach may break in the future if
+the Python3 lexer is upgraded to be stricter (because '->' and '<-' are
+still invalid), and leaves us at the mercy of both the guesser and the
+lexer.
+
+I'm not actually sure what I dislike the least; I think I dislike #1 the
+most. #4 gets us most of what we want but is perhaps porcelain.
+
+I suspect if we attempt to move more of our documentation to ReST and
+Sphinx that we will need to answer for ourselves how we intend to
+document QMP code flow examples.
+
+--js
+
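+(For concreteness, option 3 above would amount to marking those blocks as
+plain text so pygments never tries to parse them, e.g. in bitmaps.rst:
+
+    .. code-block:: none
+
+        -> { "execute": "query-block" }
+        <- { "return": [] }
+
+while option 4 would drop the language marker entirely and rely on the
+guesser. The command/reply shown here is illustrative only.)
+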
+On Mon, May 20, 2019 at 05:25:28PM -0400, John Snow wrote:
+>
+>
+>
+On 5/20/19 12:37 PM, John Snow wrote:
+>
+>
+>
+>
+>
+> On 5/20/19 7:30 AM, Aarushi Mehta wrote:
+>
+>>
+https://paste.fedoraproject.org/paste/kOPx4jhtUli---TmxSLrlw
+>
+>> running python3-sphinx-2.0.1-1.fc31.noarch on Fedora release 31
+>
+>> (Rawhide)
+>
+>>
+>
+>> uname - a
+>
+>> Linux iouring 5.1.0-0.rc6.git3.1.fc31.x86_64 #1 SMP Thu Apr 25 14:25:32
+>
+>> UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
+>
+>>
+>
+>> Reverting commmit 90edef80a0852cf8a3d2668898ee40e8970e431
+>
+>> allows for the build to occur
+>
+>>
+>
+>> Regards
+>
+>> Aarushi Mehta
+>
+>>
+>
+>>
+>
+>
+>
+> Ah, dang. The blocks aren't strictly conforming json, but the version I
+>
+> tested this under didn't seem to care. Your version is much newer. (I
+>
+> was using 1.7 as provided by Fedora 29.)
+>
+>
+>
+> For now, try reverting 9e5b6cb87db66dfb606604fe6cf40e5ddf1ef0e7 instead,
+>
+> which should at least turn off the "warnings as errors" option, but I
+>
+> don't think that reverting -n will turn off this warning.
+>
+>
+>
+> I'll try to get ahold of this newer version and see if I can't fix it
+>
+> more appropriately.
+>
+>
+>
+> --js
+>
+>
+>
+>
+...Sigh, okay.
+>
+>
+So, I am still not actually sure what changed from pygments 2.2 and
+>
+sphinx 1.7 to pygments 2.4 and sphinx 2.0.1, but it appears as if Sphinx
+>
+by default always tries to do add a filter to the pygments lexer that
+>
+raises an error on highlighting failure, instead of the default behavior
+>
+which is to just highlight those errors in the output. There is no
+>
+option to Sphinx that I am aware of to retain this lexing behavior.
+>
+(Effectively, it's strict or nothing.)
+>
+>
+This approach, apparently, is broken in Sphinx 1.7/Pygments 2.2, so the
+>
+build works with our malformed json.
+>
+>
+There are a few options:
+>
+>
+1. Update conf.py to ignore these warnings (and all future lexing
+>
+errors), and settle for the fact that there will be no QMP highlighting
+>
+wherever we use the directionality indicators ('->', '<-').
+>
+>
+2. Update bitmaps.rst to remove the directionality indicators.
+>
+>
+3. Update bitmaps.rst to format the QMP blocks as raw text instead of JSON.
+>
+>
+4. Update bitmaps.rst to remove the "json" specification from the code
+>
+block. This will cause sphinx to "guess" the formatting, and the
+>
+pygments guesser will decide it's Python3.
+>
+>
+This will parse well enough, but will mis-highlight 'true' and 'false'
+>
+which are not python keywords. This approach may break in the future if
+>
+the Python3 lexer is upgraded to be stricter (because '->' and '<-' are
+>
+still invalid), and leaves us at the mercy of both the guesser and the
+>
+lexer.
+>
+>
+I'm not actually sure what I dislike the least; I think I dislike #1 the
+>
+most. #4 gets us most of what we want but is perhaps porcelain.
+>
+>
+I suspect if we attempt to move more of our documentation to ReST and
+>
+Sphinx that we will need to answer for ourselves how we intend to
+>
+document QMP code flow examples.
+Writing a custom lexer that handles "<-" and "->" was simple (see below).
+
+Now, is it possible to convince Sphinx to register and use a custom lexer?
+
+$ cat > /tmp/lexer.py <<EOF
+from pygments.lexer import RegexLexer, DelegatingLexer
+from pygments.lexers.data import JsonLexer
+import re
+from pygments.token import *
+
+class QMPExampleMarkersLexer(RegexLexer):
+    tokens = {
+        'root': [
+            (r' *-> *', Generic.Prompt),
+            (r' *<- *', Generic.Output),
+        ]
+    }
+
+class QMPExampleLexer(DelegatingLexer):
+    def __init__(self, **options):
+        super(QMPExampleLexer, self).__init__(JsonLexer,
+                                              QMPExampleMarkersLexer,
+                                              Error, **options)
+EOF
+$ pygmentize -l /tmp/lexer.py:QMPExampleLexer -x -f html <<EOF
+    -> {
+         "execute": "drive-backup",
+         "arguments": {
+           "device": "drive0",
+           "bitmap": "bitmap0",
+           "target": "drive0.inc0.qcow2",
+           "format": "qcow2",
+           "sync": "incremental",
+           "mode": "existing"
+         }
+       }
+
+    <- { "return": {} }
+EOF
+<div class="highlight"><pre><span></span><span class="gp">    -&gt; 
+</span><span class="p">{</span>
+         <span class="nt">&quot;execute&quot;</span><span class="p">:</span> 
+<span class="s2">&quot;drive-backup&quot;</span><span class="p">,</span>
+         <span class="nt">&quot;arguments&quot;</span><span class="p">:</span> 
+<span class="p">{</span>
+           <span class="nt">&quot;device&quot;</span><span class="p">:</span> 
+<span class="s2">&quot;drive0&quot;</span><span class="p">,</span>
+           <span class="nt">&quot;bitmap&quot;</span><span class="p">:</span> 
+<span class="s2">&quot;bitmap0&quot;</span><span class="p">,</span>
+           <span class="nt">&quot;target&quot;</span><span class="p">:</span> 
+<span class="s2">&quot;drive0.inc0.qcow2&quot;</span><span class="p">,</span>
+           <span class="nt">&quot;format&quot;</span><span class="p">:</span> 
+<span class="s2">&quot;qcow2&quot;</span><span class="p">,</span>
+           <span class="nt">&quot;sync&quot;</span><span class="p">:</span> 
+<span class="s2">&quot;incremental&quot;</span><span class="p">,</span>
+           <span class="nt">&quot;mode&quot;</span><span class="p">:</span> 
+<span class="s2">&quot;existing&quot;</span>
+         <span class="p">}</span>
+       <span class="p">}</span>
+
+<span class="go">    &lt;- </span><span class="p">{</span> <span 
+class="nt">&quot;return&quot;</span><span class="p">:</span> <span 
+class="p">{}</span> <span class="p">}</span>
+</pre></div>
+$ 
+
+
+-- 
+Eduardo
+
+On 5/20/19 7:04 PM, Eduardo Habkost wrote:
+>
+On Mon, May 20, 2019 at 05:25:28PM -0400, John Snow wrote:
+>
+>
+>
+>
+>
+> On 5/20/19 12:37 PM, John Snow wrote:
+>
+>>
+>
+>>
+>
+>> On 5/20/19 7:30 AM, Aarushi Mehta wrote:
+>
+>>>
+https://paste.fedoraproject.org/paste/kOPx4jhtUli---TmxSLrlw
+>
+>>> running python3-sphinx-2.0.1-1.fc31.noarch on Fedora release 31
+>
+>>> (Rawhide)
+>
+>>>
+>
+>>> uname - a
+>
+>>> Linux iouring 5.1.0-0.rc6.git3.1.fc31.x86_64 #1 SMP Thu Apr 25 14:25:32
+>
+>>> UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
+>
+>>>
+>
+>>> Reverting commmit 90edef80a0852cf8a3d2668898ee40e8970e431
+>
+>>> allows for the build to occur
+>
+>>>
+>
+>>> Regards
+>
+>>> Aarushi Mehta
+>
+>>>
+>
+>>>
+>
+>>
+>
+>> Ah, dang. The blocks aren't strictly conforming json, but the version I
+>
+>> tested this under didn't seem to care. Your version is much newer. (I
+>
+>> was using 1.7 as provided by Fedora 29.)
+>
+>>
+>
+>> For now, try reverting 9e5b6cb87db66dfb606604fe6cf40e5ddf1ef0e7 instead,
+>
+>> which should at least turn off the "warnings as errors" option, but I
+>
+>> don't think that reverting -n will turn off this warning.
+>
+>>
+>
+>> I'll try to get ahold of this newer version and see if I can't fix it
+>
+>> more appropriately.
+>
+>>
+>
+>> --js
+>
+>>
+>
+>
+>
+> ...Sigh, okay.
+>
+>
+>
+> So, I am still not actually sure what changed from pygments 2.2 and
+>
+> sphinx 1.7 to pygments 2.4 and sphinx 2.0.1, but it appears as if Sphinx
+>
+> by default always tries to do add a filter to the pygments lexer that
+>
+> raises an error on highlighting failure, instead of the default behavior
+>
+> which is to just highlight those errors in the output. There is no
+>
+> option to Sphinx that I am aware of to retain this lexing behavior.
+>
+> (Effectively, it's strict or nothing.)
+>
+>
+>
+> This approach, apparently, is broken in Sphinx 1.7/Pygments 2.2, so the
+>
+> build works with our malformed json.
+>
+>
+>
+> There are a few options:
+>
+>
+>
+> 1. Update conf.py to ignore these warnings (and all future lexing
+>
+> errors), and settle for the fact that there will be no QMP highlighting
+>
+> wherever we use the directionality indicators ('->', '<-').
+>
+>
+>
+> 2. Update bitmaps.rst to remove the directionality indicators.
+>
+>
+>
+> 3. Update bitmaps.rst to format the QMP blocks as raw text instead of JSON.
+>
+>
+>
+> 4. Update bitmaps.rst to remove the "json" specification from the code
+>
+> block. This will cause sphinx to "guess" the formatting, and the
+>
+> pygments guesser will decide it's Python3.
+>
+>
+>
+> This will parse well enough, but will mis-highlight 'true' and 'false'
+>
+> which are not python keywords. This approach may break in the future if
+>
+> the Python3 lexer is upgraded to be stricter (because '->' and '<-' are
+>
+> still invalid), and leaves us at the mercy of both the guesser and the
+>
+> lexer.
+>
+>
+>
+> I'm not actually sure what I dislike the least; I think I dislike #1 the
+>
+> most. #4 gets us most of what we want but is perhaps porcelain.
+>
+>
+>
+> I suspect if we attempt to move more of our documentation to ReST and
+>
+> Sphinx that we will need to answer for ourselves how we intend to
+>
+> document QMP code flow examples.
+>
+>
+Writing a custom lexer that handles "<-" and "->" was simple (see below).
+>
+>
+Now, is it possible to convince Sphinx to register and use a custom lexer?
+>
+Spoilers, yes, and I've sent a patch to list. Thanks for your help!
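+
+For readers following along, registering such a lexer happens in a small
+setup hook, roughly like this (a sketch based on Eduardo's example above;
+the qmp_lexer module name is hypothetical, not necessarily what the merged
+patch uses):
+
+    # docs/conf.py (or a Sphinx extension module on sys.path)
+    from qmp_lexer import QMPExampleLexer   # hypothetical module
+
+    def setup(sphinx):
+        # makes ".. code-block:: QMP" use the custom lexer
+        sphinx.add_lexer("QMP", QMPExampleLexer)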
+
diff --git a/results/classifier/009/other/55753058 b/results/classifier/009/other/55753058
new file mode 100644
index 000000000..1e6b20aa7
--- /dev/null
+++ b/results/classifier/009/other/55753058
@@ -0,0 +1,303 @@
+other: 0.734
+KVM: 0.713
+vnc: 0.682
+graphic: 0.630
+device: 0.623
+debug: 0.611
+performance: 0.591
+permissions: 0.580
+semantic: 0.577
+network: 0.525
+PID: 0.512
+boot: 0.478
+socket: 0.462
+files: 0.459
+
+[RESEND][BUG FIX HELP] QEMU main thread endlessly hangs in __ppoll()
+
+Hi Genius,
+I am a user of QEMU v4.2.0 and stuck in an interesting bug, which may still
+exist in the mainline.
+Thanks in advance to heroes who can take a look and share understanding.
+
+The qemu main thread hangs endlessly while handling this qmp command:
+{'execute': 'human-monitor-command', 'arguments':{ 'command-line':
+'drive_del replication0' } }
+and we have the call trace looks like:
+#0 0x00007f3c22045bf6 in __ppoll (fds=0x555611328410, nfds=1,
+timeout=<optimized out>, timeout@entry=0x7ffc56c66db0,
+sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:44
+#1 0x000055561021f415 in ppoll (__ss=0x0, __timeout=0x7ffc56c66db0,
+__nfds=<optimized out>, __fds=<optimized out>)
+at /usr/include/x86_64-linux-gnu/bits/poll2.h:77
+#2 qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>,
+timeout=<optimized out>) at util/qemu-timer.c:348
+#3 0x0000555610221430 in aio_poll (ctx=ctx@entry=0x5556113010f0,
+blocking=blocking@entry=true) at util/aio-posix.c:669
+#4 0x000055561019268d in bdrv_do_drained_begin (poll=true,
+ignore_bds_parents=false, parent=0x0, recursive=false,
+bs=0x55561138b0a0) at block/io.c:430
+#5 bdrv_do_drained_begin (bs=0x55561138b0a0, recursive=<optimized out>,
+parent=0x0, ignore_bds_parents=<optimized out>,
+poll=<optimized out>) at block/io.c:396
+#6 0x000055561017b60b in quorum_del_child (bs=0x55561138b0a0,
+child=0x7f36dc0ce380, errp=<optimized out>)
+at block/quorum.c:1063
+#7 0x000055560ff5836b in qmp_x_blockdev_change (parent=0x555612373120
+"colo-disk0", has_child=<optimized out>,
+child=0x5556112df3e0 "children.1", has_node=<optimized out>, node=0x0,
+errp=0x7ffc56c66f98) at blockdev.c:4494
+#8 0x00005556100f8f57 in qmp_marshal_x_blockdev_change (args=<optimized
+out>, ret=<optimized out>, errp=0x7ffc56c67018)
+at qapi/qapi-commands-block-core.c:1538
+#9 0x00005556101d8290 in do_qmp_dispatch (errp=0x7ffc56c67010,
+allow_oob=<optimized out>, request=<optimized out>,
+cmds=0x5556109c69a0 <qmp_commands>) at qapi/qmp-dispatch.c:132
+#10 qmp_dispatch (cmds=0x5556109c69a0 <qmp_commands>, request=<optimized
+out>, allow_oob=<optimized out>)
+at qapi/qmp-dispatch.c:175
+#11 0x00005556100d4c4d in monitor_qmp_dispatch (mon=0x5556113a6f40,
+req=<optimized out>) at monitor/qmp.c:145
+#12 0x00005556100d5437 in monitor_qmp_bh_dispatcher (data=<optimized out>)
+at monitor/qmp.c:234
+#13 0x000055561021dbec in aio_bh_call (bh=0x5556112164b0) at util/async.c:117
+#14 aio_bh_poll (ctx=ctx@entry=0x5556112151b0) at util/async.c:117
+#15 0x00005556102212c4 in aio_dispatch (ctx=0x5556112151b0) at
+util/aio-posix.c:459
+#16 0x000055561021dab2 in aio_ctx_dispatch (source=<optimized out>,
+callback=<optimized out>, user_data=<optimized out>)
+at util/async.c:260
+#17 0x00007f3c22302fbd in g_main_context_dispatch () from
+/lib/x86_64-linux-gnu/libglib-2.0.so.0
+#18 0x0000555610220358 in glib_pollfds_poll () at util/main-loop.c:219
+#19 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
+#20 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
+#21 0x000055560ff600fe in main_loop () at vl.c:1814
+#22 0x000055560fddbce9 in main (argc=<optimized out>, argv=<optimized out>,
+envp=<optimized out>) at vl.c:4503
+We found that we loop endlessly on the check in
+block/io.c:bdrv_do_drained_begin():
+BDRV_POLL_WHILE(bs, bdrv_drain_poll_top_level(bs, recursive, parent));
+and it turns out that bdrv_drain_poll() always returns true because of:
+- bdrv_parent_drained_poll(bs, ignore_parent, ignore_bds_parents)
+- AND atomic_read(&bs->in_flight)
+
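+Conceptually, the hang boils down to this loop (a sketch of what the macro
+does, not the exact BDRV_POLL_WHILE/AIO_WAIT_WHILE expansion):
+
+    /* main thread, inside bdrv_do_drained_begin() */
+    while (bdrv_drain_poll_top_level(bs, recursive, parent)) {
+        /* blocks until some AIO activity, then the condition is re-checked */
+        aio_poll(bdrv_get_aio_context(bs), true);
+    }
+
+If no in-flight request can ever complete, the condition never turns false
+and the main loop spins here forever.
+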
+I personally think this is a deadlock issue in the QEMU block layer
+(as we know, there are some #FIXME comments in the related code, such as for
+block permission updates).
+Any comments are welcome and appreciated.
+
+---
+thx,likexu
+
+On 2/28/21 9:39 PM, Like Xu wrote:
+Hi Genius,
+I am a user of QEMU v4.2.0 and stuck in an interesting bug, which may
+still exist in the mainline.
+Thanks in advance to heroes who can take a look and share understanding.
+Do you have a test case that reproduces on 5.2? It'd be nice to know if
+it was still a problem in the latest source tree or not.
+--js
+The qemu main thread endlessly hangs in the handle of the qmp statement:
+{'execute': 'human-monitor-command', 'arguments':{ 'command-line':
+'drive_del replication0' } }
+and we have the call trace looks like:
+#0 0x00007f3c22045bf6 in __ppoll (fds=0x555611328410, nfds=1,
+timeout=<optimized out>, timeout@entry=0x7ffc56c66db0,
+sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:44
+#1 0x000055561021f415 in ppoll (__ss=0x0, __timeout=0x7ffc56c66db0,
+__nfds=<optimized out>, __fds=<optimized out>)
+at /usr/include/x86_64-linux-gnu/bits/poll2.h:77
+#2 qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>,
+timeout=<optimized out>) at util/qemu-timer.c:348
+#3 0x0000555610221430 in aio_poll (ctx=ctx@entry=0x5556113010f0,
+blocking=blocking@entry=true) at util/aio-posix.c:669
+#4 0x000055561019268d in bdrv_do_drained_begin (poll=true,
+ignore_bds_parents=false, parent=0x0, recursive=false,
+bs=0x55561138b0a0) at block/io.c:430
+#5 bdrv_do_drained_begin (bs=0x55561138b0a0, recursive=<optimized out>,
+parent=0x0, ignore_bds_parents=<optimized out>,
+poll=<optimized out>) at block/io.c:396
+#6 0x000055561017b60b in quorum_del_child (bs=0x55561138b0a0,
+child=0x7f36dc0ce380, errp=<optimized out>)
+at block/quorum.c:1063
+#7 0x000055560ff5836b in qmp_x_blockdev_change (parent=0x555612373120
+"colo-disk0", has_child=<optimized out>,
+child=0x5556112df3e0 "children.1", has_node=<optimized out>, node=0x0,
+errp=0x7ffc56c66f98) at blockdev.c:4494
+#8 0x00005556100f8f57 in qmp_marshal_x_blockdev_change (args=<optimized
+out>, ret=<optimized out>, errp=0x7ffc56c67018)
+at qapi/qapi-commands-block-core.c:1538
+#9 0x00005556101d8290 in do_qmp_dispatch (errp=0x7ffc56c67010,
+allow_oob=<optimized out>, request=<optimized out>,
+cmds=0x5556109c69a0 <qmp_commands>) at qapi/qmp-dispatch.c:132
+#10 qmp_dispatch (cmds=0x5556109c69a0 <qmp_commands>, request=<optimized
+out>, allow_oob=<optimized out>)
+at qapi/qmp-dispatch.c:175
+#11 0x00005556100d4c4d in monitor_qmp_dispatch (mon=0x5556113a6f40,
+req=<optimized out>) at monitor/qmp.c:145
+#12 0x00005556100d5437 in monitor_qmp_bh_dispatcher (data=<optimized
+out>) at monitor/qmp.c:234
+#13 0x000055561021dbec in aio_bh_call (bh=0x5556112164bGrateful0) at
+util/async.c:117
+#14 aio_bh_poll (ctx=ctx@entry=0x5556112151b0) at util/async.c:117
+#15 0x00005556102212c4 in aio_dispatch (ctx=0x5556112151b0) at
+util/aio-posix.c:459
+#16 0x000055561021dab2 in aio_ctx_dispatch (source=<optimized out>,
+callback=<optimized out>, user_data=<optimized out>)
+at util/async.c:260
+#17 0x00007f3c22302fbd in g_main_context_dispatch () from
+/lib/x86_64-linux-gnu/libglib-2.0.so.0
+#18 0x0000555610220358 in glib_pollfds_poll () at util/main-loop.c:219
+#19 os_host_main_loop_wait (timeout=<optimized out>) at
+util/main-loop.c:242
+#20 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
+#21 0x000055560ff600fe in main_loop () at vl.c:1814
+#22 0x000055560fddbce9 in main (argc=<optimized out>, argv=<optimized
+out>, envp=<optimized out>) at vl.c:4503
+We found that we're doing endless check in the line of
+block/io.c:bdrv_do_drained_begin():
+    BDRV_POLL_WHILE(bs, bdrv_drain_poll_top_level(bs, recursive, parent));
+and it turns out that the bdrv_drain_poll() always get true from:
+- bdrv_parent_drained_poll(bs, ignore_parent, ignore_bds_parents)
+- AND atomic_read(&bs->in_flight)
+
+I personally think this is a deadlock issue in the a QEMU block layer
+(as we know, we have some #FIXME comments in related codes, such as
+block permisson update).
+Any comments are welcome and appreciated.
+
+---
+thx,likexu
+
+Hi John,
+
+Thanks for your comment.
+
+On 2021/3/5 7:53, John Snow wrote:
+On 2/28/21 9:39 PM, Like Xu wrote:
+Hi Genius,
+I am a user of QEMU v4.2.0 and stuck in an interesting bug, which may
+still exist in the mainline.
+Thanks in advance to heroes who can take a look and share understanding.
+Do you have a test case that reproduces on 5.2? It'd be nice to know if it
+was still a problem in the latest source tree or not.
+We narrowed down the source of the bug, which basically came from
+the following qmp usage:
+{'execute': 'human-monitor-command', 'arguments':{ 'command-line':
+'drive_del replication0' } }
+One of the test cases is the COLO usage (docs/colo-proxy.txt).
+
+This issue is sporadic; the probability is roughly 1/15 for an I/O-heavy guest.
+
+I believe it's reproducible on 5.2 and the latest tree.
+--js
+The qemu main thread endlessly hangs in the handle of the qmp statement:
+{'execute': 'human-monitor-command', 'arguments':{ 'command-line':
+'drive_del replication0' } }
+and we have the call trace looks like:
+#0 0x00007f3c22045bf6 in __ppoll (fds=0x555611328410, nfds=1,
+timeout=<optimized out>, timeout@entry=0x7ffc56c66db0,
+sigmask=sigmask@entry=0x0) at ../sysdeps/unix/sysv/linux/ppoll.c:44
+#1 0x000055561021f415 in ppoll (__ss=0x0, __timeout=0x7ffc56c66db0,
+__nfds=<optimized out>, __fds=<optimized out>)
+at /usr/include/x86_64-linux-gnu/bits/poll2.h:77
+#2 qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>,
+timeout=<optimized out>) at util/qemu-timer.c:348
+#3 0x0000555610221430 in aio_poll (ctx=ctx@entry=0x5556113010f0,
+blocking=blocking@entry=true) at util/aio-posix.c:669
+#4 0x000055561019268d in bdrv_do_drained_begin (poll=true,
+ignore_bds_parents=false, parent=0x0, recursive=false,
+bs=0x55561138b0a0) at block/io.c:430
+#5 bdrv_do_drained_begin (bs=0x55561138b0a0, recursive=<optimized out>,
+parent=0x0, ignore_bds_parents=<optimized out>,
+poll=<optimized out>) at block/io.c:396
+#6 0x000055561017b60b in quorum_del_child (bs=0x55561138b0a0,
+child=0x7f36dc0ce380, errp=<optimized out>)
+at block/quorum.c:1063
+#7 0x000055560ff5836b in qmp_x_blockdev_change (parent=0x555612373120
+"colo-disk0", has_child=<optimized out>,
+child=0x5556112df3e0 "children.1", has_node=<optimized out>, node=0x0,
+errp=0x7ffc56c66f98) at blockdev.c:4494
+#8 0x00005556100f8f57 in qmp_marshal_x_blockdev_change (args=<optimized
+out>, ret=<optimized out>, errp=0x7ffc56c67018)
+at qapi/qapi-commands-block-core.c:1538
+#9 0x00005556101d8290 in do_qmp_dispatch (errp=0x7ffc56c67010,
+allow_oob=<optimized out>, request=<optimized out>,
+cmds=0x5556109c69a0 <qmp_commands>) at qapi/qmp-dispatch.c:132
+#10 qmp_dispatch (cmds=0x5556109c69a0 <qmp_commands>, request=<optimized
+out>, allow_oob=<optimized out>)
+at qapi/qmp-dispatch.c:175
+#11 0x00005556100d4c4d in monitor_qmp_dispatch (mon=0x5556113a6f40,
+req=<optimized out>) at monitor/qmp.c:145
+#12 0x00005556100d5437 in monitor_qmp_bh_dispatcher (data=<optimized
+out>) at monitor/qmp.c:234
+#13 0x000055561021dbec in aio_bh_call (bh=0x5556112164bGrateful0) at
+util/async.c:117
+#14 aio_bh_poll (ctx=ctx@entry=0x5556112151b0) at util/async.c:117
+#15 0x00005556102212c4 in aio_dispatch (ctx=0x5556112151b0) at
+util/aio-posix.c:459
+#16 0x000055561021dab2 in aio_ctx_dispatch (source=<optimized out>,
+callback=<optimized out>, user_data=<optimized out>)
+at util/async.c:260
+#17 0x00007f3c22302fbd in g_main_context_dispatch () from
+/lib/x86_64-linux-gnu/libglib-2.0.so.0
+#18 0x0000555610220358 in glib_pollfds_poll () at util/main-loop.c:219
+#19 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
+#20 main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
+#21 0x000055560ff600fe in main_loop () at vl.c:1814
+#22 0x000055560fddbce9 in main (argc=<optimized out>, argv=<optimized
+out>, envp=<optimized out>) at vl.c:4503
+We found that we're doing endless check in the line of
+block/io.c:bdrv_do_drained_begin():
+     BDRV_POLL_WHILE(bs, bdrv_drain_poll_top_level(bs, recursive, parent));
+and it turns out that the bdrv_drain_poll() always get true from:
+- bdrv_parent_drained_poll(bs, ignore_parent, ignore_bds_parents)
+- AND atomic_read(&bs->in_flight)
+
+I personally think this is a deadlock issue in the a QEMU block layer
+(as we know, we have some #FIXME comments in related codes, such as block
+permisson update).
+Any comments are welcome and appreciated.
+
+---
+thx,likexu
+
+On 3/4/21 10:08 PM, Like Xu wrote:
+Hi John,
+
+Thanks for your comment.
+
+On 2021/3/5 7:53, John Snow wrote:
+On 2/28/21 9:39 PM, Like Xu wrote:
+Hi Genius,
+I am a user of QEMU v4.2.0 and stuck in an interesting bug, which may
+still exist in the mainline.
+Thanks in advance to heroes who can take a look and share understanding.
+Do you have a test case that reproduces on 5.2? It'd be nice to know
+if it was still a problem in the latest source tree or not.
+We narrowed down the source of the bug, which basically came from
+the following qmp usage:
+{'execute': 'human-monitor-command', 'arguments':{ 'command-line':
+'drive_del replication0' } }
+One of the test cases is the COLO usage (docs/colo-proxy.txt).
+
+This issue is sporadic,the probability may be 1/15 for a io-heavy guest.
+
+I believe it's reproducible on 5.2 and the latest tree.
+Can you please test and confirm that this is the case, and then file a
+bug report on the LP:
+https://launchpad.net/qemu
+and include:
+- The exact commit you used (current origin/master debug build would be
+the most ideal.)
+- Which QEMU binary you are using (qemu-system-x86_64?)
+- The shortest command line you are aware of that reproduces the problem
+- The host OS and kernel version
+- An updated call trace
+- Any relevant commands issued prior to the one that caused the hang; or
+detailed reproduction steps if possible.
+Thanks,
+--js
+
diff --git a/results/classifier/009/other/56309929 b/results/classifier/009/other/56309929
new file mode 100644
index 000000000..54419195f
--- /dev/null
+++ b/results/classifier/009/other/56309929
@@ -0,0 +1,190 @@
+other: 0.690
+device: 0.646
+KVM: 0.636
+performance: 0.608
+vnc: 0.600
+network: 0.589
+permissions: 0.587
+debug: 0.585
+boot: 0.578
+graphic: 0.570
+PID: 0.561
+semantic: 0.521
+socket: 0.516
+files: 0.311
+
+[Qemu-devel] [BUG 2.6] Broken CONFIG_TPM?
+
+A compilation test with clang -Weverything reported this problem:
+
+config-host.h:112:20: warning: '$' in identifier
+[-Wdollar-in-identifier-extension]
+
+The line of code looks like this:
+
+#define CONFIG_TPM $(CONFIG_SOFTMMU)
+
+This is fine for Makefile code, but won't work as expected in C code.
+
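+To spell out why: in C this line merely defines CONFIG_TPM to the token
+sequence "$(CONFIG_SOFTMMU)", so any "#ifdef CONFIG_TPM" is always true and
+the TPM code can never be compiled out. An illustrative sketch of current
+versus intended output:
+
+    /* what the generated config-host.h currently contains */
+    #define CONFIG_TPM $(CONFIG_SOFTMMU)   /* defined, but to junk tokens */
+
+    /* what a C consumer actually needs, emitted only when TPM is enabled */
+    #define CONFIG_TPM 1
+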
+Am 28.04.2016 um 22:33 schrieb Stefan Weil:
+>
+A compilation test with clang -Weverything reported this problem:
+>
+>
+config-host.h:112:20: warning: '$' in identifier
+>
+[-Wdollar-in-identifier-extension]
+>
+>
+The line of code looks like this:
+>
+>
+#define CONFIG_TPM $(CONFIG_SOFTMMU)
+>
+>
+This is fine for Makefile code, but won't work as expected in C code.
+>
+A complete 64 bit build with clang -Weverything creates a log file of
+1.7 GB.
+Here are the unique warnings, sorted by frequency:
+
+      1 -Wflexible-array-extensions
+      1 -Wgnu-folding-constant
+      1 -Wunknown-pragmas
+      1 -Wunknown-warning-option
+      1 -Wunreachable-code-loop-increment
+      2 -Warray-bounds-pointer-arithmetic
+      2 -Wdollar-in-identifier-extension
+      3 -Woverlength-strings
+      3 -Wweak-vtables
+      4 -Wgnu-empty-struct
+      4 -Wstring-conversion
+      6 -Wclass-varargs
+      7 -Wc99-extensions
+      7 -Wc++-compat
+      8 -Wfloat-equal
+     11 -Wformat-nonliteral
+     16 -Wshift-negative-value
+     19 -Wglobal-constructors
+     28 -Wc++11-long-long
+     29 -Wembedded-directive
+     38 -Wvla
+     40 -Wcovered-switch-default
+     40 -Wmissing-variable-declarations
+     49 -Wold-style-cast
+     53 -Wgnu-conditional-omitted-operand
+     56 -Wformat-pedantic
+     61 -Wvariadic-macros
+     77 -Wc++11-extensions
+     83 -Wgnu-flexible-array-initializer
+     83 -Wzero-length-array
+     96 -Wgnu-designator
+    102 -Wmissing-noreturn
+    103 -Wconditional-uninitialized
+    107 -Wdisabled-macro-expansion
+    115 -Wunreachable-code-return
+    134 -Wunreachable-code
+    243 -Wunreachable-code-break
+    257 -Wfloat-conversion
+    280 -Wswitch-enum
+    291 -Wpointer-arith
+    298 -Wshadow
+    378 -Wassign-enum
+    395 -Wused-but-marked-unused
+    420 -Wreserved-id-macro
+    493 -Wdocumentation
+    510 -Wshift-sign-overflow
+    565 -Wgnu-case-range
+    566 -Wgnu-zero-variadic-macro-arguments
+    650 -Wbad-function-cast
+    705 -Wmissing-field-initializers
+    817 -Wgnu-statement-expression
+    968 -Wdocumentation-unknown-command
+   1021 -Wextra-semi
+   1112 -Wgnu-empty-initializer
+   1138 -Wcast-qual
+   1509 -Wcast-align
+   1766 -Wextended-offsetof
+   1937 -Wsign-compare
+   2130 -Wpacked
+   2404 -Wunused-macros
+   3081 -Wpadded
+   4182 -Wconversion
+   5430 -Wlanguage-extension-token
+   6655 -Wshorten-64-to-32
+   6995 -Wpedantic
+   7354 -Wunused-parameter
+  27659 -Wsign-conversion
+
+Stefan Weil <address@hidden> writes:
+
+>
+A compilation test with clang -Weverything reported this problem:
+>
+>
+config-host.h:112:20: warning: '$' in identifier
+>
+[-Wdollar-in-identifier-extension]
+>
+>
+The line of code looks like this:
+>
+>
+#define CONFIG_TPM $(CONFIG_SOFTMMU)
+>
+>
+This is fine for Makefile code, but won't work as expected in C code.
+Broken in commit 3b8acc1 "configure: fix TPM logic".  Cc'ing Paolo.
+
+Impact: #ifdef CONFIG_TPM never disables code.  There are no other uses
+of CONFIG_TPM in C code.
+
+I had a quick peek at configure and create_config, but refrained from
+attempting to fix this, since I don't understand when exactly CONFIG_TPM
+should be defined.
+
+On 29 April 2016 at 08:42, Markus Armbruster <address@hidden> wrote:
+>
+Stefan Weil <address@hidden> writes:
+>
+>
+> A compilation test with clang -Weverything reported this problem:
+>
+>
+>
+> config-host.h:112:20: warning: '$' in identifier
+>
+> [-Wdollar-in-identifier-extension]
+>
+>
+>
+> The line of code looks like this:
+>
+>
+>
+> #define CONFIG_TPM $(CONFIG_SOFTMMU)
+>
+>
+>
+> This is fine for Makefile code, but won't work as expected in C code.
+>
+>
+Broken in commit 3b8acc1 "configure: fix TPM logic".  Cc'ing Paolo.
+>
+>
+Impact: #ifdef CONFIG_TPM never disables code.  There are no other uses
+>
+of CONFIG_TPM in C code.
+>
+>
+I had a quick peek at configure and create_config, but refrained from
+>
+attempting to fix this, since I don't understand when exactly CONFIG_TPM
+>
+should be defined.
+Looking at 'git blame' suggests this has been wrong like this for
+some years, so we don't need to scramble to fix it for 2.6.
+
+thanks
+-- PMM
+
diff --git a/results/classifier/009/other/56937788 b/results/classifier/009/other/56937788
new file mode 100644
index 000000000..026b838bf
--- /dev/null
+++ b/results/classifier/009/other/56937788
@@ -0,0 +1,354 @@
+other: 0.791
+KVM: 0.755
+vnc: 0.743
+debug: 0.723
+graphic: 0.720
+semantic: 0.705
+device: 0.697
+performance: 0.692
+permissions: 0.685
+files: 0.680
+boot: 0.636
+network: 0.633
+PID: 0.620
+socket: 0.613
+
+[Qemu-devel] [Bug] virtio-blk: qemu will crash if hotplug virtio-blk device failed
+
+I found that a failed virtio-blk device hotplug can lead to a qemu crash.
+
+Reproduction steps:
+
+1.       Run a VM named vm001
+
+2.       Create a virtio-blk.xml which contains an invalid configuration:
+<disk device="lun" rawio="yes" type="block">
+  <driver cache="none" io="native" name="qemu" type="raw" />
+  <source dev="/dev/mapper/11-dm" />
+  <target bus="virtio" dev="vdx" />
+</disk>
+
+3.       Run the command: virsh attach-device vm001 vm001
+
+Libvirt returns the following error message:
+
+error: Failed to attach device from blk-scsi.xml
+
+error: internal error: unable to execute QEMU command 'device_add': Please set 
+scsi=off for virtio-blk devices in order to use virtio 1.0
+
+This means the virtio-blk device hotplug failed.
+
+4.       Suspending or shutting down the VM then leads to a qemu crash
+
+
+
+from gdb:
+
+
+(gdb) bt
+#0  object_get_class (address@hidden) at qom/object.c:750
+#1  0x00007f9a72582e01 in virtio_vmstate_change (opaque=0x7f9a73d10960, 
+running=0, state=<optimized out>) at 
+/mnt/sdb/lzc/code/open/qemu/hw/virtio/virtio.c:2203
+#2  0x00007f9a7261ef52 in vm_state_notify (address@hidden, address@hidden) at 
+vl.c:1685
+#3  0x00007f9a7252603a in do_vm_stop (state=RUN_STATE_PAUSED) at 
+/mnt/sdb/lzc/code/open/qemu/cpus.c:941
+#4  vm_stop (address@hidden) at /mnt/sdb/lzc/code/open/qemu/cpus.c:1807
+#5  0x00007f9a7262eb1b in qmp_stop (address@hidden) at qmp.c:102
+#6  0x00007f9a7262c70a in qmp_marshal_stop (args=<optimized out>, 
+ret=<optimized out>, errp=0x7ffe63e255d8) at qmp-marshal.c:5854
+#7  0x00007f9a72897e79 in do_qmp_dispatch (errp=0x7ffe63e255d0, 
+request=0x7f9a76510120, cmds=0x7f9a72ee7980 <qmp_commands>) at 
+qapi/qmp-dispatch.c:104
+#8  qmp_dispatch (cmds=0x7f9a72ee7980 <qmp_commands>, address@hidden) at 
+qapi/qmp-dispatch.c:131
+#9  0x00007f9a725288d5 in handle_qmp_command (parser=<optimized out>, 
+tokens=<optimized out>) at /mnt/sdb/lzc/code/open/qemu/monitor.c:3852
+#10 0x00007f9a7289d514 in json_message_process_token (lexer=0x7f9a73ce4498, 
+input=0x7f9a73cc6880, type=JSON_RCURLY, x=36, y=17) at 
+qobject/json-streamer.c:105
+#11 0x00007f9a728bb69b in json_lexer_feed_char (address@hidden, ch=125 '}', 
+address@hidden) at qobject/json-lexer.c:323
+#12 0x00007f9a728bb75e in json_lexer_feed (lexer=0x7f9a73ce4498, 
+buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:373
+#13 0x00007f9a7289d5d9 in json_message_parser_feed (parser=<optimized out>, 
+buffer=<optimized out>, size=<optimized out>) at qobject/json-streamer.c:124
+#14 0x00007f9a7252722e in monitor_qmp_read (opaque=<optimized out>, 
+buf=<optimized out>, size=<optimized out>) at 
+/mnt/sdb/lzc/code/open/qemu/monitor.c:3894
+#15 0x00007f9a7284ee1b in tcp_chr_read (chan=<optimized out>, cond=<optimized 
+out>, opaque=<optimized out>) at chardev/char-socket.c:441
+#16 0x00007f9a6e03e99a in g_main_context_dispatch () from 
+/usr/lib64/libglib-2.0.so.0
+#17 0x00007f9a728a342c in glib_pollfds_poll () at util/main-loop.c:214
+#18 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:261
+#19 main_loop_wait (address@hidden) at util/main-loop.c:515
+#20 0x00007f9a724e7547 in main_loop () at vl.c:1999
+#21 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at 
+vl.c:4877
+
+The problem happens in virtio_vmstate_change(), which is called by vm_state_notify():
+static void virtio_vmstate_change(void *opaque, int running, RunState state)
+{
+    VirtIODevice *vdev = opaque;
+    BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
+    VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+    bool backend_run = running && (vdev->status & VIRTIO_CONFIG_S_DRIVER_OK);
+    vdev->vm_running = running;
+
+    if (backend_run) {
+        virtio_set_status(vdev, vdev->status);
+    }
+
+    if (k->vmstate_change) {
+        k->vmstate_change(qbus->parent, backend_run);
+    }
+
+    if (!backend_run) {
+        virtio_set_status(vdev, vdev->status);
+    }
+}
+
+The vdev's parent_bus is NULL, so qdev_get_parent_bus(DEVICE(vdev)) crashes.
+virtio_vmstate_change is added to the vm_change_state_head list in
+virtio_blk_device_realize() (via virtio_init()), but after the virtio-blk
+hotplug fails, virtio_vmstate_change is never removed from
+vm_change_state_head.
+
+
+I applied a patch as follows:
+
+diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
+index 5884ce3..ea532dc 100644
+--- a/hw/virtio/virtio.c
++++ b/hw/virtio/virtio.c
+@@ -2491,6 +2491,7 @@ static void virtio_device_realize(DeviceState *dev, Error 
+**errp)
+     virtio_bus_device_plugged(vdev, &err);
+     if (err != NULL) {
+         error_propagate(errp, err);
++        vdc->unrealize(dev, NULL);
+         return;
+     }
+
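+The reason this helps: the vm-change-state handler is registered when the
+device is realized and is only removed again on unrealize, so calling
+vdc->unrealize() on the failure path restores that pairing. A hedged sketch
+of the pairing (names follow hw/virtio/virtio.c; bodies trimmed):
+
+    /* registration, done from virtio_init() during realize */
+    vdev->vmstate = qemu_add_vm_change_state_handler(virtio_vmstate_change,
+                                                     vdev);
+
+    /* removal, reached from the device's unrealize path (virtio_cleanup) */
+    qemu_del_vm_change_state_handler(vdev->vmstate);
+
+Without the unrealize call, the stale handler stays on vm_change_state_head
+and later runs against a half-torn-down device, which is the crash above.
+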
+On Tue, Oct 31, 2017 at 05:19:08AM +0000, linzhecheng wrote:
+>
+I found that hotplug virtio-blk device will lead to qemu crash.
+The author posted a patch in a separate email thread.  Please see
+"[PATCH] fix: unrealize virtio device if we fail to hotplug it".
+
+>
+Re-production steps:
+>
+>
+1.       Run VM named vm001
+>
+>
+2.       Create a virtio-blk.xml which contains wrong configurations:
+>
+<disk device="lun" rawio="yes" type="block">
+>
+<driver cache="none" io="native" name="qemu" type="raw" />
+>
+<source dev="/dev/mapper/11-dm" />
+>
+<target bus="virtio" dev="vdx" />
+>
+</disk>
+>
+>
+3.       Run command : virsh attach-device vm001 vm001
+>
+>
+Libvirt will return err msg:
+>
+>
+error: Failed to attach device from blk-scsi.xml
+>
+>
+error: internal error: unable to execute QEMU command 'device_add': Please
+>
+set scsi=off for virtio-blk devices in order to use virtio 1.0
+>
+>
+it means hotplug virtio-blk device failed.
+>
+>
+4.       Suspend or shutdown VM will leads to qemu crash
+>
+>
+>
+>
+from gdb:
+>
+>
+>
+(gdb) bt
+>
+#0  object_get_class (address@hidden) at qom/object.c:750
+>
+#1  0x00007f9a72582e01 in virtio_vmstate_change (opaque=0x7f9a73d10960,
+>
+running=0, state=<optimized out>) at
+>
+/mnt/sdb/lzc/code/open/qemu/hw/virtio/virtio.c:2203
+>
+#2  0x00007f9a7261ef52 in vm_state_notify (address@hidden, address@hidden) at
+>
+vl.c:1685
+>
+#3  0x00007f9a7252603a in do_vm_stop (state=RUN_STATE_PAUSED) at
+>
+/mnt/sdb/lzc/code/open/qemu/cpus.c:941
+>
+#4  vm_stop (address@hidden) at /mnt/sdb/lzc/code/open/qemu/cpus.c:1807
+>
+#5  0x00007f9a7262eb1b in qmp_stop (address@hidden) at qmp.c:102
+>
+#6  0x00007f9a7262c70a in qmp_marshal_stop (args=<optimized out>,
+>
+ret=<optimized out>, errp=0x7ffe63e255d8) at qmp-marshal.c:5854
+>
+#7  0x00007f9a72897e79 in do_qmp_dispatch (errp=0x7ffe63e255d0,
+>
+request=0x7f9a76510120, cmds=0x7f9a72ee7980 <qmp_commands>) at
+>
+qapi/qmp-dispatch.c:104
+>
+#8  qmp_dispatch (cmds=0x7f9a72ee7980 <qmp_commands>, address@hidden) at
+>
+qapi/qmp-dispatch.c:131
+>
+#9  0x00007f9a725288d5 in handle_qmp_command (parser=<optimized out>,
+>
+tokens=<optimized out>) at /mnt/sdb/lzc/code/open/qemu/monitor.c:3852
+>
+#10 0x00007f9a7289d514 in json_message_process_token (lexer=0x7f9a73ce4498,
+>
+input=0x7f9a73cc6880, type=JSON_RCURLY, x=36, y=17) at
+>
+qobject/json-streamer.c:105
+>
+#11 0x00007f9a728bb69b in json_lexer_feed_char (address@hidden, ch=125 '}',
+>
+address@hidden) at qobject/json-lexer.c:323
+>
+#12 0x00007f9a728bb75e in json_lexer_feed (lexer=0x7f9a73ce4498,
+>
+buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:373
+>
+#13 0x00007f9a7289d5d9 in json_message_parser_feed (parser=<optimized out>,
+>
+buffer=<optimized out>, size=<optimized out>) at qobject/json-streamer.c:124
+>
+#14 0x00007f9a7252722e in monitor_qmp_read (opaque=<optimized out>,
+>
+buf=<optimized out>, size=<optimized out>) at
+>
+/mnt/sdb/lzc/code/open/qemu/monitor.c:3894
+>
+#15 0x00007f9a7284ee1b in tcp_chr_read (chan=<optimized out>, cond=<optimized
+>
+out>, opaque=<optimized out>) at chardev/char-socket.c:441
+>
+#16 0x00007f9a6e03e99a in g_main_context_dispatch () from
+>
+/usr/lib64/libglib-2.0.so.0
+>
+#17 0x00007f9a728a342c in glib_pollfds_poll () at util/main-loop.c:214
+>
+#18 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:261
+>
+#19 main_loop_wait (address@hidden) at util/main-loop.c:515
+>
+#20 0x00007f9a724e7547 in main_loop () at vl.c:1999
+>
+#21 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>)
+>
+at vl.c:4877
+>
+>
+Problem happens in virtio_vmstate_change which is called by vm_state_notify,
+>
+static void virtio_vmstate_change(void *opaque, int running, RunState state)
+>
+{
+>
+VirtIODevice *vdev = opaque;
+>
+BusState *qbus = qdev_get_parent_bus(DEVICE(vdev));
+>
+VirtioBusClass *k = VIRTIO_BUS_GET_CLASS(qbus);
+>
+bool backend_run = running && (vdev->status & VIRTIO_CONFIG_S_DRIVER_OK);
+>
+vdev->vm_running = running;
+>
+>
+if (backend_run) {
+>
+virtio_set_status(vdev, vdev->status);
+>
+}
+>
+>
+if (k->vmstate_change) {
+>
+k->vmstate_change(qbus->parent, backend_run);
+>
+}
+>
+>
+if (!backend_run) {
+>
+virtio_set_status(vdev, vdev->status);
+>
+}
+>
+}
+>
+>
+Vdev's parent_bus is NULL, so qdev_get_parent_bus(DEVICE(vdev)) will crash.
+>
+virtio_vmstate_change is added to the list vm_change_state_head at
+>
+virtio_blk_device_realize(virtio_init),
+>
+but after hotplug virtio-blk failed, virtio_vmstate_change will not be
+>
+removed from vm_change_state_head.
+>
+>
+>
+I applied a patch as follows:
+>
+>
+diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
+>
+index 5884ce3..ea532dc 100644
+>
+--- a/hw/virtio/virtio.c
+>
++++ b/hw/virtio/virtio.c
+>
+@@ -2491,6 +2491,7 @@ static void virtio_device_realize(DeviceState *dev,
+>
+Error **errp)
+>
+virtio_bus_device_plugged(vdev, &err);
+>
+if (err != NULL) {
+>
+error_propagate(errp, err);
+>
++        vdc->unrealize(dev, NULL);
+>
+return;
+>
+}
+signature.asc
+Description:
+PGP signature
+
diff --git a/results/classifier/009/other/57756589 b/results/classifier/009/other/57756589
new file mode 100644
index 000000000..94ca85fb3
--- /dev/null
+++ b/results/classifier/009/other/57756589
@@ -0,0 +1,1431 @@
+other: 0.899
+device: 0.853
+vnc: 0.851
+permissions: 0.842
+performance: 0.839
+semantic: 0.835
+boot: 0.827
+graphic: 0.824
+network: 0.822
+socket: 0.820
+PID: 0.819
+KVM: 0.817
+files: 0.816
+debug: 0.803
+
+[Qemu-devel] Reply: Re: Reply: Re: Reply: Re: [BUG] COLO failover hang
+
+Almost the same as the wiki setup, but with a panic on the Primary Node.
+
+
+
+
+Steps:
+
+1
+
+Primary Node:
+
+x86_64-softmmu/qemu-system-x86_64 -enable-kvm -boot c -m 2048 -smp 2 -qmp stdio 
+-vnc :7 -name primary -cpu qemu64,+kvmclock -device piix3-usb-uhci -usb 
+-usbdevice tablet\
+
+  -drive 
+if=virtio,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,
+
+   
+children.0.file.filename=/mnt/sdd/pure_IMG/linux/redhat/rhel_6.5_64_2U_ide,children.0.driver=qcow2
+ -S \
+
+  -netdev 
+tap,id=hn1,vhost=off,script=/etc/qemu-ifup2,downscript=/etc/qemu-ifdown2 \
+
+  -device e1000,id=e1,netdev=hn1,mac=52:a4:00:12:78:67 \
+
+  -netdev 
+tap,id=hn0,vhost=off,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
+
+  -device e1000,id=e0,netdev=hn0,mac=52:a4:00:12:78:66 \
+
+  -chardev socket,id=mirror0,host=9.61.1.8,port=9003,server,nowait -chardev 
+socket,id=compare1,host=9.61.1.8,port=9004,server,nowait \
+
+  -chardev socket,id=compare0,host=9.61.1.8,port=9001,server,nowait -chardev 
+socket,id=compare0-0,host=9.61.1.8,port=9001 \
+
+  -chardev socket,id=compare_out,host=9.61.1.8,port=9005,server,nowait \
+
+  -chardev socket,id=compare_out0,host=9.61.1.8,port=9005 \
+
+  -object filter-mirror,id=m0,netdev=hn0,queue=tx,outdev=mirror0 \
+
+  -object filter-redirector,netdev=hn0,id=redire0,queue=rx,indev=compare_out 
+-object filter-redirector,netdev=hn0,id=redire1,queue=rx,outdev=compare0 \
+
+  -object 
+colo-compare,id=comp0,primary_in=compare0-0,secondary_in=compare1,outdev=compare_out0
+
+2 Secondary Node:
+
+x86_64-softmmu/qemu-system-x86_64 -boot c -m 2048 -smp 2 -qmp stdio -vnc :7 
+-name secondary -enable-kvm -cpu qemu64,+kvmclock -device piix3-usb-uhci -usb 
+-usbdevice tablet\
+
+  -drive 
+if=none,id=colo-disk0,file.filename=/mnt/sdd/pure_IMG/linux/redhat/rhel_6.5_64_2U_ide,driver=qcow2,node-name=node0
+ \
+
+  -drive 
+if=virtio,id=active-disk0,driver=replication,mode=secondary,file.driver=qcow2,top-id=active-disk0,file.file.filename=/mnt/ramfstest/active_disk.img,file.backing.driver=qcow2,file.backing.file.filename=/mnt/ramfstest/hidden_disk.img,file.backing.backing=colo-disk0
+  \
+
+   -netdev 
+tap,id=hn1,vhost=off,script=/etc/qemu-ifup2,downscript=/etc/qemu-ifdown2 \
+
+  -device e1000,id=e1,netdev=hn1,mac=52:a4:00:12:78:67 \
+
+  -netdev 
+tap,id=hn0,vhost=off,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown \
+
+  -device e1000,netdev=hn0,mac=52:a4:00:12:78:66 -chardev 
+socket,id=red0,host=9.61.1.8,port=9003 \
+
+  -chardev socket,id=red1,host=9.61.1.8,port=9004 \
+
+  -object filter-redirector,id=f1,netdev=hn0,queue=tx,indev=red0 \
+
+  -object filter-redirector,id=f2,netdev=hn0,queue=rx,outdev=red1 \
+
+  -object filter-rewriter,id=rew0,netdev=hn0,queue=all -incoming tcp:0:8888
+
+3  Secondary node:
+
+{'execute':'qmp_capabilities'}
+
+{ 'execute': 'nbd-server-start',
+
+  'arguments': {'addr': {'type': 'inet', 'data': {'host': '9.61.1.7', 'port': 
+'8889'} } }
+
+}
+
+{'execute': 'nbd-server-add', 'arguments': {'device': 'colo-disk0', 'writable': 
+true } }
+
+4:Primary Node:
+
+{'execute':'qmp_capabilities'}
+
+
+{ 'execute': 'human-monitor-command',
+
+  'arguments': {'command-line': 'drive_add -n buddy 
+driver=replication,mode=primary,file.driver=nbd,file.host=9.61.1.7,file.port=8889,file.export=colo-disk0,node-name=node0'}}
+
+{ 'execute':'x-blockdev-change', 'arguments':{'parent': 'colo-disk0', 'node': 
+'node0' } }
+
+{ 'execute': 'migrate-set-capabilities',
+
+      'arguments': {'capabilities': [ {'capability': 'x-colo', 'state': true } 
+] } }
+
+{ 'execute': 'migrate', 'arguments': {'uri': 'tcp:9.61.1.7:8888' } }
+
+
+
+
+Then you can see two running VMs; whenever you make changes to the PVM, the SVM
+will be synced.
+
+
+
+
+5:Primary Node:
+
+echo c > /proc/sysrq-trigger
+
+
+
+
+6:Secondary node:
+
+{ 'execute': 'nbd-server-stop' }
+
+{ "execute": "x-colo-lost-heartbeat" }
+
+
+
+
+Then you can see the Secondary node hang at recvmsg.
+
+
+
+
+
+
+
+
+
+
+
+
+Original message
+
+
+
+From: address@hidden
+To: Wang Guang 10165992 address@hidden
+Cc: address@hidden address@hidden
+Date: 2017-03-21 16:27
+Subject: Re: [Qemu-devel] Reply: Re: Reply: Re: [BUG] COLO failover hang
+
+
+
+
+
+Hi,
+
+On 2017/3/21 16:10, address@hidden wrote:
+> Thank you。
+>
+> I have test aready。
+>
+> When the Primary Node panic,the Secondary Node qemu hang at the same place。
+>
+> Incorrding
+http://wiki.qemu-project.org/Features/COLO
+,kill Primary Node qemu 
+will not produce the problem,but Primary Node panic can。
+>
+> I think due to the feature of channel does not support 
+QIO_CHANNEL_FEATURE_SHUTDOWN.
+>
+>
+
+Yes, you are right: when we do failover for the primary/secondary VM, we shut
+down the related fd in case a thread is stuck reading or writing it.
+
+It seems that you didn't follow the above instructions exactly when doing the
+test. Could you share your test procedure? Especially the commands used in the
+test.
+
+Thanks,
+Hailiang
+
+> when failover,channel_shutdown could not shut down the channel.
+>
+>
+> so the colo_process_incoming_thread will hang at recvmsg.
+>
+>
+> I test a patch:
+>
+>
+> diff --git a/migration/socket.c b/migration/socket.c
+>
+>
+> index 13966f1..d65a0ea 100644
+>
+>
+> --- a/migration/socket.c
+>
+>
+> +++ b/migration/socket.c
+>
+>
+> @@ -147,8 +147,9 @@ static gboolean 
+socket_accept_incoming_migration(QIOChannel *ioc,
+>
+>
+>       }
+>
+>
+>
+>
+>
+>       trace_migration_socket_incoming_accepted()
+>
+>
+>
+>
+>
+>       qio_channel_set_name(QIO_CHANNEL(sioc), "migration-socket-incoming")
+>
+>
+> +    qio_channel_set_feature(QIO_CHANNEL(sioc), QIO_CHANNEL_FEATURE_SHUTDOWN)
+>
+>
+>       migration_channel_process_incoming(migrate_get_current(),
+>
+>
+>                                          QIO_CHANNEL(sioc))
+>
+>
+>       object_unref(OBJECT(sioc))
+>
+>
+>
+>
+> My test will not hang any more.
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+> Original message
+>
+>
+>
+> From: address@hidden
+> To: Wang Guang 10165992 address@hidden
+> Cc: address@hidden address@hidden
+> Date: 2017-03-21 15:58
+> Subject: Re: [Qemu-devel] Reply: Re: [BUG] COLO failover hang
+>
+>
+>
+>
+>
+> Hi,Wang.
+>
+> You can test this branch:
+>
+>
+https://github.com/coloft/qemu/tree/colo-v5.1-developing-COLO-frame-v21-with-shared-disk
+>
+> and please follow wiki ensure your own configuration correctly.
+>
+>
+http://wiki.qemu-project.org/Features/COLO
+>
+>
+> Thanks
+>
+> Zhang Chen
+>
+>
+> On 03/21/2017 03:27 PM, address@hidden wrote:
+> >
+> > hi.
+> >
+> > I test the git qemu master have the same problem.
+> >
+> > (gdb) bt
+> >
+> > #0  qio_channel_socket_readv (ioc=0x7f65911b4e50, iov=0x7f64ef3fd880,
+> > niov=1, fds=0x0, nfds=0x0, errp=0x0) at io/channel-socket.c:461
+> >
+> > #1  0x00007f658e4aa0c2 in qio_channel_read
+> > (address@hidden, address@hidden "",
+> > address@hidden, address@hidden) at io/channel.c:114
+> >
+> > #2  0x00007f658e3ea990 in channel_get_buffer (opaque=<optimized out>,
+> > buf=0x7f65907cb838 "", pos=<optimized out>, size=32768) at
+> > migration/qemu-file-channel.c:78
+> >
+> > #3  0x00007f658e3e97fc in qemu_fill_buffer (f=0x7f65907cb800) at
+> > migration/qemu-file.c:295
+> >
+> > #4  0x00007f658e3ea2e1 in qemu_peek_byte (address@hidden,
+> > address@hidden) at migration/qemu-file.c:555
+> >
+> > #5  0x00007f658e3ea34b in qemu_get_byte (address@hidden) at
+> > migration/qemu-file.c:568
+> >
+> > #6  0x00007f658e3ea552 in qemu_get_be32 (address@hidden) at
+> > migration/qemu-file.c:648
+> >
+> > #7  0x00007f658e3e66e5 in colo_receive_message (f=0x7f65907cb800,
+> > address@hidden) at migration/colo.c:244
+> >
+> > #8  0x00007f658e3e681e in colo_receive_check_message (f=<optimized
+> > out>, address@hidden,
+> > address@hidden)
+> >
+> >     at migration/colo.c:264
+> >
+> > #9  0x00007f658e3e740e in colo_process_incoming_thread
+> > (opaque=0x7f658eb30360 <mis_current.31286>) at migration/colo.c:577
+> >
+> > #10 0x00007f658be09df3 in start_thread () from /lib64/libpthread.so.0
+> >
+> > #11 0x00007f65881983ed in clone () from /lib64/libc.so.6
+> >
+> > (gdb) p ioc->name
+> >
+> > $2 = 0x7f658ff7d5c0 "migration-socket-incoming"
+> >
+> > (gdb) p ioc->features        Do not support QIO_CHANNEL_FEATURE_SHUTDOWN
+> >
+> > $3 = 0
+> >
+> >
+> > (gdb) bt
+> >
+> > #0  socket_accept_incoming_migration (ioc=0x7fdcceeafa90,
+> > condition=G_IO_IN, opaque=0x7fdcceeafa90) at migration/socket.c:137
+> >
+> > #1  0x00007fdcc6966350 in g_main_dispatch (context=<optimized out>) at
+> > gmain.c:3054
+> >
+> > #2  g_main_context_dispatch (context=<optimized out>,
+> > address@hidden) at gmain.c:3630
+> >
+> > #3  0x00007fdccb8a6dcc in glib_pollfds_poll () at util/main-loop.c:213
+> >
+> > #4  os_host_main_loop_wait (timeout=<optimized out>) at
+> > util/main-loop.c:258
+> >
+> > #5  main_loop_wait (address@hidden) at
+> > util/main-loop.c:506
+> >
+> > #6  0x00007fdccb526187 in main_loop () at vl.c:1898
+> >
+> > #7  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized
+> > out>) at vl.c:4709
+> >
+> > (gdb) p ioc->features
+> >
+> > $1 = 6
+> >
+> > (gdb) p ioc->name
+> >
+> > $2 = 0x7fdcce1b1ab0 "migration-socket-listener"
+> >
+> >
+> > May be socket_accept_incoming_migration should
+> > call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)??
+> >
+> >
+> > thank you.
+> >
+> >
+> >
+> >
+> >
+> > Original message
+> > address@hidden
+> > address@hidden
+> > address@hidden@huawei.com>
+> > *Date:* 2017-03-16 14:46
+> > *Subject:* *Re: [Qemu-devel] COLO failover hang*
+> >
+> >
+> >
+> >
+> > On 03/15/2017 05:06 PM, wangguang wrote:
+> > >   am testing QEMU COLO feature described here [QEMU
+> > > Wiki](
+http://wiki.qemu-project.org/Features/COLO
+).
+> > >
+> > > When the Primary Node panic,the Secondary Node qemu hang.
+> > > hang at recvmsg in qio_channel_socket_readv.
+> > > And  I run  { 'execute': 'nbd-server-stop' } and { "execute":
+> > > "x-colo-lost-heartbeat" } in Secondary VM's
+> > > monitor,the  Secondary Node qemu still hang at recvmsg .
+> > >
+> > > I found that the colo in qemu is not complete yet.
+> > > Do the colo have any plan for development?
+> >
+> > Yes, We are developing. You can see some of patch we pushing.
+> >
+> > > Has anyone ever run it successfully? Any help is appreciated!
+> >
+> > In our internal version can run it successfully,
+> > The failover detail you can ask Zhanghailiang for help.
+> > Next time if you have some question about COLO,
+> > please cc me and zhanghailiang address@hidden
+> >
+> >
+> > Thanks
+> > Zhang Chen
+> >
+> >
+> > >
+> > >
+> > >
+> > > centos7.2+qemu2.7.50
+> > > (gdb) bt
+> > > #0  0x00007f3e00cc86ad in recvmsg () from /lib64/libpthread.so.0
+> > > #1  0x00007f3e0332b738 in qio_channel_socket_readv (ioc=<optimized out>,
+> > > iov=<optimized out>, niov=<optimized out>, fds=0x0, nfds=0x0, errp=0x0) at
+> > > io/channel-socket.c:497
+> > > #2  0x00007f3e03329472 in qio_channel_read (address@hidden,
+> > > address@hidden "", address@hidden,
+> > > address@hidden) at io/channel.c:97
+> > > #3  0x00007f3e032750e0 in channel_get_buffer (opaque=<optimized out>,
+> > > buf=0x7f3e05910f38 "", pos=<optimized out>, size=32768) at
+> > > migration/qemu-file-channel.c:78
+> > > #4  0x00007f3e0327412c in qemu_fill_buffer (f=0x7f3e05910f00) at
+> > > migration/qemu-file.c:257
+> > > #5  0x00007f3e03274a41 in qemu_peek_byte (address@hidden,
+> > > address@hidden) at migration/qemu-file.c:510
+> > > #6  0x00007f3e03274aab in qemu_get_byte (address@hidden) at
+> > > migration/qemu-file.c:523
+> > > #7  0x00007f3e03274cb2 in qemu_get_be32 (address@hidden) at
+> > > migration/qemu-file.c:603
+> > > #8  0x00007f3e03271735 in colo_receive_message (f=0x7f3e05910f00,
+>>> > > address@hidden) at migration/colo.c:215
+> > > #9  0x00007f3e0327250d in colo_wait_handle_message (errp=0x7f3d62bfaa48,
+> > > checkpoint_request=<synthetic pointer>, f=<optimized out>) at
+> > > migration/colo.c:546
+> > > #10 colo_process_incoming_thread (opaque=0x7f3e067245e0) at
+> > > migration/colo.c:649
+> > > #11 0x00007f3e00cc1df3 in start_thread () from /lib64/libpthread.so.0
+> > > #12 0x00007f3dfc9c03ed in clone () from /lib64/libc.so.6
+> > >
+> > >
+> > >
+> > >
+> > >
+> > > --
+> > > View this message in context:
+http://qemu.11.n7.nabble.com/COLO-failover-hang-tp473250.html
+> > > Sent from the Developer mailing list archive at Nabble.com.
+> > >
+> > >
+> > >
+> > >
+> >
+> > --
+> > Thanks
+> > Zhang Chen
+> >
+> >
+> >
+> >
+> >
+>
+
+diff --git a/migration/socket.c b/migration/socket.c
+
+
+index 13966f1..d65a0ea 100644
+
+
+--- a/migration/socket.c
+
+
++++ b/migration/socket.c
+
+
+@@ -147,8 +147,9 @@ static gboolean socket_accept_incoming_migration(QIOChannel 
+*ioc,
+
+
+     }
+
+
+ 
+
+
+     trace_migration_socket_incoming_accepted()
+
+
+    
+
+
+     qio_channel_set_name(QIO_CHANNEL(sioc), "migration-socket-incoming")
+
+
++    qio_channel_set_feature(QIO_CHANNEL(sioc), QIO_CHANNEL_FEATURE_SHUTDOWN)
+
+
+     migration_channel_process_incoming(migrate_get_current(),
+
+
+                                        QIO_CHANNEL(sioc))
+
+
+     object_unref(OBJECT(sioc))
+
+
+
+
+Is this patch OK?
+
+I have tested it; the test does not hang any more.
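+
+For context, a paraphrased sketch of the shutdown path in
+migration/qemu-file-channel.c of that era (not the exact code): unless the
+channel advertises QIO_CHANNEL_FEATURE_SHUTDOWN, the whole request is silently
+skipped, which is why a thread blocked in recvmsg() on the incoming COLO
+channel is never woken during failover.
+
+static int channel_shutdown(void *opaque, bool rd, bool wr)
+{
+    QIOChannel *ioc = QIO_CHANNEL(opaque);
+
+    if (qio_channel_has_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)) {
+        QIOChannelShutdown mode;
+
+        if (rd && wr) {
+            mode = QIO_CHANNEL_SHUTDOWN_BOTH;
+        } else if (rd) {
+            mode = QIO_CHANNEL_SHUTDOWN_READ;
+        } else {
+            mode = QIO_CHANNEL_SHUTDOWN_WRITE;
+        }
+        if (qio_channel_shutdown(ioc, mode, NULL) < 0) {
+            return -EIO;
+        }
+    }
+    /* without the feature bit this is a no-op and the hang persists */
+    return 0;
+}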
+
+
+
+
+
+
+
+
+
+
+
+
+Original message
+
+
+
+From: address@hidden
+To: address@hidden address@hidden
+Cc: address@hidden address@hidden address@hidden
+Date: 2017-03-22 09:11
+Subject: Re: [Qemu-devel] Reply: Re: Reply: Re: [BUG] COLO failover hang
+
+
+
+
+
+On 2017/3/21 19:56, Dr. David Alan Gilbert wrote:
+> * Hailiang Zhang (address@hidden) wrote:
+>> Hi,
+>>
+>> Thanks for reporting this, and i confirmed it in my test, and it is a bug.
+>>
+>> Though we tried to call qemu_file_shutdown() to shutdown the related fd, in
+>> case COLO thread/incoming thread is stuck in read/write() while do failover,
+>> but it didn't take effect, because all the fd used by COLO (also migration)
+>> has been wrapped by qio channel, and it will not call the shutdown API if
+>> we didn't qio_channel_set_feature(QIO_CHANNEL(sioc), 
+QIO_CHANNEL_FEATURE_SHUTDOWN).
+>>
+>> Cc: Dr. David Alan Gilbert address@hidden
+>>
+>> I doubted migration cancel has the same problem, it may be stuck in write()
+>> if we tried to cancel migration.
+>>
+>> void fd_start_outgoing_migration(MigrationState *s, const char *fdname, 
+Error **errp)
+>> {
+>>      qio_channel_set_name(QIO_CHANNEL(ioc), "migration-fd-outgoing")
+>>      migration_channel_connect(s, ioc, NULL)
+>>      ... ...
+>> We didn't call qio_channel_set_feature(QIO_CHANNEL(sioc), 
+QIO_CHANNEL_FEATURE_SHUTDOWN) above,
+>> and the
+>> migrate_fd_cancel()
+>> {
+>>   ... ...
+>>      if (s->state == MIGRATION_STATUS_CANCELLING && f) {
+>>          qemu_file_shutdown(f)  --> This will not take effect. No ?
+>>      }
+>> }
+>
+> (cc'd in Daniel Berrange).
+> I see that we call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN) 
+at the
+> top of qio_channel_socket_new  so I think that's safe isn't it?
+>
+
+Hmm, you are right, this problem only exists for the migration incoming fd,
+thanks.
+
+> Dave
+>
+>> Thanks,
+>> Hailiang
+>>
+>> On 2017/3/21 16:10, address@hidden wrote:
+>>> Thank you。
+>>>
+>>> I have test aready。
+>>>
+>>> When the Primary Node panic,the Secondary Node qemu hang at the same place。
+>>>
+>>> Incorrding
+http://wiki.qemu-project.org/Features/COLO
+,kill Primary Node 
+qemu will not produce the problem,but Primary Node panic can。
+>>>
+>>> I think due to the feature of channel does not support 
+QIO_CHANNEL_FEATURE_SHUTDOWN.
+>>>
+>>>
+>>> when failover,channel_shutdown could not shut down the channel.
+>>>
+>>>
+>>> so the colo_process_incoming_thread will hang at recvmsg.
+>>>
+>>>
+>>> I test a patch:
+>>>
+>>>
+>>> diff --git a/migration/socket.c b/migration/socket.c
+>>>
+>>>
+>>> index 13966f1..d65a0ea 100644
+>>>
+>>>
+>>> --- a/migration/socket.c
+>>>
+>>>
+>>> +++ b/migration/socket.c
+>>>
+>>>
+>>> @@ -147,8 +147,9 @@ static gboolean 
+socket_accept_incoming_migration(QIOChannel *ioc,
+>>>
+>>>
+>>>        }
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>        trace_migration_socket_incoming_accepted()
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>        qio_channel_set_name(QIO_CHANNEL(sioc), "migration-socket-incoming")
+>>>
+>>>
+>>> +    qio_channel_set_feature(QIO_CHANNEL(sioc), 
+QIO_CHANNEL_FEATURE_SHUTDOWN)
+>>>
+>>>
+>>>        migration_channel_process_incoming(migrate_get_current(),
+>>>
+>>>
+>>>                                           QIO_CHANNEL(sioc))
+>>>
+>>>
+>>>        object_unref(OBJECT(sioc))
+>>>
+>>>
+>>>
+>>>
+>>> My test will not hang any more.
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>> Original message
+>>>
+>>>
+>>>
+>>> From: address@hidden
+>>> To: Wang Guang 10165992 address@hidden
+>>> Cc: address@hidden address@hidden
+>>> Date: 2017-03-21 15:58
+>>> Subject: Re: [Qemu-devel] Reply: Re: [BUG] COLO failover hang
+>>>
+>>>
+>>>
+>>>
+>>>
+>>> Hi,Wang.
+>>>
+>>> You can test this branch:
+>>>
+>>>
+https://github.com/coloft/qemu/tree/colo-v5.1-developing-COLO-frame-v21-with-shared-disk
+>>>
+>>> and please follow wiki ensure your own configuration correctly.
+>>>
+>>>
+http://wiki.qemu-project.org/Features/COLO
+>>>
+>>>
+>>> Thanks
+>>>
+>>> Zhang Chen
+>>>
+>>>
+>>> On 03/21/2017 03:27 PM, address@hidden wrote:
+>>> >
+>>> > hi.
+>>> >
+>>> > I test the git qemu master have the same problem.
+>>> >
+>>> > (gdb) bt
+>>> >
+>>> > #0  qio_channel_socket_readv (ioc=0x7f65911b4e50, iov=0x7f64ef3fd880,
+>>> > niov=1, fds=0x0, nfds=0x0, errp=0x0) at io/channel-socket.c:461
+>>> >
+>>> > #1  0x00007f658e4aa0c2 in qio_channel_read
+>>> > (address@hidden, address@hidden "",
+>>> > address@hidden, address@hidden) at io/channel.c:114
+>>> >
+>>> > #2  0x00007f658e3ea990 in channel_get_buffer (opaque=<optimized out>,
+>>> > buf=0x7f65907cb838 "", pos=<optimized out>, size=32768) at
+>>> > migration/qemu-file-channel.c:78
+>>> >
+>>> > #3  0x00007f658e3e97fc in qemu_fill_buffer (f=0x7f65907cb800) at
+>>> > migration/qemu-file.c:295
+>>> >
+>>> > #4  0x00007f658e3ea2e1 in qemu_peek_byte (address@hidden,
+>>> > address@hidden) at migration/qemu-file.c:555
+>>> >
+>>> > #5  0x00007f658e3ea34b in qemu_get_byte (address@hidden) at
+>>> > migration/qemu-file.c:568
+>>> >
+>>> > #6  0x00007f658e3ea552 in qemu_get_be32 (address@hidden) at
+>>> > migration/qemu-file.c:648
+>>> >
+>>> > #7  0x00007f658e3e66e5 in colo_receive_message (f=0x7f65907cb800,
+>>> > address@hidden) at migration/colo.c:244
+>>> >
+>>> > #8  0x00007f658e3e681e in colo_receive_check_message (f=<optimized
+>>> > out>, address@hidden,
+>>> > address@hidden)
+>>> >
+>>> >     at migration/colo.c:264
+>>> >
+>>> > #9  0x00007f658e3e740e in colo_process_incoming_thread
+>>> > (opaque=0x7f658eb30360 <mis_current.31286>) at migration/colo.c:577
+>>> >
+>>> > #10 0x00007f658be09df3 in start_thread () from /lib64/libpthread.so.0
+>>> >
+>>> > #11 0x00007f65881983ed in clone () from /lib64/libc.so.6
+>>> >
+>>> > (gdb) p ioc->name
+>>> >
+>>> > $2 = 0x7f658ff7d5c0 "migration-socket-incoming"
+>>> >
+>>> > (gdb) p ioc->features        Do not support QIO_CHANNEL_FEATURE_SHUTDOWN
+>>> >
+>>> > $3 = 0
+>>> >
+>>> >
+>>> > (gdb) bt
+>>> >
+>>> > #0  socket_accept_incoming_migration (ioc=0x7fdcceeafa90,
+>>> > condition=G_IO_IN, opaque=0x7fdcceeafa90) at migration/socket.c:137
+>>> >
+>>> > #1  0x00007fdcc6966350 in g_main_dispatch (context=<optimized out>) at
+>>> > gmain.c:3054
+>>> >
+>>> > #2  g_main_context_dispatch (context=<optimized out>,
+>>> > address@hidden) at gmain.c:3630
+>>> >
+>>> > #3  0x00007fdccb8a6dcc in glib_pollfds_poll () at util/main-loop.c:213
+>>> >
+>>> > #4  os_host_main_loop_wait (timeout=<optimized out>) at
+>>> > util/main-loop.c:258
+>>> >
+>>> > #5  main_loop_wait (address@hidden) at
+>>> > util/main-loop.c:506
+>>> >
+>>> > #6  0x00007fdccb526187 in main_loop () at vl.c:1898
+>>> >
+>>> > #7  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized
+>>> > out>) at vl.c:4709
+>>> >
+>>> > (gdb) p ioc->features
+>>> >
+>>> > $1 = 6
+>>> >
+>>> > (gdb) p ioc->name
+>>> >
+>>> > $2 = 0x7fdcce1b1ab0 "migration-socket-listener"
+>>> >
+>>> >
+>>> > May be socket_accept_incoming_migration should
+>>> > call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)??
+>>> >
+>>> >
+>>> > thank you.
+>>> >
+>>> >
+>>> >
+>>> >
+>>> >
+>>> > Original message
+>>> > address@hidden
+>>> > address@hidden
+>>> > address@hidden@huawei.com>
+>>> > *Date:* 2017-03-16 14:46
+>>> > *Subject:* *Re: [Qemu-devel] COLO failover hang*
+>>> >
+>>> >
+>>> >
+>>> >
+>>> > On 03/15/2017 05:06 PM, wangguang wrote:
+>>> > >   am testing QEMU COLO feature described here [QEMU
+>>> > > Wiki](
+http://wiki.qemu-project.org/Features/COLO
+).
+>>> > >
+>>> > > When the Primary Node panic,the Secondary Node qemu hang.
+>>> > > hang at recvmsg in qio_channel_socket_readv.
+>>> > > And  I run  { 'execute': 'nbd-server-stop' } and { "execute":
+>>> > > "x-colo-lost-heartbeat" } in Secondary VM's
+>>> > > monitor,the  Secondary Node qemu still hang at recvmsg .
+>>> > >
+>>> > > I found that the colo in qemu is not complete yet.
+>>> > > Do the colo have any plan for development?
+>>> >
+>>> > Yes, We are developing. You can see some of patch we pushing.
+>>> >
+>>> > > Has anyone ever run it successfully? Any help is appreciated!
+>>> >
+>>> > In our internal version can run it successfully,
+>>> > The failover detail you can ask Zhanghailiang for help.
+>>> > Next time if you have some question about COLO,
+>>> > please cc me and zhanghailiang address@hidden
+>>> >
+>>> >
+>>> > Thanks
+>>> > Zhang Chen
+>>> >
+>>> >
+>>> > >
+>>> > >
+>>> > >
+>>> > > centos7.2+qemu2.7.50
+>>> > > (gdb) bt
+>>> > > #0  0x00007f3e00cc86ad in recvmsg () from /lib64/libpthread.so.0
+>>> > > #1  0x00007f3e0332b738 in qio_channel_socket_readv (ioc=<optimized out>,
+>>> > > iov=<optimized out>, niov=<optimized out>, fds=0x0, nfds=0x0, errp=0x0) 
+at
+>>> > > io/channel-socket.c:497
+>>> > > #2  0x00007f3e03329472 in qio_channel_read (address@hidden,
+>>> > > address@hidden "", address@hidden,
+>>> > > address@hidden) at io/channel.c:97
+>>> > > #3  0x00007f3e032750e0 in channel_get_buffer (opaque=<optimized out>,
+>>> > > buf=0x7f3e05910f38 "", pos=<optimized out>, size=32768) at
+>>> > > migration/qemu-file-channel.c:78
+>>> > > #4  0x00007f3e0327412c in qemu_fill_buffer (f=0x7f3e05910f00) at
+>>> > > migration/qemu-file.c:257
+>>> > > #5  0x00007f3e03274a41 in qemu_peek_byte (address@hidden,
+>>> > > address@hidden) at migration/qemu-file.c:510
+>>> > > #6  0x00007f3e03274aab in qemu_get_byte (address@hidden) at
+>>> > > migration/qemu-file.c:523
+>>> > > #7  0x00007f3e03274cb2 in qemu_get_be32 (address@hidden) at
+>>> > > migration/qemu-file.c:603
+>>> > > #8  0x00007f3e03271735 in colo_receive_message (f=0x7f3e05910f00,
+>>> > > address@hidden) at migration/colo.c:215
+>>> > > #9  0x00007f3e0327250d in colo_wait_handle_message (errp=0x7f3d62bfaa48,
+>>> > > checkpoint_request=<synthetic pointer>, f=<optimized out>) at
+>>> > > migration/colo.c:546
+>>> > > #10 colo_process_incoming_thread (opaque=0x7f3e067245e0) at
+>>> > > migration/colo.c:649
+>>> > > #11 0x00007f3e00cc1df3 in start_thread () from /lib64/libpthread.so.0
+>>> > > #12 0x00007f3dfc9c03ed in clone () from /lib64/libc.so.6
+>>> > >
+>>> > >
+>>> > >
+>>> > >
+>>> > >
+>>> > > --
+>>> > > View this message in context:
+http://qemu.11.n7.nabble.com/COLO-failover-hang-tp473250.html
+>>> > > Sent from the Developer mailing list archive at Nabble.com.
+>>> > >
+>>> > >
+>>> > >
+>>> > >
+>>> >
+>>> > --
+>>> > Thanks
+>>> > Zhang Chen
+>>> >
+>>> >
+>>> >
+>>> >
+>>> >
+>>>
+>>
+> --
+> Dr. David Alan Gilbert / address@hidden / Manchester, UK
+>
+> .
+>
+
+Hi,
+
+On 2017/3/22 9:42, address@hidden wrote:
+diff --git a/migration/socket.c b/migration/socket.c
+
+
+index 13966f1..d65a0ea 100644
+
+
+--- a/migration/socket.c
+
+
++++ b/migration/socket.c
+
+
+@@ -147,8 +147,9 @@ static gboolean socket_accept_incoming_migration(QIOChannel 
+*ioc,
+
+
+      }
+
+
+
+
+
+      trace_migration_socket_incoming_accepted()
+
+
+
+
+
+      qio_channel_set_name(QIO_CHANNEL(sioc), "migration-socket-incoming")
+
+
++    qio_channel_set_feature(QIO_CHANNEL(sioc), QIO_CHANNEL_FEATURE_SHUTDOWN)
+
+
+      migration_channel_process_incoming(migrate_get_current(),
+
+
+                                         QIO_CHANNEL(sioc))
+
+
+      object_unref(OBJECT(sioc))
+
+
+
+
+Is this patch ok?
+Yes, I think this works, but a better way may be to call
+qio_channel_set_feature()
+in qio_channel_socket_accept(); we didn't set the SHUTDOWN feature for the
+accepted socket fd.
+Or fix it like this:
+
+diff --git a/io/channel-socket.c b/io/channel-socket.c
+index f546c68..ce6894c 100644
+--- a/io/channel-socket.c
++++ b/io/channel-socket.c
+@@ -330,9 +330,8 @@ qio_channel_socket_accept(QIOChannelSocket *ioc,
+                           Error **errp)
+ {
+     QIOChannelSocket *cioc;
+-
+-    cioc = QIO_CHANNEL_SOCKET(object_new(TYPE_QIO_CHANNEL_SOCKET));
+-    cioc->fd = -1;
++
++    cioc = qio_channel_socket_new();
+     cioc->remoteAddrLen = sizeof(ioc->remoteAddr);
+     cioc->localAddrLen = sizeof(ioc->localAddr);
+
+
+Thanks,
+Hailiang
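+
+A paraphrased sketch of qio_channel_socket_new() (not the exact code) shows why
+constructing the accepted client channel through it is equivalent to setting
+the feature explicitly in socket_accept_incoming_migration(): the SHUTDOWN
+feature bit is set in the constructor itself.
+
+QIOChannelSocket *qio_channel_socket_new(void)
+{
+    QIOChannelSocket *sioc;
+
+    sioc = QIO_CHANNEL_SOCKET(object_new(TYPE_QIO_CHANNEL_SOCKET));
+    sioc->fd = -1;
+
+    /* this is the bit the accept path was missing */
+    qio_channel_set_feature(QIO_CHANNEL(sioc), QIO_CHANNEL_FEATURE_SHUTDOWN);
+
+    return sioc;
+}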
+I have test it . The test could not hang any more.
+
+
+
+
+
+
+
+
+
+
+
+
+Original message
+
+
+
+From: address@hidden
+To: address@hidden address@hidden
+Cc: address@hidden address@hidden address@hidden
+Date: 2017-03-22 09:11
+Subject: Re: [Qemu-devel] Reply: Re: Reply: Re: [BUG] COLO failover hang
+
+
+
+
+
+On 2017/3/21 19:56, Dr. David Alan Gilbert wrote:
+> * Hailiang Zhang (address@hidden) wrote:
+>> Hi,
+>>
+>> Thanks for reporting this, and i confirmed it in my test, and it is a bug.
+>>
+>> Though we tried to call qemu_file_shutdown() to shutdown the related fd, in
+>> case COLO thread/incoming thread is stuck in read/write() while do failover,
+>> but it didn't take effect, because all the fd used by COLO (also migration)
+>> has been wrapped by qio channel, and it will not call the shutdown API if
+>> we didn't qio_channel_set_feature(QIO_CHANNEL(sioc), 
+QIO_CHANNEL_FEATURE_SHUTDOWN).
+>>
+>> Cc: Dr. David Alan Gilbert address@hidden
+>>
+>> I doubted migration cancel has the same problem, it may be stuck in write()
+>> if we tried to cancel migration.
+>>
+>> void fd_start_outgoing_migration(MigrationState *s, const char *fdname, 
+Error **errp)
+>> {
+>>      qio_channel_set_name(QIO_CHANNEL(ioc), "migration-fd-outgoing")
+>>      migration_channel_connect(s, ioc, NULL)
+>>      ... ...
+>> We didn't call qio_channel_set_feature(QIO_CHANNEL(sioc), 
+QIO_CHANNEL_FEATURE_SHUTDOWN) above,
+>> and the
+>> migrate_fd_cancel()
+>> {
+>>   ... ...
+>>      if (s->state == MIGRATION_STATUS_CANCELLING && f) {
+>>          qemu_file_shutdown(f)  --> This will not take effect. No ?
+>>      }
+>> }
+>
+> (cc'd in Daniel Berrange).
+> I see that we call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN) 
+at the
+> top of qio_channel_socket_new  so I think that's safe isn't it?
+>
+
+Hmm, you are right, this problem only exists for the migration incoming fd,
+thanks.
+
+> Dave
+>
+>> Thanks,
+>> Hailiang
+>>
+>> On 2017/3/21 16:10, address@hidden wrote:
+>>> Thank you。
+>>>
+>>> I have test aready。
+>>>
+>>> When the Primary Node panic,the Secondary Node qemu hang at the same place。
+>>>
+>>> Incorrding
+http://wiki.qemu-project.org/Features/COLO
+,kill Primary Node 
+qemu will not produce the problem,but Primary Node panic can。
+>>>
+>>> I think due to the feature of channel does not support 
+QIO_CHANNEL_FEATURE_SHUTDOWN.
+>>>
+>>>
+>>> when failover,channel_shutdown could not shut down the channel.
+>>>
+>>>
+>>> so the colo_process_incoming_thread will hang at recvmsg.
+>>>
+>>>
+>>> I test a patch:
+>>>
+>>>
+>>> diff --git a/migration/socket.c b/migration/socket.c
+>>>
+>>>
+>>> index 13966f1..d65a0ea 100644
+>>>
+>>>
+>>> --- a/migration/socket.c
+>>>
+>>>
+>>> +++ b/migration/socket.c
+>>>
+>>>
+>>> @@ -147,8 +147,9 @@ static gboolean 
+socket_accept_incoming_migration(QIOChannel *ioc,
+>>>
+>>>
+>>>        }
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>        trace_migration_socket_incoming_accepted()
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>        qio_channel_set_name(QIO_CHANNEL(sioc), "migration-socket-incoming")
+>>>
+>>>
+>>> +    qio_channel_set_feature(QIO_CHANNEL(sioc), 
+QIO_CHANNEL_FEATURE_SHUTDOWN)
+>>>
+>>>
+>>>        migration_channel_process_incoming(migrate_get_current(),
+>>>
+>>>
+>>>                                           QIO_CHANNEL(sioc))
+>>>
+>>>
+>>>        object_unref(OBJECT(sioc))
+>>>
+>>>
+>>>
+>>>
+>>> My test will not hang any more.
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>>
+>>> Original message
+>>>
+>>>
+>>>
+>>> From: address@hidden
+>>> To: Wang Guang 10165992 address@hidden
+>>> Cc: address@hidden address@hidden
+>>> Date: 2017-03-21 15:58
+>>> Subject: Re: [Qemu-devel] Reply: Re: [BUG] COLO failover hang
+>>>
+>>>
+>>>
+>>>
+>>>
+>>> Hi,Wang.
+>>>
+>>> You can test this branch:
+>>>
+>>>
+https://github.com/coloft/qemu/tree/colo-v5.1-developing-COLO-frame-v21-with-shared-disk
+>>>
+>>> and please follow wiki ensure your own configuration correctly.
+>>>
+>>>
+http://wiki.qemu-project.org/Features/COLO
+>>>
+>>>
+>>> Thanks
+>>>
+>>> Zhang Chen
+>>>
+>>>
+>>> On 03/21/2017 03:27 PM, address@hidden wrote:
+>>> >
+>>> > hi.
+>>> >
+>>> > I test the git qemu master have the same problem.
+>>> >
+>>> > (gdb) bt
+>>> >
+>>> > #0  qio_channel_socket_readv (ioc=0x7f65911b4e50, iov=0x7f64ef3fd880,
+>>> > niov=1, fds=0x0, nfds=0x0, errp=0x0) at io/channel-socket.c:461
+>>> >
+>>> > #1  0x00007f658e4aa0c2 in qio_channel_read
+>>> > (address@hidden, address@hidden "",
+>>> > address@hidden, address@hidden) at io/channel.c:114
+>>> >
+>>> > #2  0x00007f658e3ea990 in channel_get_buffer (opaque=<optimized out>,
+>>> > buf=0x7f65907cb838 "", pos=<optimized out>, size=32768) at
+>>> > migration/qemu-file-channel.c:78
+>>> >
+>>> > #3  0x00007f658e3e97fc in qemu_fill_buffer (f=0x7f65907cb800) at
+>>> > migration/qemu-file.c:295
+>>> >
+>>> > #4  0x00007f658e3ea2e1 in qemu_peek_byte (address@hidden,
+>>> > address@hidden) at migration/qemu-file.c:555
+>>> >
+>>> > #5  0x00007f658e3ea34b in qemu_get_byte (address@hidden) at
+>>> > migration/qemu-file.c:568
+>>> >
+>>> > #6  0x00007f658e3ea552 in qemu_get_be32 (address@hidden) at
+>>> > migration/qemu-file.c:648
+>>> >
+>>> > #7  0x00007f658e3e66e5 in colo_receive_message (f=0x7f65907cb800,
+>>> > address@hidden) at migration/colo.c:244
+>>> >
+>>> > #8  0x00007f658e3e681e in colo_receive_check_message (f=<optimized
+>>> > out>, address@hidden,
+>>> > address@hidden)
+>>> >
+>>> >     at migration/colo.c:264
+>>> >
+>>> > #9  0x00007f658e3e740e in colo_process_incoming_thread
+>>> > (opaque=0x7f658eb30360 <mis_current.31286>) at migration/colo.c:577
+>>> >
+>>> > #10 0x00007f658be09df3 in start_thread () from /lib64/libpthread.so.0
+>>> >
+>>> > #11 0x00007f65881983ed in clone () from /lib64/libc.so.6
+>>> >
+>>> > (gdb) p ioc->name
+>>> >
+>>> > $2 = 0x7f658ff7d5c0 "migration-socket-incoming"
+>>> >
+>>> > (gdb) p ioc->features        Do not support QIO_CHANNEL_FEATURE_SHUTDOWN
+>>> >
+>>> > $3 = 0
+>>> >
+>>> >
+>>> > (gdb) bt
+>>> >
+>>> > #0  socket_accept_incoming_migration (ioc=0x7fdcceeafa90,
+>>> > condition=G_IO_IN, opaque=0x7fdcceeafa90) at migration/socket.c:137
+>>> >
+>>> > #1  0x00007fdcc6966350 in g_main_dispatch (context=<optimized out>) at
+>>> > gmain.c:3054
+>>> >
+>>> > #2  g_main_context_dispatch (context=<optimized out>,
+>>> > address@hidden) at gmain.c:3630
+>>> >
+>>> > #3  0x00007fdccb8a6dcc in glib_pollfds_poll () at util/main-loop.c:213
+>>> >
+>>> > #4  os_host_main_loop_wait (timeout=<optimized out>) at
+>>> > util/main-loop.c:258
+>>> >
+>>> > #5  main_loop_wait (address@hidden) at
+>>> > util/main-loop.c:506
+>>> >
+>>> > #6  0x00007fdccb526187 in main_loop () at vl.c:1898
+>>> >
+>>> > #7  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized
+>>> > out>) at vl.c:4709
+>>> >
+>>> > (gdb) p ioc->features
+>>> >
+>>> > $1 = 6
+>>> >
+>>> > (gdb) p ioc->name
+>>> >
+>>> > $2 = 0x7fdcce1b1ab0 "migration-socket-listener"
+>>> >
+>>> >
+>>> > May be socket_accept_incoming_migration should
+>>> > call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)??
+>>> >
+>>> >
+>>> > thank you.
+>>> >
+>>> >
+>>> >
+>>> >
+>>> >
+>>> > Original message
+>>> > address@hidden
+>>> > address@hidden
+>>> > address@hidden@huawei.com>
+>>> > *Date:* 2017-03-16 14:46
+>>> > *Subject:* *Re: [Qemu-devel] COLO failover hang*
+>>> >
+>>> >
+>>> >
+>>> >
+>>> > On 03/15/2017 05:06 PM, wangguang wrote:
+>>> > >   am testing QEMU COLO feature described here [QEMU
+>>> > > Wiki](
+http://wiki.qemu-project.org/Features/COLO
+).
+>>> > >
+>>> > > When the Primary Node panic,the Secondary Node qemu hang.
+>>> > > hang at recvmsg in qio_channel_socket_readv.
+>>> > > And  I run  { 'execute': 'nbd-server-stop' } and { "execute":
+>>> > > "x-colo-lost-heartbeat" } in Secondary VM's
+>>> > > monitor,the  Secondary Node qemu still hang at recvmsg .
+>>> > >
+>>> > > I found that the colo in qemu is not complete yet.
+>>> > > Do the colo have any plan for development?
+>>> >
+>>> > Yes, We are developing. You can see some of patch we pushing.
+>>> >
+>>> > > Has anyone ever run it successfully? Any help is appreciated!
+>>> >
+>>> > In our internal version can run it successfully,
+>>> > The failover detail you can ask Zhanghailiang for help.
+>>> > Next time if you have some question about COLO,
+>>> > please cc me and zhanghailiang address@hidden
+>>> >
+>>> >
+>>> > Thanks
+>>> > Zhang Chen
+>>> >
+>>> >
+>>> > >
+>>> > >
+>>> > >
+>>> > > centos7.2+qemu2.7.50
+>>> > > (gdb) bt
+>>> > > #0  0x00007f3e00cc86ad in recvmsg () from /lib64/libpthread.so.0
+>>> > > #1  0x00007f3e0332b738 in qio_channel_socket_readv (ioc=<optimized out>,
+>>> > > iov=<optimized out>, niov=<optimized out>, fds=0x0, nfds=0x0, errp=0x0) 
+at
+>>> > > io/channel-socket.c:497
+>>> > > #2  0x00007f3e03329472 in qio_channel_read (address@hidden,
+>>> > > address@hidden "", address@hidden,
+>>> > > address@hidden) at io/channel.c:97
+>>> > > #3  0x00007f3e032750e0 in channel_get_buffer (opaque=<optimized out>,
+>>> > > buf=0x7f3e05910f38 "", pos=<optimized out>, size=32768) at
+>>> > > migration/qemu-file-channel.c:78
+>>> > > #4  0x00007f3e0327412c in qemu_fill_buffer (f=0x7f3e05910f00) at
+>>> > > migration/qemu-file.c:257
+>>> > > #5  0x00007f3e03274a41 in qemu_peek_byte (address@hidden,
+>>> > > address@hidden) at migration/qemu-file.c:510
+>>> > > #6  0x00007f3e03274aab in qemu_get_byte (address@hidden) at
+>>> > > migration/qemu-file.c:523
+>>> > > #7  0x00007f3e03274cb2 in qemu_get_be32 (address@hidden) at
+>>> > > migration/qemu-file.c:603
+>>> > > #8  0x00007f3e03271735 in colo_receive_message (f=0x7f3e05910f00,
+>>> > > address@hidden) at migration/colo.c:215
+>>> > > #9  0x00007f3e0327250d in colo_wait_handle_message (errp=0x7f3d62bfaa48,
+>>> > > checkpoint_request=<synthetic pointer>, f=<optimized out>) at
+>>> > > migration/colo.c:546
+>>> > > #10 colo_process_incoming_thread (opaque=0x7f3e067245e0) at
+>>> > > migration/colo.c:649
+>>> > > #11 0x00007f3e00cc1df3 in start_thread () from /lib64/libpthread.so.0
+>>> > > #12 0x00007f3dfc9c03ed in clone () from /lib64/libc..so.6
+>>> > >
+>>> > >
+>>> > >
+>>> > >
+>>> > >
+>>> > > --
+>>> > > View this message in context:
+http://qemu.11.n7.nabble.com/COLO-failover-hang-tp473250.html
+>>> > > Sent from the Developer mailing list archive at Nabble.com.
+>>> > >
+>>> > >
+>>> > >
+>>> > >
+>>> >
+>>> > --
+>>> > Thanks
+>>> > Zhang Chen
+>>> >
+>>> >
+>>> >
+>>> >
+>>> >
+>>>
+>>
+> --
+> Dr. David Alan Gilbert / address@hidden / Manchester, UK
+>
+> .
+>
+
diff --git a/results/classifier/009/other/63565653 b/results/classifier/009/other/63565653
new file mode 100644
index 000000000..ade63b0d0
--- /dev/null
+++ b/results/classifier/009/other/63565653
@@ -0,0 +1,59 @@
+other: 0.898
+device: 0.889
+boot: 0.889
+PID: 0.887
+network: 0.861
+debug: 0.855
+performance: 0.834
+KVM: 0.827
+semantic: 0.825
+socket: 0.745
+permissions: 0.739
+graphic: 0.734
+files: 0.705
+vnc: 0.588
+
+[Qemu-devel] [BUG]pcibus_reset assertion failure on guest reboot
+
+Qemu-2.6.2
+
+Start a VM with vhost-net, then reboot and hot-unplug the virtio-net NIC within
+a short time; we hit a pcibus_reset assertion failure.
+
+Here is qemu log:
+22:29:46.359386+08:00  acpi_pm1_cnt_write -> guest does a soft power off
+22:29:46.785310+08:00  qemu_devices_reset
+22:29:46.788093+08:00  virtio_pci_device_unplugged -> virtio-net unplugged
+22:29:46.803427+08:00  pcibus_reset: Assertion `bus->irq_count[i] == 0' failed.
+
+Here is stack info: 
+(gdb) bt
+#0  0x00007f9a336795d7 in raise () from /usr/lib64/libc.so.6
+#1  0x00007f9a3367acc8 in abort () from /usr/lib64/libc.so.6
+#2  0x00007f9a33672546 in __assert_fail_base () from /usr/lib64/libc.so.6
+#3  0x00007f9a336725f2 in __assert_fail () from /usr/lib64/libc.so.6
+#4  0x0000000000641884 in pcibus_reset (qbus=0x29eee60) at hw/pci/pci.c:283
+#5  0x00000000005bfc30 in qbus_reset_one (bus=0x29eee60, opaque=<optimized 
+out>) at hw/core/qdev.c:319
+#6  0x00000000005c1b19 in qdev_walk_children (dev=0x29ed2b0, pre_devfn=0x0, 
+pre_busfn=0x0, post_devfn=0x5c2440 ...
+#7  0x00000000005c1c59 in qbus_walk_children (bus=0x2736f80, pre_devfn=0x0, 
+pre_busfn=0x0, post_devfn=0x5c2440 ...
+#8  0x00000000005513f5 in qemu_devices_reset () at vl.c:1998
+#9  0x00000000004cab9d in pc_machine_reset () at 
+/home/abuild/rpmbuild/BUILD/qemu-kvm-2.6.0/hw/i386/pc.c:1976
+#10 0x000000000055148b in qemu_system_reset (address@hidden) at vl.c:2011
+#11 0x000000000055164f in main_loop_should_exit () at vl.c:2169
+#12 0x0000000000551719 in main_loop () at vl.c:2212
+#13 0x000000000041c9a8 in main (argc=<optimized out>, argv=<optimized out>, 
+envp=<optimized out>) at vl.c:5130
+(gdb) f 4
+...
+(gdb) p bus->irq_count[0]
+$6 = 1
+
+It seems pci_update_irq_disabled() doesn't work as expected here.
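+
+For reference, a paraphrased sketch of the reset handler in hw/pci/pci.c of
+that era (not the exact code): the assertion fires when, after every device on
+the bus has been reset, some device is still accounted as holding an INTx line
+asserted, i.e. irq_count was not brought back to zero before the bus reset.
+
+static void pcibus_reset(BusState *qbus)
+{
+    PCIBus *bus = DO_UPCAST(PCIBus, qbus, qbus);
+    int i;
+
+    for (i = 0; i < ARRAY_SIZE(bus->devices); ++i) {
+        if (bus->devices[i]) {
+            pci_device_reset(bus->devices[i]);   /* resets config space and INTx state */
+        }
+    }
+
+    for (i = 0; i < PCI_NUM_PINS; ++i) {
+        assert(bus->irq_count[i] == 0);          /* fires here: irq_count[0] == 1 */
+    }
+}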
+
+can anyone help?
+
diff --git a/results/classifier/009/other/65781993 b/results/classifier/009/other/65781993
new file mode 100644
index 000000000..4655ac69e
--- /dev/null
+++ b/results/classifier/009/other/65781993
@@ -0,0 +1,2803 @@
+other: 0.727
+PID: 0.673
+debug: 0.673
+semantic: 0.665
+graphic: 0.664
+socket: 0.660
+permissions: 0.658
+network: 0.657
+files: 0.657
+device: 0.647
+performance: 0.636
+boot: 0.635
+KVM: 0.627
+vnc: 0.590
+
+[Qemu-devel] Reply: Re: Reply: Re: [BUG] COLO failover hang
+
+Thank you.
+
+I have tested it already.
+
+When the Primary Node panics, the Secondary Node QEMU hangs at the same place.
+
+According to
+http://wiki.qemu-project.org/Features/COLO
+, killing the Primary Node QEMU
+will not produce the problem, but a Primary Node panic does.
+
+I think this is because the channel does not support
+QIO_CHANNEL_FEATURE_SHUTDOWN.
+
+
+When failing over, channel_shutdown could not shut down the channel,
+
+
+so colo_process_incoming_thread will hang at recvmsg.
+
+
+I tested a patch:
+
+
+diff --git a/migration/socket.c b/migration/socket.c
+
+
+index 13966f1..d65a0ea 100644
+
+
+--- a/migration/socket.c
+
+
++++ b/migration/socket.c
+
+
+@@ -147,8 +147,9 @@ static gboolean socket_accept_incoming_migration(QIOChannel 
+*ioc,
+
+
+     }
+
+
+ 
+
+
+     trace_migration_socket_incoming_accepted()
+
+
+    
+
+
+     qio_channel_set_name(QIO_CHANNEL(sioc), "migration-socket-incoming")
+
+
++    qio_channel_set_feature(QIO_CHANNEL(sioc), QIO_CHANNEL_FEATURE_SHUTDOWN)
+
+
+     migration_channel_process_incoming(migrate_get_current(),
+
+
+                                        QIO_CHANNEL(sioc))
+
+
+     object_unref(OBJECT(sioc))
+
+
+
+
+My test will not hang any more.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Original message
+
+
+
+From: address@hidden
+To: Wang Guang 10165992 address@hidden
+Cc: address@hidden address@hidden
+Date: 2017-03-21 15:58
+Subject: Re: [Qemu-devel] Reply: Re: [BUG] COLO failover hang
+
+
+
+
+
+Hi,Wang.
+
+You can test this branch:
+https://github.com/coloft/qemu/tree/colo-v5.1-developing-COLO-frame-v21-with-shared-disk
+and please follow wiki ensure your own configuration correctly.
+http://wiki.qemu-project.org/Features/COLO
+Thanks
+
+Zhang Chen
+
+
+On 03/21/2017 03:27 PM, address@hidden wrote:
+>
+> hi.
+>
+> I test the git qemu master have the same problem.
+>
+> (gdb) bt
+>
+> #0  qio_channel_socket_readv (ioc=0x7f65911b4e50, iov=0x7f64ef3fd880, 
+> niov=1, fds=0x0, nfds=0x0, errp=0x0) at io/channel-socket.c:461
+>
+> #1  0x00007f658e4aa0c2 in qio_channel_read 
+> (address@hidden, address@hidden "", 
+> address@hidden, address@hidden) at io/channel.c:114
+>
+> #2  0x00007f658e3ea990 in channel_get_buffer (opaque=<optimized out>, 
+> buf=0x7f65907cb838 "", pos=<optimized out>, size=32768) at 
+> migration/qemu-file-channel.c:78
+>
+> #3  0x00007f658e3e97fc in qemu_fill_buffer (f=0x7f65907cb800) at 
+> migration/qemu-file.c:295
+>
+> #4  0x00007f658e3ea2e1 in qemu_peek_byte (address@hidden, 
+> address@hidden) at migration/qemu-file.c:555
+>
+> #5  0x00007f658e3ea34b in qemu_get_byte (address@hidden) at 
+> migration/qemu-file.c:568
+>
+> #6  0x00007f658e3ea552 in qemu_get_be32 (address@hidden) at 
+> migration/qemu-file.c:648
+>
+> #7  0x00007f658e3e66e5 in colo_receive_message (f=0x7f65907cb800, 
+> address@hidden) at migration/colo.c:244
+>
+> #8  0x00007f658e3e681e in colo_receive_check_message (f=<optimized 
+> out>, address@hidden, 
+> address@hidden)
+>
+>     at migration/colo.c:264
+>
+> #9  0x00007f658e3e740e in colo_process_incoming_thread 
+> (opaque=0x7f658eb30360 <mis_current.31286>) at migration/colo.c:577
+>
+> #10 0x00007f658be09df3 in start_thread () from /lib64/libpthread.so.0
+>
+> #11 0x00007f65881983ed in clone () from /lib64/libc.so.6
+>
+> (gdb) p ioc->name
+>
+> $2 = 0x7f658ff7d5c0 "migration-socket-incoming"
+>
+> (gdb) p ioc->features        Do not support QIO_CHANNEL_FEATURE_SHUTDOWN
+>
+> $3 = 0
+>
+>
+> (gdb) bt
+>
+> #0  socket_accept_incoming_migration (ioc=0x7fdcceeafa90, 
+> condition=G_IO_IN, opaque=0x7fdcceeafa90) at migration/socket.c:137
+>
+> #1  0x00007fdcc6966350 in g_main_dispatch (context=<optimized out>) at 
+> gmain.c:3054
+>
+> #2  g_main_context_dispatch (context=<optimized out>, 
+> address@hidden) at gmain.c:3630
+>
+> #3  0x00007fdccb8a6dcc in glib_pollfds_poll () at util/main-loop.c:213
+>
+> #4  os_host_main_loop_wait (timeout=<optimized out>) at 
+> util/main-loop.c:258
+>
+> #5  main_loop_wait (address@hidden) at 
+> util/main-loop.c:506
+>
+> #6  0x00007fdccb526187 in main_loop () at vl.c:1898
+>
+> #7  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized 
+> out>) at vl.c:4709
+>
+> (gdb) p ioc->features
+>
+> $1 = 6
+>
+> (gdb) p ioc->name
+>
+> $2 = 0x7fdcce1b1ab0 "migration-socket-listener"
+>
+>
+> May be socket_accept_incoming_migration should 
+> call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)??
+>
+>
+> thank you.
+>
+>
+>
+>
+>
+> Original message
+> address@hidden
+> address@hidden
+> address@hidden@huawei.com>
+> *Date:* 2017-03-16 14:46
+> *Subject:* *Re: [Qemu-devel] COLO failover hang*
+>
+>
+>
+>
+> On 03/15/2017 05:06 PM, wangguang wrote:
+> >   am testing QEMU COLO feature described here [QEMU
+> > Wiki](
+http://wiki.qemu-project.org/Features/COLO
+).
+> >
+> > When the Primary Node panic,the Secondary Node qemu hang.
+> > hang at recvmsg in qio_channel_socket_readv.
+> > And  I run  { 'execute': 'nbd-server-stop' } and { "execute":
+> > "x-colo-lost-heartbeat" } in Secondary VM's
+> > monitor,the  Secondary Node qemu still hang at recvmsg .
+> >
+> > I found that the colo in qemu is not complete yet.
+> > Do the colo have any plan for development?
+>
+> Yes, We are developing. You can see some of patch we pushing.
+>
+> > Has anyone ever run it successfully? Any help is appreciated!
+>
+> In our internal version can run it successfully,
+> The failover detail you can ask Zhanghailiang for help.
+> Next time if you have some question about COLO,
+> please cc me and zhanghailiang address@hidden
+>
+>
+> Thanks
+> Zhang Chen
+>
+>
+> >
+> >
+> >
+> > centos7.2+qemu2.7.50
+> > (gdb) bt
+> > #0  0x00007f3e00cc86ad in recvmsg () from /lib64/libpthread.so.0
+> > #1  0x00007f3e0332b738 in qio_channel_socket_readv (ioc=<optimized out>,
+> > iov=<optimized out>, niov=<optimized out>, fds=0x0, nfds=0x0, errp=0x0) at
+> > io/channel-socket.c:497
+> > #2  0x00007f3e03329472 in qio_channel_read (address@hidden,
+> > address@hidden "", address@hidden,
+> > address@hidden) at io/channel.c:97
+> > #3  0x00007f3e032750e0 in channel_get_buffer (opaque=<optimized out>,
+> > buf=0x7f3e05910f38 "", pos=<optimized out>, size=32768) at
+> > migration/qemu-file-channel.c:78
+> > #4  0x00007f3e0327412c in qemu_fill_buffer (f=0x7f3e05910f00) at
+> > migration/qemu-file.c:257
+> > #5  0x00007f3e03274a41 in qemu_peek_byte (address@hidden,
+> > address@hidden) at migration/qemu-file.c:510
+> > #6  0x00007f3e03274aab in qemu_get_byte (address@hidden) at
+> > migration/qemu-file.c:523
+> > #7  0x00007f3e03274cb2 in qemu_get_be32 (address@hidden) at
+> > migration/qemu-file.c:603
+> > #8  0x00007f3e03271735 in colo_receive_message (f=0x7f3e05910f00,
+> > address@hidden) at migration/colo.c:215
+> > #9  0x00007f3e0327250d in colo_wait_handle_message (errp=0x7f3d62bfaa48,
+> > checkpoint_request=<synthetic pointer>, f=<optimized out>) at
+> > migration/colo.c:546
+> > #10 colo_process_incoming_thread (opaque=0x7f3e067245e0) at
+> > migration/colo.c:649
+> > #11 0x00007f3e00cc1df3 in start_thread () from /lib64/libpthread.so.0
+> > #12 0x00007f3dfc9c03ed in clone () from /lib64/libc.so.6
+> >
+> >
+> >
+> >
+> >
+> > --
+> > View this message in context:
+http://qemu.11.n7.nabble.com/COLO-failover-hang-tp473250.html
+> > Sent from the Developer mailing list archive at Nabble.com.
+> >
+> >
+> >
+> >
+>
+> -- 
+> Thanks
+> Zhang Chen
+>
+>
+>
+>
+>
+
+-- 
+Thanks
+Zhang Chen
+
+Hi,
+
+On 2017/3/21 16:10, address@hidden wrote:
+Thank you。
+
+I have test aready。
+
+When the Primary Node panic,the Secondary Node qemu hang at the same place。
+
+Incorrding
+http://wiki.qemu-project.org/Features/COLO
+,kill Primary Node qemu 
+will not produce the problem,but Primary Node panic can。
+
+I think due to the feature of channel does not support 
+QIO_CHANNEL_FEATURE_SHUTDOWN.
+Yes, you are right: when we do failover for the primary/secondary VM, we shut
+down the related fd in case a thread is stuck reading or writing it.
+
+It seems that you didn't follow the above instructions exactly when doing the
+test. Could you share your test procedure? Especially the commands used in the
+test.
+
+Thanks,
+Hailiang
+when failover,channel_shutdown could not shut down the channel.
+
+
+so the colo_process_incoming_thread will hang at recvmsg.
+
+
+I test a patch:
+
+
+diff --git a/migration/socket.c b/migration/socket.c
+
+
+index 13966f1..d65a0ea 100644
+
+
+--- a/migration/socket.c
+
+
++++ b/migration/socket.c
+
+
+@@ -147,8 +147,9 @@ static gboolean socket_accept_incoming_migration(QIOChannel 
+*ioc,
+
+
+      }
+
+
+
+
+
+      trace_migration_socket_incoming_accepted()
+
+
+
+
+
+      qio_channel_set_name(QIO_CHANNEL(sioc), "migration-socket-incoming")
+
+
++    qio_channel_set_feature(QIO_CHANNEL(sioc), QIO_CHANNEL_FEATURE_SHUTDOWN)
+
+
+      migration_channel_process_incoming(migrate_get_current(),
+
+
+                                         QIO_CHANNEL(sioc))
+
+
+      object_unref(OBJECT(sioc))
+
+
+
+
+My test will not hang any more.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Original message
+
+
+
+From: address@hidden
+To: Wang Guang 10165992 address@hidden
+Cc: address@hidden address@hidden
+Date: 2017-03-21 15:58
+Subject: Re: [Qemu-devel] Reply: Re: [BUG] COLO failover hang
+
+
+
+
+
+Hi,Wang.
+
+You can test this branch:
+https://github.com/coloft/qemu/tree/colo-v5.1-developing-COLO-frame-v21-with-shared-disk
+and please follow wiki ensure your own configuration correctly.
+http://wiki.qemu-project.org/Features/COLO
+Thanks
+
+Zhang Chen
+
+
+On 03/21/2017 03:27 PM, address@hidden wrote:
+>
+> hi.
+>
+> I test the git qemu master have the same problem.
+>
+> (gdb) bt
+>
+> #0  qio_channel_socket_readv (ioc=0x7f65911b4e50, iov=0x7f64ef3fd880,
+> niov=1, fds=0x0, nfds=0x0, errp=0x0) at io/channel-socket.c:461
+>
+> #1  0x00007f658e4aa0c2 in qio_channel_read
+> (address@hidden, address@hidden "",
+> address@hidden, address@hidden) at io/channel.c:114
+>
+> #2  0x00007f658e3ea990 in channel_get_buffer (opaque=<optimized out>,
+> buf=0x7f65907cb838 "", pos=<optimized out>, size=32768) at
+> migration/qemu-file-channel.c:78
+>
+> #3  0x00007f658e3e97fc in qemu_fill_buffer (f=0x7f65907cb800) at
+> migration/qemu-file.c:295
+>
+> #4  0x00007f658e3ea2e1 in qemu_peek_byte (address@hidden,
+> address@hidden) at migration/qemu-file.c:555
+>
+> #5  0x00007f658e3ea34b in qemu_get_byte (address@hidden) at
+> migration/qemu-file.c:568
+>
+> #6  0x00007f658e3ea552 in qemu_get_be32 (address@hidden) at
+> migration/qemu-file.c:648
+>
+> #7  0x00007f658e3e66e5 in colo_receive_message (f=0x7f65907cb800,
+> address@hidden) at migration/colo.c:244
+>
+> #8  0x00007f658e3e681e in colo_receive_check_message (f=<optimized
+> out>, address@hidden,
+> address@hidden)
+>
+>     at migration/colo.c:264
+>
+> #9  0x00007f658e3e740e in colo_process_incoming_thread
+> (opaque=0x7f658eb30360 <mis_current.31286>) at migration/colo.c:577
+>
+> #10 0x00007f658be09df3 in start_thread () from /lib64/libpthread.so.0
+>
+> #11 0x00007f65881983ed in clone () from /lib64/libc.so.6
+>
+> (gdb) p ioc->name
+>
+> $2 = 0x7f658ff7d5c0 "migration-socket-incoming"
+>
+> (gdb) p ioc->features        Do not support QIO_CHANNEL_FEATURE_SHUTDOWN
+>
+> $3 = 0
+>
+>
+> (gdb) bt
+>
+> #0  socket_accept_incoming_migration (ioc=0x7fdcceeafa90,
+> condition=G_IO_IN, opaque=0x7fdcceeafa90) at migration/socket.c:137
+>
+> #1  0x00007fdcc6966350 in g_main_dispatch (context=<optimized out>) at
+> gmain.c:3054
+>
+> #2  g_main_context_dispatch (context=<optimized out>,
+> address@hidden) at gmain.c:3630
+>
+> #3  0x00007fdccb8a6dcc in glib_pollfds_poll () at util/main-loop.c:213
+>
+> #4  os_host_main_loop_wait (timeout=<optimized out>) at
+> util/main-loop.c:258
+>
+> #5  main_loop_wait (address@hidden) at
+> util/main-loop.c:506
+>
+> #6  0x00007fdccb526187 in main_loop () at vl.c:1898
+>
+> #7  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized
+> out>) at vl.c:4709
+>
+> (gdb) p ioc->features
+>
+> $1 = 6
+>
+> (gdb) p ioc->name
+>
+> $2 = 0x7fdcce1b1ab0 "migration-socket-listener"
+>
+>
+> May be socket_accept_incoming_migration should
+> call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)??
+>
+>
+> thank you.
+>
+>
+>
+>
+>
+> Original message
+> address@hidden
+> address@hidden
+> address@hidden@huawei.com>
+> *Date:* 2017-03-16 14:46
+> *Subject:* *Re: [Qemu-devel] COLO failover hang*
+>
+>
+>
+>
+> On 03/15/2017 05:06 PM, wangguang wrote:
+> >   am testing QEMU COLO feature described here [QEMU
+> > Wiki](
+http://wiki.qemu-project.org/Features/COLO
+).
+> >
+> > When the Primary Node panic,the Secondary Node qemu hang.
+> > hang at recvmsg in qio_channel_socket_readv.
+> > And  I run  { 'execute': 'nbd-server-stop' } and { "execute":
+> > "x-colo-lost-heartbeat" } in Secondary VM's
+> > monitor,the  Secondary Node qemu still hang at recvmsg .
+> >
+> > I found that the colo in qemu is not complete yet.
+> > Do the colo have any plan for development?
+>
+> Yes, We are developing. You can see some of patch we pushing.
+>
+> > Has anyone ever run it successfully? Any help is appreciated!
+>
+> In our internal version can run it successfully,
+> The failover detail you can ask Zhanghailiang for help.
+> Next time if you have some question about COLO,
+> please cc me and zhanghailiang address@hidden
+>
+>
+> Thanks
+> Zhang Chen
+>
+>
+> >
+> >
+> >
+> > centos7.2+qemu2.7.50
+> > (gdb) bt
+> > #0  0x00007f3e00cc86ad in recvmsg () from /lib64/libpthread.so.0
+> > #1  0x00007f3e0332b738 in qio_channel_socket_readv (ioc=<optimized out>,
+> > iov=<optimized out>, niov=<optimized out>, fds=0x0, nfds=0x0, errp=0x0) at
+> > io/channel-socket.c:497
+> > #2  0x00007f3e03329472 in qio_channel_read (address@hidden,
+> > address@hidden "", address@hidden,
+> > address@hidden) at io/channel.c:97
+> > #3  0x00007f3e032750e0 in channel_get_buffer (opaque=<optimized out>,
+> > buf=0x7f3e05910f38 "", pos=<optimized out>, size=32768) at
+> > migration/qemu-file-channel.c:78
+> > #4  0x00007f3e0327412c in qemu_fill_buffer (f=0x7f3e05910f00) at
+> > migration/qemu-file.c:257
+> > #5  0x00007f3e03274a41 in qemu_peek_byte (address@hidden,
+> > address@hidden) at migration/qemu-file.c:510
+> > #6  0x00007f3e03274aab in qemu_get_byte (address@hidden) at
+> > migration/qemu-file.c:523
+> > #7  0x00007f3e03274cb2 in qemu_get_be32 (address@hidden) at
+> > migration/qemu-file.c:603
+> > #8  0x00007f3e03271735 in colo_receive_message (f=0x7f3e05910f00,
+> > address@hidden) at migration/colo.c:215
+> > #9  0x00007f3e0327250d in colo_wait_handle_message (errp=0x7f3d62bfaa48,
+> > checkpoint_request=<synthetic pointer>, f=<optimized out>) at
+> > migration/colo.c:546
+> > #10 colo_process_incoming_thread (opaque=0x7f3e067245e0) at
+> > migration/colo.c:649
+> > #11 0x00007f3e00cc1df3 in start_thread () from /lib64/libpthread.so.0
+> > #12 0x00007f3dfc9c03ed in clone () from /lib64/libc.so.6
+> >
+> >
+> >
+> >
+> >
+> > --
+> > View this message in context:
+http://qemu.11.n7.nabble.com/COLO-failover-hang-tp473250.html
+> > Sent from the Developer mailing list archive at Nabble.com.
+> >
+> >
+> >
+> >
+>
+> --
+> Thanks
+> Zhang Chen
+>
+>
+>
+>
+>
+
+Hi,
+
+Thanks for reporting this; I confirmed it in my test, and it is a bug.
+
+We tried to call qemu_file_shutdown() to shut down the related fd in case the
+COLO thread/incoming thread is stuck in read()/write() while doing failover,
+but it didn't take effect, because all the fds used by COLO (and migration)
+have been wrapped by a qio channel, and the shutdown API will not be called if
+we didn't qio_channel_set_feature(QIO_CHANNEL(sioc),
+QIO_CHANNEL_FEATURE_SHUTDOWN).
+
+Cc: Dr. David Alan Gilbert <address@hidden>
+
+I suspect migration cancel has the same problem; it may be stuck in write()
+if we try to cancel migration.
+
+void fd_start_outgoing_migration(MigrationState *s, const char *fdname, Error 
+**errp)
+{
+    qio_channel_set_name(QIO_CHANNEL(ioc), "migration-fd-outgoing");
+    migration_channel_connect(s, ioc, NULL);
+    ... ...
+We didn't call qio_channel_set_feature(QIO_CHANNEL(sioc), 
+QIO_CHANNEL_FEATURE_SHUTDOWN) above,
+and the
+migrate_fd_cancel()
+{
+ ... ...
+    if (s->state == MIGRATION_STATUS_CANCELLING && f) {
+        qemu_file_shutdown(f);  --> This will not take effect. No ?
+    }
+}
+
+Thanks,
+Hailiang
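+
+For reference, the mechanism qemu_file_shutdown() relies on can be shown with a
+small standalone program (illustrative only, not QEMU code): a thread blocked in
+recv() on a socket returns as soon as another thread calls shutdown(2) on the
+same fd, which is exactly what never happens while the channel lacks
+QIO_CHANNEL_FEATURE_SHUTDOWN.
+
+/* gcc -pthread shutdown-demo.c -o shutdown-demo */
+#include <pthread.h>
+#include <stdio.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <unistd.h>
+
+static void *reader(void *arg)
+{
+    int fd = *(int *)arg;
+    char buf[16];
+
+    /* Blocks here, like colo_process_incoming_thread blocks in recvmsg(). */
+    ssize_t n = recv(fd, buf, sizeof(buf), 0);
+    printf("recv returned %zd\n", n);   /* 0 == orderly shutdown, reader is unstuck */
+    return NULL;
+}
+
+int main(void)
+{
+    int fds[2];
+    pthread_t tid;
+
+    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) {
+        return 1;
+    }
+    pthread_create(&tid, NULL, reader, &fds[0]);
+    sleep(1);                      /* let the reader block in recv()           */
+    shutdown(fds[0], SHUT_RDWR);   /* what qemu_file_shutdown() is meant to do */
+    pthread_join(tid, NULL);
+    close(fds[0]);
+    close(fds[1]);
+    return 0;
+}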
+
+On 2017/3/21 16:10, address@hidden wrote:
+Thank you.
+
+I have tested already.
+
+When the Primary Node panics, the Secondary Node qemu hangs at the same place.
+
+According to
+http://wiki.qemu-project.org/Features/COLO
+, killing the Primary Node qemu
+will not produce the problem, but a Primary Node panic can.
+
+I think it is because the channel does not support
+QIO_CHANNEL_FEATURE_SHUTDOWN.
+
+
+When doing failover, channel_shutdown() could not shut down the channel,
+
+
+so colo_process_incoming_thread will hang at recvmsg().
+
+
+I tested a patch:
+
+
+diff --git a/migration/socket.c b/migration/socket.c
+index 13966f1..d65a0ea 100644
+--- a/migration/socket.c
++++ b/migration/socket.c
+@@ -147,8 +147,9 @@ static gboolean socket_accept_incoming_migration(QIOChannel *ioc,
+     }
+
+     trace_migration_socket_incoming_accepted();
+
+     qio_channel_set_name(QIO_CHANNEL(sioc), "migration-socket-incoming");
++    qio_channel_set_feature(QIO_CHANNEL(sioc), QIO_CHANNEL_FEATURE_SHUTDOWN);
+     migration_channel_process_incoming(migrate_get_current(),
+                                        QIO_CHANNEL(sioc));
+     object_unref(OBJECT(sioc));
+
+My test will not hang any more.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Original Message
+
+
+
+From: address@hidden
+To: 王广10165992 address@hidden
+Cc: address@hidden address@hidden
+Date: 2017-03-21 15:58
+Subject: Re: [Qemu-devel] Reply: Re: [BUG]COLO failover hang
+
+
+
+
+
+Hi,Wang.
+
+You can test this branch:
+https://github.com/coloft/qemu/tree/colo-v5.1-developing-COLO-frame-v21-with-shared-disk
+and please follow wiki ensure your own configuration correctly.
+http://wiki.qemu-project.org/Features/COLO
+Thanks
+
+Zhang Chen
+
+
+On 03/21/2017 03:27 PM, address@hidden wrote:
+>
+> hi.
+>
+> I tested git qemu master and it has the same problem.
+>
+> (gdb) bt
+>
+> #0  qio_channel_socket_readv (ioc=0x7f65911b4e50, iov=0x7f64ef3fd880,
+> niov=1, fds=0x0, nfds=0x0, errp=0x0) at io/channel-socket.c:461
+>
+> #1  0x00007f658e4aa0c2 in qio_channel_read
+> (address@hidden, address@hidden "",
+> address@hidden, address@hidden) at io/channel.c:114
+>
+> #2  0x00007f658e3ea990 in channel_get_buffer (opaque=<optimized out>,
+> buf=0x7f65907cb838 "", pos=<optimized out>, size=32768) at
+> migration/qemu-file-channel.c:78
+>
+> #3  0x00007f658e3e97fc in qemu_fill_buffer (f=0x7f65907cb800) at
+> migration/qemu-file.c:295
+>
+> #4  0x00007f658e3ea2e1 in qemu_peek_byte (address@hidden,
+> address@hidden) at migration/qemu-file.c:555
+>
+> #5  0x00007f658e3ea34b in qemu_get_byte (address@hidden) at
+> migration/qemu-file.c:568
+>
+> #6  0x00007f658e3ea552 in qemu_get_be32 (address@hidden) at
+> migration/qemu-file.c:648
+>
+> #7  0x00007f658e3e66e5 in colo_receive_message (f=0x7f65907cb800,
+> address@hidden) at migration/colo.c:244
+>
+> #8  0x00007f658e3e681e in colo_receive_check_message (f=<optimized
+> out>, address@hidden,
+> address@hidden)
+>
+>     at migration/colo.c:264
+>
+> #9  0x00007f658e3e740e in colo_process_incoming_thread
+> (opaque=0x7f658eb30360 <mis_current.31286>) at migration/colo.c:577
+>
+> #10 0x00007f658be09df3 in start_thread () from /lib64/libpthread.so.0
+>
+> #11 0x00007f65881983ed in clone () from /lib64/libc.so.6
+>
+> (gdb) p ioc->name
+>
+> $2 = 0x7f658ff7d5c0 "migration-socket-incoming"
+>
+> (gdb) p ioc->features        Do not support QIO_CHANNEL_FEATURE_SHUTDOWN
+>
+> $3 = 0
+>
+>
+> (gdb) bt
+>
+> #0  socket_accept_incoming_migration (ioc=0x7fdcceeafa90,
+> condition=G_IO_IN, opaque=0x7fdcceeafa90) at migration/socket.c:137
+>
+> #1  0x00007fdcc6966350 in g_main_dispatch (context=<optimized out>) at
+> gmain.c:3054
+>
+> #2  g_main_context_dispatch (context=<optimized out>,
+> address@hidden) at gmain.c:3630
+>
+> #3  0x00007fdccb8a6dcc in glib_pollfds_poll () at util/main-loop.c:213
+>
+> #4  os_host_main_loop_wait (timeout=<optimized out>) at
+> util/main-loop.c:258
+>
+> #5  main_loop_wait (address@hidden) at
+> util/main-loop.c:506
+>
+> #6  0x00007fdccb526187 in main_loop () at vl.c:1898
+>
+> #7  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized
+> out>) at vl.c:4709
+>
+> (gdb) p ioc->features
+>
+> $1 = 6
+>
+> (gdb) p ioc->name
+>
+> $2 = 0x7fdcce1b1ab0 "migration-socket-listener"
+>
+>
+> Maybe socket_accept_incoming_migration should
+> call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)??
+>
+>
+> thank you.
+>
+>
+>
+>
+>
+> Original Message
+> address@hidden
+> address@hidden
+> address@hidden@huawei.com>
+> *Date:* 2017-03-16 14:46
+> *Subject:* *Re: [Qemu-devel] COLO failover hang*
+>
+>
+>
+>
+> On 03/15/2017 05:06 PM, wangguang wrote:
+> >   am testing QEMU COLO feature described here [QEMU
+> > Wiki](
+http://wiki.qemu-project.org/Features/COLO
+).
+> >
+> > When the Primary Node panic,the Secondary Node qemu hang.
+> > hang at recvmsg in qio_channel_socket_readv.
+> > And  I run  { 'execute': 'nbd-server-stop' } and { "execute":
+> > "x-colo-lost-heartbeat" } in Secondary VM's
+> > monitor,the  Secondary Node qemu still hang at recvmsg .
+> >
+> > I found that the colo in qemu is not complete yet.
+> > Do the colo have any plan for development?
+>
+> Yes, We are developing. You can see some of patch we pushing.
+>
+> > Has anyone ever run it successfully? Any help is appreciated!
+>
+> In our internal version can run it successfully,
+> The failover detail you can ask Zhanghailiang for help.
+> Next time if you have some question about COLO,
+> please cc me and zhanghailiang address@hidden
+>
+>
+> Thanks
+> Zhang Chen
+>
+>
+> >
+> >
+> >
+> > centos7.2+qemu2.7.50
+> > (gdb) bt
+> > #0  0x00007f3e00cc86ad in recvmsg () from /lib64/libpthread.so.0
+> > #1  0x00007f3e0332b738 in qio_channel_socket_readv (ioc=<optimized out>,
+> > iov=<optimized out>, niov=<optimized out>, fds=0x0, nfds=0x0, errp=0x0) at
+> > io/channel-socket.c:497
+> > #2  0x00007f3e03329472 in qio_channel_read (address@hidden,
+> > address@hidden "", address@hidden,
+> > address@hidden) at io/channel.c:97
+> > #3  0x00007f3e032750e0 in channel_get_buffer (opaque=<optimized out>,
+> > buf=0x7f3e05910f38 "", pos=<optimized out>, size=32768) at
+> > migration/qemu-file-channel.c:78
+> > #4  0x00007f3e0327412c in qemu_fill_buffer (f=0x7f3e05910f00) at
+> > migration/qemu-file.c:257
+> > #5  0x00007f3e03274a41 in qemu_peek_byte (address@hidden,
+> > address@hidden) at migration/qemu-file.c:510
+> > #6  0x00007f3e03274aab in qemu_get_byte (address@hidden) at
+> > migration/qemu-file.c:523
+> > #7  0x00007f3e03274cb2 in qemu_get_be32 (address@hidden) at
+> > migration/qemu-file.c:603
+> > #8  0x00007f3e03271735 in colo_receive_message (f=0x7f3e05910f00,
+> > address@hidden) at migration/colo.c:215
+> > #9  0x00007f3e0327250d in colo_wait_handle_message (errp=0x7f3d62bfaa48,
+> > checkpoint_request=<synthetic pointer>, f=<optimized out>) at
+> > migration/colo.c:546
+> > #10 colo_process_incoming_thread (opaque=0x7f3e067245e0) at
+> > migration/colo.c:649
+> > #11 0x00007f3e00cc1df3 in start_thread () from /lib64/libpthread.so.0
+> > #12 0x00007f3dfc9c03ed in clone () from /lib64/libc.so.6
+> >
+> >
+> >
+> >
+> >
+> > --
+> > View this message in context:
+http://qemu.11.n7.nabble.com/COLO-failover-hang-tp473250.html
+> > Sent from the Developer mailing list archive at Nabble.com.
+> >
+> >
+> >
+> >
+>
+> --
+> Thanks
+> Zhang Chen
+>
+>
+>
+>
+>
+
+* Hailiang Zhang (address@hidden) wrote:
+>
+Hi,
+>
+>
+Thanks for reporting this, and i confirmed it in my test, and it is a bug.
+>
+>
+Though we tried to call qemu_file_shutdown() to shutdown the related fd, in
+>
+case COLO thread/incoming thread is stuck in read/write() while do failover,
+>
+but it didn't take effect, because all the fd used by COLO (also migration)
+>
+has been wrapped by qio channel, and it will not call the shutdown API if
+>
+we didn't qio_channel_set_feature(QIO_CHANNEL(sioc),
+>
+QIO_CHANNEL_FEATURE_SHUTDOWN).
+>
+>
+Cc: Dr. David Alan Gilbert <address@hidden>
+>
+>
+I doubted migration cancel has the same problem, it may be stuck in write()
+>
+if we tried to cancel migration.
+>
+>
+void fd_start_outgoing_migration(MigrationState *s, const char *fdname, Error
+>
+**errp)
+>
+{
+>
+qio_channel_set_name(QIO_CHANNEL(ioc), "migration-fd-outgoing");
+>
+migration_channel_connect(s, ioc, NULL);
+>
+... ...
+>
+We didn't call qio_channel_set_feature(QIO_CHANNEL(sioc),
+>
+QIO_CHANNEL_FEATURE_SHUTDOWN) above,
+>
+and the
+>
+migrate_fd_cancel()
+>
+{
+>
+... ...
+>
+if (s->state == MIGRATION_STATUS_CANCELLING && f) {
+>
+qemu_file_shutdown(f);  --> This will not take effect. No ?
+>
+}
+>
+}
+(cc'd in Daniel Berrange).
+I see that we call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN); 
+at the
+top of qio_channel_socket_new; so I think that's safe, isn't it?
+
+Dave
+
+>
+Thanks,
+>
+Hailiang
+>
+>
+On 2017/3/21 16:10, address@hidden wrote:
+>
+> Thank you.
+>
+>
+>
+> I have tested already.
+>
+>
+>
+> When the Primary Node panics, the Secondary Node qemu hangs at the same place.
+>
+>
+>
+> According to
+http://wiki.qemu-project.org/Features/COLO
+, killing the Primary Node
+>
+> qemu will not produce the problem, but a Primary Node panic can.
+>
+>
+>
+> I think due to the feature of channel does not support
+>
+> QIO_CHANNEL_FEATURE_SHUTDOWN.
+>
+>
+>
+>
+>
+> when failover,channel_shutdown could not shut down the channel.
+>
+>
+>
+>
+>
+> so the colo_process_incoming_thread will hang at recvmsg.
+>
+>
+>
+>
+>
+> I test a patch:
+>
+>
+>
+>
+>
+> diff --git a/migration/socket.c b/migration/socket.c
+>
+>
+>
+>
+>
+> index 13966f1..d65a0ea 100644
+>
+>
+>
+>
+>
+> --- a/migration/socket.c
+>
+>
+>
+>
+>
+> +++ b/migration/socket.c
+>
+>
+>
+>
+>
+> @@ -147,8 +147,9 @@ static gboolean
+>
+> socket_accept_incoming_migration(QIOChannel *ioc,
+>
+>
+>
+>
+>
+>       }
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>       trace_migration_socket_incoming_accepted()
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>       qio_channel_set_name(QIO_CHANNEL(sioc), "migration-socket-incoming")
+>
+>
+>
+>
+>
+> +    qio_channel_set_feature(QIO_CHANNEL(sioc),
+>
+> QIO_CHANNEL_FEATURE_SHUTDOWN)
+>
+>
+>
+>
+>
+>       migration_channel_process_incoming(migrate_get_current(),
+>
+>
+>
+>
+>
+>                                          QIO_CHANNEL(sioc))
+>
+>
+>
+>
+>
+>       object_unref(OBJECT(sioc))
+>
+>
+>
+>
+>
+>
+>
+>
+>
+> My test will not hang any more.
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+> Original Message
+>
+>
+>
+>
+>
+>
+>
+> From: address@hidden
+>
+> To: 王广10165992 address@hidden
+>
+> Cc: address@hidden address@hidden
+>
+> Date: 2017-03-21 15:58
+>
+> Subject: Re: [Qemu-devel] Reply: Re: [BUG]COLO failover hang
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+>
+> Hi,Wang.
+>
+>
+>
+> You can test this branch:
+>
+>
+>
+>
+https://github.com/coloft/qemu/tree/colo-v5.1-developing-COLO-frame-v21-with-shared-disk
+>
+>
+>
+> and please follow wiki ensure your own configuration correctly.
+>
+>
+>
+>
+http://wiki.qemu-project.org/Features/COLO
+>
+>
+>
+>
+>
+> Thanks
+>
+>
+>
+> Zhang Chen
+>
+>
+>
+>
+>
+> On 03/21/2017 03:27 PM, address@hidden wrote:
+>
+> >
+>
+> > hi.
+>
+> >
+>
+> > I test the git qemu master have the same problem.
+>
+> >
+>
+> > (gdb) bt
+>
+> >
+>
+> > #0  qio_channel_socket_readv (ioc=0x7f65911b4e50, iov=0x7f64ef3fd880,
+>
+> > niov=1, fds=0x0, nfds=0x0, errp=0x0) at io/channel-socket.c:461
+>
+> >
+>
+> > #1  0x00007f658e4aa0c2 in qio_channel_read
+>
+> > (address@hidden, address@hidden "",
+>
+> > address@hidden, address@hidden) at io/channel.c:114
+>
+> >
+>
+> > #2  0x00007f658e3ea990 in channel_get_buffer (opaque=<optimized out>,
+>
+> > buf=0x7f65907cb838 "", pos=<optimized out>, size=32768) at
+>
+> > migration/qemu-file-channel.c:78
+>
+> >
+>
+> > #3  0x00007f658e3e97fc in qemu_fill_buffer (f=0x7f65907cb800) at
+>
+> > migration/qemu-file.c:295
+>
+> >
+>
+> > #4  0x00007f658e3ea2e1 in qemu_peek_byte (address@hidden,
+>
+> > address@hidden) at migration/qemu-file.c:555
+>
+> >
+>
+> > #5  0x00007f658e3ea34b in qemu_get_byte (address@hidden) at
+>
+> > migration/qemu-file.c:568
+>
+> >
+>
+> > #6  0x00007f658e3ea552 in qemu_get_be32 (address@hidden) at
+>
+> > migration/qemu-file.c:648
+>
+> >
+>
+> > #7  0x00007f658e3e66e5 in colo_receive_message (f=0x7f65907cb800,
+>
+> > address@hidden) at migration/colo.c:244
+>
+> >
+>
+> > #8  0x00007f658e3e681e in colo_receive_check_message (f=<optimized
+>
+> > out>, address@hidden,
+>
+> > address@hidden)
+>
+> >
+>
+> >     at migration/colo.c:264
+>
+> >
+>
+> > #9  0x00007f658e3e740e in colo_process_incoming_thread
+>
+> > (opaque=0x7f658eb30360 <mis_current.31286>) at migration/colo.c:577
+>
+> >
+>
+> > #10 0x00007f658be09df3 in start_thread () from /lib64/libpthread.so.0
+>
+> >
+>
+> > #11 0x00007f65881983ed in clone () from /lib64/libc.so.6
+>
+> >
+>
+> > (gdb) p ioc->name
+>
+> >
+>
+> > $2 = 0x7f658ff7d5c0 "migration-socket-incoming"
+>
+> >
+>
+> > (gdb) p ioc->features        Do not support QIO_CHANNEL_FEATURE_SHUTDOWN
+>
+> >
+>
+> > $3 = 0
+>
+> >
+>
+> >
+>
+> > (gdb) bt
+>
+> >
+>
+> > #0  socket_accept_incoming_migration (ioc=0x7fdcceeafa90,
+>
+> > condition=G_IO_IN, opaque=0x7fdcceeafa90) at migration/socket.c:137
+>
+> >
+>
+> > #1  0x00007fdcc6966350 in g_main_dispatch (context=<optimized out>) at
+>
+> > gmain.c:3054
+>
+> >
+>
+> > #2  g_main_context_dispatch (context=<optimized out>,
+>
+> > address@hidden) at gmain.c:3630
+>
+> >
+>
+> > #3  0x00007fdccb8a6dcc in glib_pollfds_poll () at util/main-loop.c:213
+>
+> >
+>
+> > #4  os_host_main_loop_wait (timeout=<optimized out>) at
+>
+> > util/main-loop.c:258
+>
+> >
+>
+> > #5  main_loop_wait (address@hidden) at
+>
+> > util/main-loop.c:506
+>
+> >
+>
+> > #6  0x00007fdccb526187 in main_loop () at vl.c:1898
+>
+> >
+>
+> > #7  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized
+>
+> > out>) at vl.c:4709
+>
+> >
+>
+> > (gdb) p ioc->features
+>
+> >
+>
+> > $1 = 6
+>
+> >
+>
+> > (gdb) p ioc->name
+>
+> >
+>
+> > $2 = 0x7fdcce1b1ab0 "migration-socket-listener"
+>
+> >
+>
+> >
+>
+> > May be socket_accept_incoming_migration should
+>
+> > call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)??
+>
+> >
+>
+> >
+>
+> > thank you.
+>
+> >
+>
+> >
+>
+> >
+>
+> >
+>
+> >
+>
+> > Original Message
+>
+> > address@hidden
+>
+> > address@hidden
+>
+> > address@hidden@huawei.com>
+>
+> > *Date:* 2017-03-16 14:46
+>
+> > *Subject:* *Re: [Qemu-devel] COLO failover hang*
+>
+> >
+>
+> >
+>
+> >
+>
+> >
+>
+> > On 03/15/2017 05:06 PM, wangguang wrote:
+>
+> > >   am testing QEMU COLO feature described here [QEMU
+>
+> > > Wiki](
+http://wiki.qemu-project.org/Features/COLO
+).
+>
+> > >
+>
+> > > When the Primary Node panic,the Secondary Node qemu hang.
+>
+> > > hang at recvmsg in qio_channel_socket_readv.
+>
+> > > And  I run  { 'execute': 'nbd-server-stop' } and { "execute":
+>
+> > > "x-colo-lost-heartbeat" } in Secondary VM's
+>
+> > > monitor,the  Secondary Node qemu still hang at recvmsg .
+>
+> > >
+>
+> > > I found that the colo in qemu is not complete yet.
+>
+> > > Do the colo have any plan for development?
+>
+> >
+>
+> > Yes, We are developing. You can see some of patch we pushing.
+>
+> >
+>
+> > > Has anyone ever run it successfully? Any help is appreciated!
+>
+> >
+>
+> > In our internal version can run it successfully,
+>
+> > The failover detail you can ask Zhanghailiang for help.
+>
+> > Next time if you have some question about COLO,
+>
+> > please cc me and zhanghailiang address@hidden
+>
+> >
+>
+> >
+>
+> > Thanks
+>
+> > Zhang Chen
+>
+> >
+>
+> >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > centos7.2+qemu2.7.50
+>
+> > > (gdb) bt
+>
+> > > #0  0x00007f3e00cc86ad in recvmsg () from /lib64/libpthread.so.0
+>
+> > > #1  0x00007f3e0332b738 in qio_channel_socket_readv (ioc=<optimized out>,
+>
+> > > iov=<optimized out>, niov=<optimized out>, fds=0x0, nfds=0x0, errp=0x0)
+>
+> at
+>
+> > > io/channel-socket.c:497
+>
+> > > #2  0x00007f3e03329472 in qio_channel_read (address@hidden,
+>
+> > > address@hidden "", address@hidden,
+>
+> > > address@hidden) at io/channel.c:97
+>
+> > > #3  0x00007f3e032750e0 in channel_get_buffer (opaque=<optimized out>,
+>
+> > > buf=0x7f3e05910f38 "", pos=<optimized out>, size=32768) at
+>
+> > > migration/qemu-file-channel.c:78
+>
+> > > #4  0x00007f3e0327412c in qemu_fill_buffer (f=0x7f3e05910f00) at
+>
+> > > migration/qemu-file.c:257
+>
+> > > #5  0x00007f3e03274a41 in qemu_peek_byte (address@hidden,
+>
+> > > address@hidden) at migration/qemu-file.c:510
+>
+> > > #6  0x00007f3e03274aab in qemu_get_byte (address@hidden) at
+>
+> > > migration/qemu-file.c:523
+>
+> > > #7  0x00007f3e03274cb2 in qemu_get_be32 (address@hidden) at
+>
+> > > migration/qemu-file.c:603
+>
+> > > #8  0x00007f3e03271735 in colo_receive_message (f=0x7f3e05910f00,
+>
+> > > address@hidden) at migration/colo.c:215
+>
+> > > #9  0x00007f3e0327250d in colo_wait_handle_message (errp=0x7f3d62bfaa48,
+>
+> > > checkpoint_request=<synthetic pointer>, f=<optimized out>) at
+>
+> > > migration/colo.c:546
+>
+> > > #10 colo_process_incoming_thread (opaque=0x7f3e067245e0) at
+>
+> > > migration/colo.c:649
+>
+> > > #11 0x00007f3e00cc1df3 in start_thread () from /lib64/libpthread.so.0
+>
+> > > #12 0x00007f3dfc9c03ed in clone () from /lib64/libc.so.6
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > --
+>
+> > > View this message in context:
+>
+>
+http://qemu.11.n7.nabble.com/COLO-failover-hang-tp473250.html
+>
+> > > Sent from the Developer mailing list archive at Nabble.com.
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> >
+>
+> > --
+>
+> > Thanks
+>
+> > Zhang Chen
+>
+> >
+>
+> >
+>
+> >
+>
+> >
+>
+> >
+>
+>
+>
+--
+Dr. David Alan Gilbert / address@hidden / Manchester, UK
+
+On 2017/3/21 19:56, Dr. David Alan Gilbert wrote:
+* Hailiang Zhang (address@hidden) wrote:
+Hi,
+
+Thanks for reporting this; I confirmed it in my test, and it is a bug.
+
+We tried to call qemu_file_shutdown() to shut down the related fd in case the
+COLO thread/incoming thread is stuck in read()/write() while doing failover,
+but it didn't take effect, because all the fds used by COLO (and migration)
+have been wrapped by a qio channel, and the shutdown API will not be called if
+we didn't qio_channel_set_feature(QIO_CHANNEL(sioc),
+QIO_CHANNEL_FEATURE_SHUTDOWN).
+
+Cc: Dr. David Alan Gilbert <address@hidden>
+
+I doubted migration cancel has the same problem, it may be stuck in write()
+if we tried to cancel migration.
+
+void fd_start_outgoing_migration(MigrationState *s, const char *fdname, Error 
+**errp)
+{
+     qio_channel_set_name(QIO_CHANNEL(ioc), "migration-fd-outgoing");
+     migration_channel_connect(s, ioc, NULL);
+     ... ...
+We didn't call qio_channel_set_feature(QIO_CHANNEL(sioc), 
+QIO_CHANNEL_FEATURE_SHUTDOWN) above,
+and the
+migrate_fd_cancel()
+{
+  ... ...
+     if (s->state == MIGRATION_STATUS_CANCELLING && f) {
+         qemu_file_shutdown(f);  --> This will not take effect. No ?
+     }
+}
+(cc'd in Daniel Berrange).
+I see that we call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN); 
+at the
+top of qio_channel_socket_new;  so I think that's safe isn't it?
+Hmm, you are right, this problem only exists for the migration incoming fd,
+thanks.
+Dave
+Thanks,
+Hailiang
+
+On 2017/3/21 16:10, address@hidden wrote:
+Thank you.
+
+I have tested already.
+
+When the Primary Node panics, the Secondary Node qemu hangs at the same place.
+
+According to
+http://wiki.qemu-project.org/Features/COLO
+, killing the Primary Node qemu
+will not produce the problem, but a Primary Node panic can.
+
+I think it is because the channel does not support
+QIO_CHANNEL_FEATURE_SHUTDOWN.
+
+
+When doing failover, channel_shutdown() could not shut down the channel,
+
+
+so colo_process_incoming_thread will hang at recvmsg().
+
+
+I tested a patch:
+
+
+diff --git a/migration/socket.c b/migration/socket.c
+index 13966f1..d65a0ea 100644
+--- a/migration/socket.c
++++ b/migration/socket.c
+@@ -147,8 +147,9 @@ static gboolean socket_accept_incoming_migration(QIOChannel *ioc,
+     }
+
+     trace_migration_socket_incoming_accepted();
+
+     qio_channel_set_name(QIO_CHANNEL(sioc), "migration-socket-incoming");
++    qio_channel_set_feature(QIO_CHANNEL(sioc), QIO_CHANNEL_FEATURE_SHUTDOWN);
+     migration_channel_process_incoming(migrate_get_current(),
+                                        QIO_CHANNEL(sioc));
+     object_unref(OBJECT(sioc));
+
+My test will not hang any more.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Original Message
+
+
+
+From: address@hidden
+To: 王广10165992 address@hidden
+Cc: address@hidden address@hidden
+Date: 2017-03-21 15:58
+Subject: Re: [Qemu-devel] Reply: Re: [BUG]COLO failover hang
+
+
+
+
+
+Hi,Wang.
+
+You can test this branch:
+https://github.com/coloft/qemu/tree/colo-v5.1-developing-COLO-frame-v21-with-shared-disk
+and please follow wiki ensure your own configuration correctly.
+http://wiki.qemu-project.org/Features/COLO
+Thanks
+
+Zhang Chen
+
+
+On 03/21/2017 03:27 PM, address@hidden wrote:
+>
+> hi.
+>
+> I tested git qemu master and it has the same problem.
+>
+> (gdb) bt
+>
+> #0  qio_channel_socket_readv (ioc=0x7f65911b4e50, iov=0x7f64ef3fd880,
+> niov=1, fds=0x0, nfds=0x0, errp=0x0) at io/channel-socket.c:461
+>
+> #1  0x00007f658e4aa0c2 in qio_channel_read
+> (address@hidden, address@hidden "",
+> address@hidden, address@hidden) at io/channel.c:114
+>
+> #2  0x00007f658e3ea990 in channel_get_buffer (opaque=<optimized out>,
+> buf=0x7f65907cb838 "", pos=<optimized out>, size=32768) at
+> migration/qemu-file-channel.c:78
+>
+> #3  0x00007f658e3e97fc in qemu_fill_buffer (f=0x7f65907cb800) at
+> migration/qemu-file.c:295
+>
+> #4  0x00007f658e3ea2e1 in qemu_peek_byte (address@hidden,
+> address@hidden) at migration/qemu-file.c:555
+>
+> #5  0x00007f658e3ea34b in qemu_get_byte (address@hidden) at
+> migration/qemu-file.c:568
+>
+> #6  0x00007f658e3ea552 in qemu_get_be32 (address@hidden) at
+> migration/qemu-file.c:648
+>
+> #7  0x00007f658e3e66e5 in colo_receive_message (f=0x7f65907cb800,
+> address@hidden) at migration/colo.c:244
+>
+> #8  0x00007f658e3e681e in colo_receive_check_message (f=<optimized
+> out>, address@hidden,
+> address@hidden)
+>
+>     at migration/colo.c:264
+>
+> #9  0x00007f658e3e740e in colo_process_incoming_thread
+> (opaque=0x7f658eb30360 <mis_current.31286>) at migration/colo.c:577
+>
+> #10 0x00007f658be09df3 in start_thread () from /lib64/libpthread.so.0
+>
+> #11 0x00007f65881983ed in clone () from /lib64/libc.so.6
+>
+> (gdb) p ioc->name
+>
+> $2 = 0x7f658ff7d5c0 "migration-socket-incoming"
+>
+> (gdb) p ioc->features        Do not support QIO_CHANNEL_FEATURE_SHUTDOWN
+>
+> $3 = 0
+>
+>
+> (gdb) bt
+>
+> #0  socket_accept_incoming_migration (ioc=0x7fdcceeafa90,
+> condition=G_IO_IN, opaque=0x7fdcceeafa90) at migration/socket.c:137
+>
+> #1  0x00007fdcc6966350 in g_main_dispatch (context=<optimized out>) at
+> gmain.c:3054
+>
+> #2  g_main_context_dispatch (context=<optimized out>,
+> address@hidden) at gmain.c:3630
+>
+> #3  0x00007fdccb8a6dcc in glib_pollfds_poll () at util/main-loop.c:213
+>
+> #4  os_host_main_loop_wait (timeout=<optimized out>) at
+> util/main-loop.c:258
+>
+> #5  main_loop_wait (address@hidden) at
+> util/main-loop.c:506
+>
+> #6  0x00007fdccb526187 in main_loop () at vl.c:1898
+>
+> #7  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized
+> out>) at vl.c:4709
+>
+> (gdb) p ioc->features
+>
+> $1 = 6
+>
+> (gdb) p ioc->name
+>
+> $2 = 0x7fdcce1b1ab0 "migration-socket-listener"
+>
+>
+> Maybe socket_accept_incoming_migration should
+> call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)??
+>
+>
+> thank you.
+>
+>
+>
+>
+>
+> Original Message
+> address@hidden
+> address@hidden
+> address@hidden@huawei.com>
+> *Date:* 2017-03-16 14:46
+> *Subject:* *Re: [Qemu-devel] COLO failover hang*
+>
+>
+>
+>
+> On 03/15/2017 05:06 PM, wangguang wrote:
+> >   am testing QEMU COLO feature described here [QEMU
+> > Wiki](
+http://wiki.qemu-project.org/Features/COLO
+).
+> >
+> > When the Primary Node panic,the Secondary Node qemu hang.
+> > hang at recvmsg in qio_channel_socket_readv.
+> > And  I run  { 'execute': 'nbd-server-stop' } and { "execute":
+> > "x-colo-lost-heartbeat" } in Secondary VM's
+> > monitor,the  Secondary Node qemu still hang at recvmsg .
+> >
+> > I found that the colo in qemu is not complete yet.
+> > Do the colo have any plan for development?
+>
+> Yes, We are developing. You can see some of patch we pushing.
+>
+> > Has anyone ever run it successfully? Any help is appreciated!
+>
+> In our internal version can run it successfully,
+> The failover detail you can ask Zhanghailiang for help.
+> Next time if you have some question about COLO,
+> please cc me and zhanghailiang address@hidden
+>
+>
+> Thanks
+> Zhang Chen
+>
+>
+> >
+> >
+> >
+> > centos7.2+qemu2.7.50
+> > (gdb) bt
+> > #0  0x00007f3e00cc86ad in recvmsg () from /lib64/libpthread.so.0
+> > #1  0x00007f3e0332b738 in qio_channel_socket_readv (ioc=<optimized out>,
+> > iov=<optimized out>, niov=<optimized out>, fds=0x0, nfds=0x0, errp=0x0) at
+> > io/channel-socket.c:497
+> > #2  0x00007f3e03329472 in qio_channel_read (address@hidden,
+> > address@hidden "", address@hidden,
+> > address@hidden) at io/channel.c:97
+> > #3  0x00007f3e032750e0 in channel_get_buffer (opaque=<optimized out>,
+> > buf=0x7f3e05910f38 "", pos=<optimized out>, size=32768) at
+> > migration/qemu-file-channel.c:78
+> > #4  0x00007f3e0327412c in qemu_fill_buffer (f=0x7f3e05910f00) at
+> > migration/qemu-file.c:257
+> > #5  0x00007f3e03274a41 in qemu_peek_byte (address@hidden,
+> > address@hidden) at migration/qemu-file.c:510
+> > #6  0x00007f3e03274aab in qemu_get_byte (address@hidden) at
+> > migration/qemu-file.c:523
+> > #7  0x00007f3e03274cb2 in qemu_get_be32 (address@hidden) at
+> > migration/qemu-file.c:603
+> > #8  0x00007f3e03271735 in colo_receive_message (f=0x7f3e05910f00,
+> > address@hidden) at migration/colo.c:215
+> > #9  0x00007f3e0327250d in colo_wait_handle_message (errp=0x7f3d62bfaa48,
+> > checkpoint_request=<synthetic pointer>, f=<optimized out>) at
+> > migration/colo.c:546
+> > #10 colo_process_incoming_thread (opaque=0x7f3e067245e0) at
+> > migration/colo.c:649
+> > #11 0x00007f3e00cc1df3 in start_thread () from /lib64/libpthread.so.0
+> > #12 0x00007f3dfc9c03ed in clone () from /lib64/libc.so.6
+> >
+> >
+> >
+> >
+> >
+> > --
+> > View this message in context:
+http://qemu.11.n7.nabble.com/COLO-failover-hang-tp473250.html
+> > Sent from the Developer mailing list archive at Nabble.com.
+> >
+> >
+> >
+> >
+>
+> --
+> Thanks
+> Zhang Chen
+>
+>
+>
+>
+>
+--
+Dr. David Alan Gilbert / address@hidden / Manchester, UK
+
+.
+
+* Hailiang Zhang (address@hidden) wrote:
+>
+On 2017/3/21 19:56, Dr. David Alan Gilbert wrote:
+>
+> * Hailiang Zhang (address@hidden) wrote:
+>
+> > Hi,
+>
+> >
+>
+> > Thanks for reporting this, and i confirmed it in my test, and it is a bug.
+>
+> >
+>
+> > Though we tried to call qemu_file_shutdown() to shutdown the related fd,
+>
+> > in
+>
+> > case COLO thread/incoming thread is stuck in read/write() while do
+>
+> > failover,
+>
+> > but it didn't take effect, because all the fd used by COLO (also
+>
+> > migration)
+>
+> > has been wrapped by qio channel, and it will not call the shutdown API if
+>
+> > we didn't qio_channel_set_feature(QIO_CHANNEL(sioc),
+>
+> > QIO_CHANNEL_FEATURE_SHUTDOWN).
+>
+> >
+>
+> > Cc: Dr. David Alan Gilbert <address@hidden>
+>
+> >
+>
+> > I doubted migration cancel has the same problem, it may be stuck in
+>
+> > write()
+>
+> > if we tried to cancel migration.
+>
+> >
+>
+> > void fd_start_outgoing_migration(MigrationState *s, const char *fdname,
+>
+> > Error **errp)
+>
+> > {
+>
+> >      qio_channel_set_name(QIO_CHANNEL(ioc), "migration-fd-outgoing");
+>
+> >      migration_channel_connect(s, ioc, NULL);
+>
+> >      ... ...
+>
+> > We didn't call qio_channel_set_feature(QIO_CHANNEL(sioc),
+>
+> > QIO_CHANNEL_FEATURE_SHUTDOWN) above,
+>
+> > and the
+>
+> > migrate_fd_cancel()
+>
+> > {
+>
+> >   ... ...
+>
+> >      if (s->state == MIGRATION_STATUS_CANCELLING && f) {
+>
+> >          qemu_file_shutdown(f);  --> This will not take effect. No ?
+>
+> >      }
+>
+> > }
+>
+>
+>
+> (cc'd in Daniel Berrange).
+>
+> I see that we call qio_channel_set_feature(ioc,
+>
+> QIO_CHANNEL_FEATURE_SHUTDOWN); at the
+>
+> top of qio_channel_socket_new;  so I think that's safe isn't it?
+>
+>
+>
+>
+Hmm, you are right, this problem is only exist for the migration incoming fd,
+>
+thanks.
+Yes, and I don't think we normally do a cancel on the incoming side of a 
+migration.
+
+Dave
+
+>
+> Dave
+>
+>
+>
+> > Thanks,
+>
+> > Hailiang
+>
+> >
+>
+> > On 2017/3/21 16:10, address@hidden wrote:
+>
+> > > Thank you。
+>
+> > >
+>
+> > > I have test aready。
+>
+> > >
+>
+> > > When the Primary Node panic,the Secondary Node qemu hang at the same
+>
+> > > place。
+>
+> > >
+>
+> > > Incorrding
+http://wiki.qemu-project.org/Features/COLO
+,kill Primary
+>
+> > > Node qemu will not produce the problem,but Primary Node panic can。
+>
+> > >
+>
+> > > I think due to the feature of channel does not support
+>
+> > > QIO_CHANNEL_FEATURE_SHUTDOWN.
+>
+> > >
+>
+> > >
+>
+> > > when failover,channel_shutdown could not shut down the channel.
+>
+> > >
+>
+> > >
+>
+> > > so the colo_process_incoming_thread will hang at recvmsg.
+>
+> > >
+>
+> > >
+>
+> > > I test a patch:
+>
+> > >
+>
+> > >
+>
+> > > diff --git a/migration/socket.c b/migration/socket.c
+>
+> > >
+>
+> > >
+>
+> > > index 13966f1..d65a0ea 100644
+>
+> > >
+>
+> > >
+>
+> > > --- a/migration/socket.c
+>
+> > >
+>
+> > >
+>
+> > > +++ b/migration/socket.c
+>
+> > >
+>
+> > >
+>
+> > > @@ -147,8 +147,9 @@ static gboolean
+>
+> > > socket_accept_incoming_migration(QIOChannel *ioc,
+>
+> > >
+>
+> > >
+>
+> > >        }
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >        trace_migration_socket_incoming_accepted()
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >        qio_channel_set_name(QIO_CHANNEL(sioc),
+>
+> > > "migration-socket-incoming")
+>
+> > >
+>
+> > >
+>
+> > > +    qio_channel_set_feature(QIO_CHANNEL(sioc),
+>
+> > > QIO_CHANNEL_FEATURE_SHUTDOWN)
+>
+> > >
+>
+> > >
+>
+> > >        migration_channel_process_incoming(migrate_get_current(),
+>
+> > >
+>
+> > >
+>
+> > >                                           QIO_CHANNEL(sioc))
+>
+> > >
+>
+> > >
+>
+> > >        object_unref(OBJECT(sioc))
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > My test will not hang any more.
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > Original Message
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > From: address@hidden
+>
+> > > To: 王广10165992 address@hidden
+>
+> > > Cc: address@hidden address@hidden
+>
+> > > Date: 2017-03-21 15:58
+>
+> > > Subject: Re: [Qemu-devel] Reply: Re: [BUG]COLO failover hang
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > Hi,Wang.
+>
+> > >
+>
+> > > You can test this branch:
+>
+> > >
+>
+> > >
+https://github.com/coloft/qemu/tree/colo-v5.1-developing-COLO-frame-v21-with-shared-disk
+>
+> > >
+>
+> > > and please follow wiki ensure your own configuration correctly.
+>
+> > >
+>
+> > >
+http://wiki.qemu-project.org/Features/COLO
+>
+> > >
+>
+> > >
+>
+> > > Thanks
+>
+> > >
+>
+> > > Zhang Chen
+>
+> > >
+>
+> > >
+>
+> > > On 03/21/2017 03:27 PM, address@hidden wrote:
+>
+> > > >
+>
+> > > > hi.
+>
+> > > >
+>
+> > > > I test the git qemu master have the same problem.
+>
+> > > >
+>
+> > > > (gdb) bt
+>
+> > > >
+>
+> > > > #0  qio_channel_socket_readv (ioc=0x7f65911b4e50, iov=0x7f64ef3fd880,
+>
+> > > > niov=1, fds=0x0, nfds=0x0, errp=0x0) at io/channel-socket.c:461
+>
+> > > >
+>
+> > > > #1  0x00007f658e4aa0c2 in qio_channel_read
+>
+> > > > (address@hidden, address@hidden "",
+>
+> > > > address@hidden, address@hidden) at io/channel.c:114
+>
+> > > >
+>
+> > > > #2  0x00007f658e3ea990 in channel_get_buffer (opaque=<optimized out>,
+>
+> > > > buf=0x7f65907cb838 "", pos=<optimized out>, size=32768) at
+>
+> > > > migration/qemu-file-channel.c:78
+>
+> > > >
+>
+> > > > #3  0x00007f658e3e97fc in qemu_fill_buffer (f=0x7f65907cb800) at
+>
+> > > > migration/qemu-file.c:295
+>
+> > > >
+>
+> > > > #4  0x00007f658e3ea2e1 in qemu_peek_byte (address@hidden,
+>
+> > > > address@hidden) at migration/qemu-file.c:555
+>
+> > > >
+>
+> > > > #5  0x00007f658e3ea34b in qemu_get_byte (address@hidden) at
+>
+> > > > migration/qemu-file.c:568
+>
+> > > >
+>
+> > > > #6  0x00007f658e3ea552 in qemu_get_be32 (address@hidden) at
+>
+> > > > migration/qemu-file.c:648
+>
+> > > >
+>
+> > > > #7  0x00007f658e3e66e5 in colo_receive_message (f=0x7f65907cb800,
+>
+> > > > address@hidden) at migration/colo.c:244
+>
+> > > >
+>
+> > > > #8  0x00007f658e3e681e in colo_receive_check_message (f=<optimized
+>
+> > > > out>, address@hidden,
+>
+> > > > address@hidden)
+>
+> > > >
+>
+> > > >     at migration/colo.c:264
+>
+> > > >
+>
+> > > > #9  0x00007f658e3e740e in colo_process_incoming_thread
+>
+> > > > (opaque=0x7f658eb30360 <mis_current.31286>) at migration/colo.c:577
+>
+> > > >
+>
+> > > > #10 0x00007f658be09df3 in start_thread () from /lib64/libpthread.so.0
+>
+> > > >
+>
+> > > > #11 0x00007f65881983ed in clone () from /lib64/libc.so.6
+>
+> > > >
+>
+> > > > (gdb) p ioc->name
+>
+> > > >
+>
+> > > > $2 = 0x7f658ff7d5c0 "migration-socket-incoming"
+>
+> > > >
+>
+> > > > (gdb) p ioc->features        Do not support
+>
+> > > QIO_CHANNEL_FEATURE_SHUTDOWN
+>
+> > > >
+>
+> > > > $3 = 0
+>
+> > > >
+>
+> > > >
+>
+> > > > (gdb) bt
+>
+> > > >
+>
+> > > > #0  socket_accept_incoming_migration (ioc=0x7fdcceeafa90,
+>
+> > > > condition=G_IO_IN, opaque=0x7fdcceeafa90) at migration/socket.c:137
+>
+> > > >
+>
+> > > > #1  0x00007fdcc6966350 in g_main_dispatch (context=<optimized out>) at
+>
+> > > > gmain.c:3054
+>
+> > > >
+>
+> > > > #2  g_main_context_dispatch (context=<optimized out>,
+>
+> > > > address@hidden) at gmain.c:3630
+>
+> > > >
+>
+> > > > #3  0x00007fdccb8a6dcc in glib_pollfds_poll () at util/main-loop.c:213
+>
+> > > >
+>
+> > > > #4  os_host_main_loop_wait (timeout=<optimized out>) at
+>
+> > > > util/main-loop.c:258
+>
+> > > >
+>
+> > > > #5  main_loop_wait (address@hidden) at
+>
+> > > > util/main-loop.c:506
+>
+> > > >
+>
+> > > > #6  0x00007fdccb526187 in main_loop () at vl.c:1898
+>
+> > > >
+>
+> > > > #7  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized
+>
+> > > > out>) at vl.c:4709
+>
+> > > >
+>
+> > > > (gdb) p ioc->features
+>
+> > > >
+>
+> > > > $1 = 6
+>
+> > > >
+>
+> > > > (gdb) p ioc->name
+>
+> > > >
+>
+> > > > $2 = 0x7fdcce1b1ab0 "migration-socket-listener"
+>
+> > > >
+>
+> > > >
+>
+> > > > May be socket_accept_incoming_migration should
+>
+> > > > call qio_channel_set_feature(ioc, QIO_CHANNEL_FEATURE_SHUTDOWN)??
+>
+> > > >
+>
+> > > >
+>
+> > > > thank you.
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > > Original Message
+>
+> > > > address@hidden
+>
+> > > > address@hidden
+>
+> > > > address@hidden@huawei.com>
+>
+> > > > *Date:* 2017-03-16 14:46
+>
+> > > > *Subject:* *Re: [Qemu-devel] COLO failover hang*
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > > On 03/15/2017 05:06 PM, wangguang wrote:
+>
+> > > > >   am testing QEMU COLO feature described here [QEMU
+>
+> > > > > Wiki](
+http://wiki.qemu-project.org/Features/COLO
+).
+>
+> > > > >
+>
+> > > > > When the Primary Node panic,the Secondary Node qemu hang.
+>
+> > > > > hang at recvmsg in qio_channel_socket_readv.
+>
+> > > > > And  I run  { 'execute': 'nbd-server-stop' } and { "execute":
+>
+> > > > > "x-colo-lost-heartbeat" } in Secondary VM's
+>
+> > > > > monitor,the  Secondary Node qemu still hang at recvmsg .
+>
+> > > > >
+>
+> > > > > I found that the colo in qemu is not complete yet.
+>
+> > > > > Do the colo have any plan for development?
+>
+> > > >
+>
+> > > > Yes, We are developing. You can see some of patch we pushing.
+>
+> > > >
+>
+> > > > > Has anyone ever run it successfully? Any help is appreciated!
+>
+> > > >
+>
+> > > > In our internal version can run it successfully,
+>
+> > > > The failover detail you can ask Zhanghailiang for help.
+>
+> > > > Next time if you have some question about COLO,
+>
+> > > > please cc me and zhanghailiang address@hidden
+>
+> > > >
+>
+> > > >
+>
+> > > > Thanks
+>
+> > > > Zhang Chen
+>
+> > > >
+>
+> > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > centos7.2+qemu2.7.50
+>
+> > > > > (gdb) bt
+>
+> > > > > #0  0x00007f3e00cc86ad in recvmsg () from /lib64/libpthread.so.0
+>
+> > > > > #1  0x00007f3e0332b738 in qio_channel_socket_readv (ioc=<optimized
+>
+> > > out>,
+>
+> > > > > iov=<optimized out>, niov=<optimized out>, fds=0x0, nfds=0x0,
+>
+> > > errp=0x0) at
+>
+> > > > > io/channel-socket.c:497
+>
+> > > > > #2  0x00007f3e03329472 in qio_channel_read (address@hidden,
+>
+> > > > > address@hidden "", address@hidden,
+>
+> > > > > address@hidden) at io/channel.c:97
+>
+> > > > > #3  0x00007f3e032750e0 in channel_get_buffer (opaque=<optimized
+>
+> > > out>,
+>
+> > > > > buf=0x7f3e05910f38 "", pos=<optimized out>, size=32768) at
+>
+> > > > > migration/qemu-file-channel.c:78
+>
+> > > > > #4  0x00007f3e0327412c in qemu_fill_buffer (f=0x7f3e05910f00) at
+>
+> > > > > migration/qemu-file.c:257
+>
+> > > > > #5  0x00007f3e03274a41 in qemu_peek_byte (address@hidden,
+>
+> > > > > address@hidden) at migration/qemu-file.c:510
+>
+> > > > > #6  0x00007f3e03274aab in qemu_get_byte (address@hidden) at
+>
+> > > > > migration/qemu-file.c:523
+>
+> > > > > #7  0x00007f3e03274cb2 in qemu_get_be32 (address@hidden) at
+>
+> > > > > migration/qemu-file.c:603
+>
+> > > > > #8  0x00007f3e03271735 in colo_receive_message (f=0x7f3e05910f00,
+>
+> > > > > address@hidden) at migration/colo.c:215
+>
+> > > > > #9  0x00007f3e0327250d in colo_wait_handle_message
+>
+> > > (errp=0x7f3d62bfaa48,
+>
+> > > > > checkpoint_request=<synthetic pointer>, f=<optimized out>) at
+>
+> > > > > migration/colo.c:546
+>
+> > > > > #10 colo_process_incoming_thread (opaque=0x7f3e067245e0) at
+>
+> > > > > migration/colo.c:649
+>
+> > > > > #11 0x00007f3e00cc1df3 in start_thread () from
+>
+> > > /lib64/libpthread.so.0
+>
+> > > > > #12 0x00007f3dfc9c03ed in clone () from /lib64/libc.so.6
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > --
+>
+> > > > > View this message in context:
+>
+> > >
+http://qemu.11.n7.nabble.com/COLO-failover-hang-tp473250.html
+>
+> > > > > Sent from the Developer mailing list archive at Nabble.com.
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > >
+>
+> > > > --
+>
+> > > > Thanks
+>
+> > > > Zhang Chen
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > >
+>
+> >
+>
+> --
+>
+> Dr. David Alan Gilbert / address@hidden / Manchester, UK
+>
+>
+>
+> .
+>
+>
+>
+--
+Dr. David Alan Gilbert / address@hidden / Manchester, UK
+
diff --git a/results/classifier/009/other/66743673 b/results/classifier/009/other/66743673
new file mode 100644
index 000000000..8919cd22c
--- /dev/null
+++ b/results/classifier/009/other/66743673
@@ -0,0 +1,374 @@
+other: 0.967
+permissions: 0.963
+semantic: 0.951
+PID: 0.949
+debug: 0.945
+boot: 0.938
+files: 0.937
+network: 0.930
+graphic: 0.927
+socket: 0.926
+vnc: 0.926
+device: 0.926
+performance: 0.905
+KVM: 0.891
+
+[Bug] QEMU TCG warnings after commit c6bd2dd63420 - HTT / CMP_LEG bits
+
+Hi Community,
+
+This email contains 3 bugs that appear to share the same root cause.
+
+[1] We ran into the following warnings when running QEMU v10.0.0 in TCG mode:
+
+qemu-system-x86_64 \
+  -machine q35 \
+  -m 4G -smp 4 \
+  -kernel ./arch/x86/boot/bzImage \
+  -bios /usr/share/ovmf/OVMF.fd \
+  -drive file=~/kernel/rootfs.ext4,index=0,format=raw,media=disk \
+  -drive file=~/kernel/swap.img,index=1,format=raw,media=disk \
+  -nographic \
+  -append 'root=/dev/sda rw resume=/dev/sdb console=ttyS0 nokaslr'
+qemu-system-x86_64: warning: TCG doesn't support requested feature:
+CPUID.01H:EDX.ht [bit 28]
+qemu-system-x86_64: warning: TCG doesn't support requested feature:
+CPUID.80000001H:ECX.cmp-legacy [bit 1]
+(repeats 4 times, once per vCPU)
+Tracing the history shows that commit c6bd2dd63420 "i386/cpu: Set up CPUID_HT in
+x86_cpu_expand_features() instead of cpu_x86_cpuid()" is what introduced the
+warnings.
+Since that commit, TCG unconditionally advertises HTT (CPUID 1 EDX[28]) and
+CMP_LEG (CPUID 8000_0001 ECX[1]). Because TCG itself has no SMT support, these
+bits trigger the warnings above.
+[2] Also, Zhao pointed me to a similar report on GitLab:
+https://gitlab.com/qemu-project/qemu/-/issues/2894
+The symptoms there look identical to what we're seeing.
+By convention we file one issue per email, but these two appear to share the
+same root cause, so I'm describing them together here.
+[3] My colleague Alan noticed what appears to be a related problem: if we launch
+a guest with '-cpu <model>,-ht --enable-kvm', i.e. explicitly removing the ht
+flag, the guest still reports HT as enabled (cat /proc/cpuinfo in a Linux
+guest). In other words, under KVM the ht bit seems to be forced on even when
+the user tries to disable it.
+Best regards,
+Ewan
+
+On 4/29/25 11:02 AM, Ewan Hai wrote:
+Hi Community,
+
+This email contains 3 bugs that appear to share the same root cause.
+
+[1] We ran into the following warnings when running QEMU v10.0.0 in TCG mode:
+
+qemu-system-x86_64 \
+   -machine q35 \
+   -m 4G -smp 4 \
+   -kernel ./arch/x86/boot/bzImage \
+   -bios /usr/share/ovmf/OVMF.fd \
+   -drive file=~/kernel/rootfs.ext4,index=0,format=raw,media=disk \
+   -drive file=~/kernel/swap.img,index=1,format=raw,media=disk \
+   -nographic \
+   -append 'root=/dev/sda rw resume=/dev/sdb console=ttyS0 nokaslr'
+qemu-system-x86_64: warning: TCG doesn't support requested feature:
+CPUID.01H:EDX.ht [bit 28]
+qemu-system-x86_64: warning: TCG doesn't support requested feature:
+CPUID.80000001H:ECX.cmp-legacy [bit 1]
+(repeats 4 times, once per vCPU)
+Tracing the history shows that commit c6bd2dd63420 "i386/cpu: Set up CPUID_HT in
+x86_cpu_expand_features() instead of cpu_x86_cpuid()" is what introduced the
+warnings.
+Since that commit, TCG unconditionally advertises HTT (CPUID 1 EDX[28]) and
+CMP_LEG (CPUID 8000_0001 ECX[1]). Because TCG itself has no SMT support, these
+bits trigger the warnings above.
+[2] Also, Zhao pointed me to a similar report on GitLab:
+https://gitlab.com/qemu-project/qemu/-/issues/2894
+The symptoms there look identical to what we're seeing.
+By convention we file one issue per email, but these two appear to share the
+same root cause, so I'm describing them together here.
+[3] My colleague Alan noticed what appears to be a related problem: if we launch
+a guest with '-cpu <model>,-ht --enable-kvm', which means explicitly removing
+the ht flag, but the guest still reports HT(cat /proc/cpuinfo in linux guest)
+enabled. In other words, under KVM the ht bit seems to be forced on even when
+the user tries to disable it.
+XiaoYao reminded me that issue [3] stems from a different patch. Please ignore
+it for now—I'll start a separate thread to discuss that one independently.
+Best regards,
+Ewan
+
+On 4/29/2025 11:02 AM, Ewan Hai wrote:
+Hi Community,
+
+This email contains 3 bugs that appear to share the same root cause.
+[1] We ran into the following warnings when running QEMU v10.0.0 in TCG
+mode:
+qemu-system-x86_64 \
+   -machine q35 \
+   -m 4G -smp 4 \
+   -kernel ./arch/x86/boot/bzImage \
+   -bios /usr/share/ovmf/OVMF.fd \
+   -drive file=~/kernel/rootfs.ext4,index=0,format=raw,media=disk \
+   -drive file=~/kernel/swap.img,index=1,format=raw,media=disk \
+   -nographic \
+   -append 'root=/dev/sda rw resume=/dev/sdb console=ttyS0 nokaslr'
+qemu-system-x86_64: warning: TCG doesn't support requested feature:
+CPUID.01H:EDX.ht [bit 28]
+qemu-system-x86_64: warning: TCG doesn't support requested feature:
+CPUID.80000001H:ECX.cmp-legacy [bit 1]
+(repeats 4 times, once per vCPU)
+Tracing the history shows that commit c6bd2dd63420 "i386/cpu: Set up
+CPUID_HT in x86_cpu_expand_features() instead of cpu_x86_cpuid()" is
+what introduced the warnings.
+Since that commit, TCG unconditionally advertises HTT (CPUID 1 EDX[28])
+and CMP_LEG (CPUID 8000_0001 ECX[1]). Because TCG itself has no SMT
+support, these bits trigger the warnings above.
+[2] Also, Zhao pointed me to a similar report on GitLab:
+https://gitlab.com/qemu-project/qemu/-/issues/2894
+The symptoms there look identical to what we're seeing.
+By convention we file one issue per email, but these two appear to share
+the same root cause, so I'm describing them together here.
+It was caused by my two patches. I think the fix can be as follows.
+If there is no objection from the community, I can submit the formal patch.
+
+diff --git a/target/i386/cpu.c b/target/i386/cpu.c
+index 1f970aa4daa6..fb95aadd6161 100644
+--- a/target/i386/cpu.c
++++ b/target/i386/cpu.c
+@@ -776,11 +776,12 @@ void x86_cpu_vendor_words2str(char *dst, uint32_t
+vendor1,
+CPUID_PAE | CPUID_MCE | CPUID_CX8 | CPUID_APIC | CPUID_SEP | \
+           CPUID_MTRR | CPUID_PGE | CPUID_MCA | CPUID_CMOV | CPUID_PAT | \
+           CPUID_PSE36 | CPUID_CLFLUSH | CPUID_ACPI | CPUID_MMX | \
+-          CPUID_FXSR | CPUID_SSE | CPUID_SSE2 | CPUID_SS | CPUID_DE)
++          CPUID_FXSR | CPUID_SSE | CPUID_SSE2 | CPUID_SS | CPUID_DE | \
++          CPUID_HT)
+           /* partly implemented:
+           CPUID_MTRR, CPUID_MCA, CPUID_CLFLUSH (needed for Win64) */
+           /* missing:
+-          CPUID_VME, CPUID_DTS, CPUID_SS, CPUID_HT, CPUID_TM, CPUID_PBE */
++          CPUID_VME, CPUID_DTS, CPUID_SS, CPUID_TM, CPUID_PBE */
+
+ /*
+  * Kernel-only features that can be shown to usermode programs even if
+@@ -848,7 +849,8 @@ void x86_cpu_vendor_words2str(char *dst, uint32_t
+vendor1,
+#define TCG_EXT3_FEATURES (CPUID_EXT3_LAHF_LM | CPUID_EXT3_SVM | \
+           CPUID_EXT3_CR8LEG | CPUID_EXT3_ABM | CPUID_EXT3_SSE4A | \
+-          CPUID_EXT3_3DNOWPREFETCH | CPUID_EXT3_KERNEL_FEATURES)
++          CPUID_EXT3_3DNOWPREFETCH | CPUID_EXT3_KERNEL_FEATURES | \
++          CPUID_EXT3_CMP_LEG)
+
+ #define TCG_EXT4_FEATURES 0
+[3] My colleague Alan noticed what appears to be a related problem: if
+we launch a guest with '-cpu <model>,-ht --enable-kvm', which means
+explicitly removing the ht flag, but the guest still reports HT(cat /
+proc/cpuinfo in linux guest) enabled. In other words, under KVM the ht
+bit seems to be forced on even when the user tries to disable it.
+This has been the behavior of QEMU for many years, not some regression
+introduced by my patches. We can discuss how to address it separately.
+Best regards,
+Ewan
+
+On Tue, Apr 29, 2025 at 01:55:59PM +0800, Xiaoyao Li wrote:
+>
+Date: Tue, 29 Apr 2025 13:55:59 +0800
+>
+From: Xiaoyao Li <xiaoyao.li@intel.com>
+>
+Subject: Re: [Bug] QEMU TCG warnings after commit c6bd2dd63420 - HTT /
+>
+CMP_LEG bits
+>
+>
+On 4/29/2025 11:02 AM, Ewan Hai wrote:
+>
+> Hi Community,
+>
+>
+>
+> This email contains 3 bugs appear to share the same root cause.
+>
+>
+>
+> [1] We ran into the following warnings when running QEMU v10.0.0 in TCG
+>
+> mode:
+>
+>
+>
+> qemu-system-x86_64 \
+>
+>    -machine q35 \
+>
+>    -m 4G -smp 4 \
+>
+>    -kernel ./arch/x86/boot/bzImage \
+>
+>    -bios /usr/share/ovmf/OVMF.fd \
+>
+>    -drive file=~/kernel/rootfs.ext4,index=0,format=raw,media=disk \
+>
+>    -drive file=~/kernel/swap.img,index=1,format=raw,media=disk \
+>
+>    -nographic \
+>
+>    -append 'root=/dev/sda rw resume=/dev/sdb console=ttyS0 nokaslr'
+>
+>
+>
+> qemu-system-x86_64: warning: TCG doesn't support requested feature:
+>
+> CPUID.01H:EDX.ht [bit 28]
+>
+> qemu-system-x86_64: warning: TCG doesn't support requested feature:
+>
+> CPUID.80000001H:ECX.cmp-legacy [bit 1]
+>
+> (repeats 4 times, once per vCPU)
+>
+>
+>
+> Tracing the history shows that commit c6bd2dd63420 "i386/cpu: Set up
+>
+> CPUID_HT in x86_cpu_expand_features() instead of cpu_x86_cpuid()" is
+>
+> what introduced the warnings.
+>
+>
+>
+> Since that commit, TCG unconditionally advertises HTT (CPUID 1 EDX[28])
+>
+> and CMP_LEG (CPUID 8000_0001 ECX[1]). Because TCG itself has no SMT
+>
+> support, these bits trigger the warnings above.
+>
+>
+>
+> [2] Also, Zhao pointed me to a similar report on GitLab:
+>
+>
+https://gitlab.com/qemu-project/qemu/-/issues/2894
+>
+> The symptoms there look identical to what we're seeing.
+>
+>
+>
+> By convention we file one issue per email, but these two appear to share
+>
+> the same root cause, so I'm describing them together here.
+>
+>
+It was caused by my two patches. I think the fix can be as follow.
+>
+If no objection from the community, I can submit the formal patch.
+>
+>
+diff --git a/target/i386/cpu.c b/target/i386/cpu.c
+>
+index 1f970aa4daa6..fb95aadd6161 100644
+>
+--- a/target/i386/cpu.c
+>
++++ b/target/i386/cpu.c
+>
+@@ -776,11 +776,12 @@ void x86_cpu_vendor_words2str(char *dst, uint32_t
+>
+vendor1,
+>
+CPUID_PAE | CPUID_MCE | CPUID_CX8 | CPUID_APIC | CPUID_SEP | \
+>
+CPUID_MTRR | CPUID_PGE | CPUID_MCA | CPUID_CMOV | CPUID_PAT | \
+>
+CPUID_PSE36 | CPUID_CLFLUSH | CPUID_ACPI | CPUID_MMX | \
+>
+-          CPUID_FXSR | CPUID_SSE | CPUID_SSE2 | CPUID_SS | CPUID_DE)
+>
++          CPUID_FXSR | CPUID_SSE | CPUID_SSE2 | CPUID_SS | CPUID_DE | \
+>
++          CPUID_HT)
+>
+/* partly implemented:
+>
+CPUID_MTRR, CPUID_MCA, CPUID_CLFLUSH (needed for Win64) */
+>
+/* missing:
+>
+-          CPUID_VME, CPUID_DTS, CPUID_SS, CPUID_HT, CPUID_TM, CPUID_PBE */
+>
++          CPUID_VME, CPUID_DTS, CPUID_SS, CPUID_TM, CPUID_PBE */
+>
+>
+/*
+>
+* Kernel-only features that can be shown to usermode programs even if
+>
+@@ -848,7 +849,8 @@ void x86_cpu_vendor_words2str(char *dst, uint32_t
+>
+vendor1,
+>
+>
+#define TCG_EXT3_FEATURES (CPUID_EXT3_LAHF_LM | CPUID_EXT3_SVM | \
+>
+CPUID_EXT3_CR8LEG | CPUID_EXT3_ABM | CPUID_EXT3_SSE4A | \
+>
+-          CPUID_EXT3_3DNOWPREFETCH | CPUID_EXT3_KERNEL_FEATURES)
+>
++          CPUID_EXT3_3DNOWPREFETCH | CPUID_EXT3_KERNEL_FEATURES | \
+>
++          CPUID_EXT3_CMP_LEG)
+>
+>
+#define TCG_EXT4_FEATURES 0
+This fix is fine for me... at least per the SDM, HTT depends on topology and
+it should exist when the user sets "-smp 4".
+
+>
+> [3] My colleague Alan noticed what appears to be a related problem: if
+>
+> we launch a guest with '-cpu <model>,-ht --enable-kvm', which means
+>
+> explicitly removing the ht flag, but the guest still reports HT(cat
+>
+> /proc/cpuinfo in linux guest) enabled. In other words, under KVM the ht
+>
+> bit seems to be forced on even when the user tries to disable it.
+>
+>
+XiaoYao reminded me that issue [3] stems from a different patch. Please
+>
+ignore it for now—I'll start a separate thread to discuss that one
+>
+independently.
+I haven't found any other thread :-).
+
+By the way, just curious, in what cases do you need to disable the HT
+flag? "-smp 4" means 4 cores with 1 thread per core, and is it not
+enough?
+
+As for the “-ht” behavior, I'm also unsure whether this should be fixed
+or not - one possible consideration is whether “-ht” would be useful.
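+
+For what it's worth, the "-smp 4" shorthand mentioned above (4 cores with 1
+thread per core on current QEMU) can also be spelled out explicitly, which is
+the usual way to express "no SMT" without touching the ht CPUID bit.
+Illustrative fragments only, not taken from the reporter's command line:
+
+    -smp 4,sockets=1,cores=4,threads=1    # 4 cores, 1 thread per core (no SMT)
+    -smp 8,sockets=1,cores=4,threads=2    # 4 cores, 2 threads per core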
+
+On 5/8/25 5:04 PM, Zhao Liu wrote:
+[3] My colleague Alan noticed what appears to be a related problem: if
+we launch a guest with '-cpu <model>,-ht --enable-kvm', which means
+explicitly removing the ht flag, but the guest still reports HT(cat
+/proc/cpuinfo in linux guest) enabled. In other words, under KVM the ht
+bit seems to be forced on even when the user tries to disable it.
+XiaoYao reminded me that issue [3] stems from a different patch. Please
+ignore it for now—I'll start a separate thread to discuss that one
+independently.
+I haven't found any other thread :-).
+Please refer to
+https://lore.kernel.org/all/db6ae3bb-f4e5-4719-9beb-623fcff56af2@zhaoxin.com/
+.
+By the way, just curious, in what cases do you need to disbale the HT
+flag? "-smp 4" means 4 cores with 1 thread per core, and is it not
+enough?
+
+As for the “-ht” behavior, I'm also unsure whether this should be fixed
+or not - one possible consideration is whether “-ht” would be useful.
+I wasn't trying to target any specific use case; using "-ht" was simply a way to
+check how the ht feature behaves under both KVM and TCG. There's no special
+workload behind it; I just wanted to confirm that the flag is respected (or not)
+in each mode.
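+
+A quick way to do that check from inside the guest, independent of
+/proc/cpuinfo, is to read the two CPUID bits directly. A small standalone
+program using GCC's <cpuid.h> (illustrative, not part of QEMU):
+
+/* gcc -O2 check-ht.c -o check-ht */
+#include <cpuid.h>
+#include <stdio.h>
+
+int main(void)
+{
+    unsigned int eax, ebx, ecx, edx;
+
+    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
+        printf("CPUID.01H:EDX.ht [bit 28]              = %u\n",
+               (edx >> 28) & 1);
+    }
+    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
+        printf("CPUID.80000001H:ECX.cmp-legacy [bit 1] = %u\n",
+               (ecx >> 1) & 1);
+    }
+    return 0;
+}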
+
diff --git a/results/classifier/009/other/68897003 b/results/classifier/009/other/68897003
new file mode 100644
index 000000000..3cae4c9bb
--- /dev/null
+++ b/results/classifier/009/other/68897003
@@ -0,0 +1,726 @@
+other: 0.714
+graphic: 0.694
+permissions: 0.677
+PID: 0.677
+performance: 0.673
+semantic: 0.671
+debug: 0.663
+device: 0.647
+network: 0.614
+files: 0.608
+KVM: 0.598
+socket: 0.585
+boot: 0.569
+vnc: 0.525
+
+[Qemu-devel] [BUG] VM abort after migration
+
+Hi guys,
+
+We found a qemu core dump in our testing environment: the assertion
+'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered, and
+bus->irq_count[i] is '-1'.
+
+Through analysis, we found it happened after VM migration and we think
+it was caused by the following sequence:
+
+*Migration Source*
+1. save bus pci.0 state, including irq_count[x] ( =0 , old )
+2. save E1000:
+   e1000_pre_save
+    e1000_mit_timer
+     set_interrupt_cause
+      pci_set_irq --> update pci_dev->irq_state to 1 and
+                  update bus->irq_count[x] to 1 ( new )
+    the irq_state is sent to the dest.
+
+*Migration Dest*
+1. The received irq_count[x] of pci.0 is 0, but the irq_state of e1000 is 1.
+2. If the e1000 needs to change its irq line, it will call pci_irq_handler();
+  the irq_state may change to 0 and bus->irq_count[x] will become
+  -1 in this situation.
+3. On VM reboot, the assertion will then be triggered.
+
+We also found others who faced a similar problem:
+[1]
+https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
+[2]
+https://bugs.launchpad.net/qemu/+bug/1702621
+Are there any patches to fix this problem?
+Can we save the pcibus state after all the PCI devices are saved?
+
+Thanks,
+Longpeng(Mike)
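+
+To make the arithmetic in the sequence above concrete, here is a tiny
+standalone model (not QEMU code; the update rule mirrors pci_irq_handler() /
+pci_change_irq_level(), where the bus counter is adjusted by "level -
+irq_state"):
+
+#include <assert.h>
+#include <stdio.h>
+
+static int irq_state;   /* per-device level, migrated with the device   */
+static int irq_count;   /* per-bus count, migrated with the bus (pci.0) */
+
+static void set_irq(int level)
+{
+    int change = level - irq_state;   /* as in pci_irq_handler()        */
+
+    if (!change) {
+        return;
+    }
+    irq_state = level;
+    irq_count += change;              /* as in pci_change_irq_level()   */
+}
+
+int main(void)
+{
+    /* Destination after migration: pci.0 was saved before e1000's pre_save
+     * raised the line, so we load irq_count = 0 but irq_state = 1. */
+    irq_count = 0;
+    irq_state = 1;
+
+    set_irq(0);                              /* device lowers the line  */
+    printf("irq_count = %d\n", irq_count);   /* prints -1               */
+
+    assert(irq_count == 0);                  /* pcibus_reset()'s check fires */
+    return 0;
+}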
+
+* longpeng (address@hidden) wrote:
+> Hi guys,
+>
+> We found a qemu core in our testing environment, the assertion
+> 'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
+> the bus->irq_count[i] is '-1'.
+>
+> Through analysis, it was happened after VM migration and we think
+> it was caused by the following sequence:
+>
+> *Migration Source*
+> 1. save bus pci.0 state, including irq_count[x] ( =0 , old )
+> 2. save E1000:
+>    e1000_pre_save
+>     e1000_mit_timer
+>      set_interrupt_cause
+>       pci_set_irq --> update pci_dev->irq_state to 1 and
+>                   update bus->irq_count[x] to 1 ( new )
+>     the irq_state sent to dest.
+>
+> *Migration Dest*
+> 1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
+> 2. If the e1000 need change irqline , it would call to pci_irq_handler(),
+>   the irq_state maybe change to 0 and bus->irq_count[x] will become
+>   -1 in this situation.
+> 3. do VM reboot then the assertion will be triggered.
+>
+> We also found some guys faced the similar problem:
+> [1] https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
+> [2] https://bugs.launchpad.net/qemu/+bug/1702621
+> Is there some patches to fix this problem ?
+I don't remember any.
+
+> Can we save pcibus state after all the pci devs are saved ?
+Does this problem only happen with e1000? I think so.
+If it's only e1000 I think we should fix it - I think once the VM is
+stopped for doing the device migration it shouldn't be raising
+interrupts.
+
+Dave
+
+>
+Thanks,
+>
+Longpeng(Mike)
+--
+Dr. David Alan Gilbert / address@hidden / Manchester, UK
+
+On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
+* longpeng (address@hidden) wrote:
+Hi guys,
+
+We found a qemu core in our testing environment, the assertion
+'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
+the bus->irq_count[i] is '-1'.
+
+Through analysis, it was happened after VM migration and we think
+it was caused by the following sequence:
+
+*Migration Source*
+1. save bus pci.0 state, including irq_count[x] ( =0 , old )
+2. save E1000:
+    e1000_pre_save
+     e1000_mit_timer
+      set_interrupt_cause
+       pci_set_irq --> update pci_dev->irq_state to 1 and
+                   update bus->irq_count[x] to 1 ( new )
+     the irq_state sent to dest.
+
+*Migration Dest*
+1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
+2. If the e1000 need change irqline , it would call to pci_irq_handler(),
+   the irq_state maybe change to 0 and bus->irq_count[x] will become
+   -1 in this situation.
+3. do VM reboot then the assertion will be triggered.
+
+We also found some guys faced the similar problem:
+[1]
+https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
+[2]
+https://bugs.launchpad.net/qemu/+bug/1702621
+Is there some patches to fix this problem ?
+I don't remember any.
+Can we save pcibus state after all the pci devs are saved ?
+Does this problem only happen with e1000? I think so.
+If it's only e1000 I think we should fix it - I think once the VM is
+stopped for doing the device migration it shouldn't be raising
+interrupts.
+I wonder whether we can simply fix this by not setting ICS in pre_save()
+but scheduling the mit timer unconditionally in post_load().
+Thanks
+Dave
+Thanks,
+Longpeng(Mike)
+--
+Dr. David Alan Gilbert / address@hidden / Manchester, UK
+
+在 2019/7/10 11:25, Jason Wang 写道:
+>
+>
+On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
+>
+> * longpeng (address@hidden) wrote:
+>
+>> Hi guys,
+>
+>>
+>
+>> We found a qemu core in our testing environment, the assertion
+>
+>> 'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
+>
+>> the bus->irq_count[i] is '-1'.
+>
+>>
+>
+>> Through analysis, it was happened after VM migration and we think
+>
+>> it was caused by the following sequence:
+>
+>>
+>
+>> *Migration Source*
+>
+>> 1. save bus pci.0 state, including irq_count[x] ( =0 , old )
+>
+>> 2. save E1000:
+>
+>>     e1000_pre_save
+>
+>>      e1000_mit_timer
+>
+>>       set_interrupt_cause
+>
+>>        pci_set_irq --> update pci_dev->irq_state to 1 and
+>
+>>                    update bus->irq_count[x] to 1 ( new )
+>
+>>      the irq_state sent to dest.
+>
+>>
+>
+>> *Migration Dest*
+>
+>> 1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
+>
+>> 2. If the e1000 need change irqline , it would call to pci_irq_handler(),
+>
+>>    the irq_state maybe change to 0 and bus->irq_count[x] will become
+>
+>>    -1 in this situation.
+>
+>> 3. do VM reboot then the assertion will be triggered.
+>
+>>
+>
+>> We also found some guys faced the similar problem:
+>
+>> [1]
+https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
+>
+>> [2]
+https://bugs.launchpad.net/qemu/+bug/1702621
+>
+>>
+>
+>> Is there some patches to fix this problem ?
+>
+> I don't remember any.
+>
+>
+>
+>> Can we save pcibus state after all the pci devs are saved ?
+>
+> Does this problem only happen with e1000? I think so.
+>
+> If it's only e1000 I think we should fix it - I think once the VM is
+>
+> stopped for doing the device migration it shouldn't be raising
+>
+> interrupts.
+>
+>
+>
+I wonder maybe we can simply fix this by no setting ICS on pre_save() but
+>
+scheduling mit timer unconditionally in post_load().
+>
+I also think this is a bug in e1000 because we have found more cores with the
+same frame these days.
+
+I'm not familiar with e1000 so hope someone could fix it, thanks. :)
+
+>
+Thanks
+>
+>
+>
+>
+>
+> Dave
+>
+>
+>
+>> Thanks,
+>
+>> Longpeng(Mike)
+>
+> --
+>
+> Dr. David Alan Gilbert / address@hidden / Manchester, UK
+>
+>
+.
+>
+-- 
+Regards,
+Longpeng(Mike)
+
+On 2019/7/10 上午11:36, Longpeng (Mike) wrote:
+在 2019/7/10 11:25, Jason Wang 写道:
+On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
+* longpeng (address@hidden) wrote:
+Hi guys,
+
+We found a qemu core in our testing environment, the assertion
+'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
+the bus->irq_count[i] is '-1'.
+
+Through analysis, it was happened after VM migration and we think
+it was caused by the following sequence:
+
+*Migration Source*
+1. save bus pci.0 state, including irq_count[x] ( =0 , old )
+2. save E1000:
+     e1000_pre_save
+      e1000_mit_timer
+       set_interrupt_cause
+        pci_set_irq --> update pci_dev->irq_state to 1 and
+                    update bus->irq_count[x] to 1 ( new )
+      the irq_state sent to dest.
+
+*Migration Dest*
+1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
+2. If the e1000 need change irqline , it would call to pci_irq_handler(),
+    the irq_state maybe change to 0 and bus->irq_count[x] will become
+    -1 in this situation.
+3. do VM reboot then the assertion will be triggered.
+
+We also found some guys faced the similar problem:
+[1]
+https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
+[2]
+https://bugs.launchpad.net/qemu/+bug/1702621
+Is there some patches to fix this problem ?
+I don't remember any.
+Can we save pcibus state after all the pci devs are saved ?
+Does this problem only happen with e1000? I think so.
+If it's only e1000 I think we should fix it - I think once the VM is
+stopped for doing the device migration it shouldn't be raising
+interrupts.
+I wonder maybe we can simply fix this by no setting ICS on pre_save() but
+scheduling mit timer unconditionally in post_load().
+I also think this is a bug of e1000 because we find more cores with the same
+frame thease days.
+
+I'm not familiar with e1000 so hope someone could fix it, thanks. :)
+Drafted a patch in the attachment, please test.
+
+Thanks
+Thanks
+Dave
+Thanks,
+Longpeng(Mike)
+--
+Dr. David Alan Gilbert / address@hidden / Manchester, UK
+.
+0001-e1000-don-t-raise-interrupt-in-pre_save.patch
+Description:
+Text Data
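+
+The attachment body is not preserved in this archive. Purely as a sketch of
+the direction discussed above -- this is not the actual patch, and the exact
+e1000 field names are assumptions -- the change would be along these lines.
+In e1000_pre_save(), stop emulating a mitigation-timer timeout, because
+e1000_mit_timer() -> set_interrupt_cause() -> pci_set_irq() runs after the
+PCI bus state has already been saved:
+
+    /* removed from e1000_pre_save() */
+    if (s->mit_timer_on) {
+        e1000_mit_timer(s);
+    }
+
+In e1000_post_load(), re-arm the timer on the destination instead, so any
+pending mitigated interrupt is raised with consistent irq_state/irq_count
+bookkeeping:
+
+    /* added to e1000_post_load() */
+    if (s->mit_timer_on) {
+        timer_mod(s->mit_timer,
+                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + 1);
+    }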
+
+在 2019/7/10 11:57, Jason Wang 写道:
+>
+>
+On 2019/7/10 上午11:36, Longpeng (Mike) wrote:
+>
+> 在 2019/7/10 11:25, Jason Wang 写道:
+>
+>> On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
+>
+>>> * longpeng (address@hidden) wrote:
+>
+>>>> Hi guys,
+>
+>>>>
+>
+>>>> We found a qemu core in our testing environment, the assertion
+>
+>>>> 'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
+>
+>>>> the bus->irq_count[i] is '-1'.
+>
+>>>>
+>
+>>>> Through analysis, it was happened after VM migration and we think
+>
+>>>> it was caused by the following sequence:
+>
+>>>>
+>
+>>>> *Migration Source*
+>
+>>>> 1. save bus pci.0 state, including irq_count[x] ( =0 , old )
+>
+>>>> 2. save E1000:
+>
+>>>>      e1000_pre_save
+>
+>>>>       e1000_mit_timer
+>
+>>>>        set_interrupt_cause
+>
+>>>>         pci_set_irq --> update pci_dev->irq_state to 1 and
+>
+>>>>                     update bus->irq_count[x] to 1 ( new )
+>
+>>>>       the irq_state sent to dest.
+>
+>>>>
+>
+>>>> *Migration Dest*
+>
+>>>> 1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is
+>
+>>>> 1.
+>
+>>>> 2. If the e1000 need change irqline , it would call to pci_irq_handler(),
+>
+>>>>     the irq_state maybe change to 0 and bus->irq_count[x] will become
+>
+>>>>     -1 in this situation.
+>
+>>>> 3. do VM reboot then the assertion will be triggered.
+>
+>>>>
+>
+>>>> We also found some guys faced the similar problem:
+>
+>>>> [1]
+https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
+>
+>>>> [2]
+https://bugs.launchpad.net/qemu/+bug/1702621
+>
+>>>>
+>
+>>>> Is there some patches to fix this problem ?
+>
+>>> I don't remember any.
+>
+>>>
+>
+>>>> Can we save pcibus state after all the pci devs are saved ?
+>
+>>> Does this problem only happen with e1000? I think so.
+>
+>>> If it's only e1000 I think we should fix it - I think once the VM is
+>
+>>> stopped for doing the device migration it shouldn't be raising
+>
+>>> interrupts.
+>
+>>
+>
+>> I wonder maybe we can simply fix this by no setting ICS on pre_save() but
+>
+>> scheduling mit timer unconditionally in post_load().
+>
+>>
+>
+> I also think this is a bug of e1000 because we find more cores with the same
+>
+> frame thease days.
+>
+>
+>
+> I'm not familiar with e1000 so hope someone could fix it, thanks. :)
+>
+>
+>
+>
+Draft a path in attachment, please test.
+>
+Thanks. We'll test it for a few weeks and then give you the feedback. :)
+
+>
+Thanks
+>
+>
+>
+>> Thanks
+>
+>>
+>
+>>
+>
+>>> Dave
+>
+>>>
+>
+>>>> Thanks,
+>
+>>>> Longpeng(Mike)
+>
+>>> -- 
+>
+>>> Dr. David Alan Gilbert / address@hidden / Manchester, UK
+>
+>> .
+>
+>>
+-- 
+Regards,
+Longpeng(Mike)
+
+在 2019/7/10 11:57, Jason Wang 写道:
+>
+>
+On 2019/7/10 上午11:36, Longpeng (Mike) wrote:
+>
+> 在 2019/7/10 11:25, Jason Wang 写道:
+>
+>> On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
+>
+>>> * longpeng (address@hidden) wrote:
+>
+>>>> Hi guys,
+>
+>>>>
+>
+>>>> We found a qemu core in our testing environment, the assertion
+>
+>>>> 'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
+>
+>>>> the bus->irq_count[i] is '-1'.
+>
+>>>>
+>
+>>>> Through analysis, it was happened after VM migration and we think
+>
+>>>> it was caused by the following sequence:
+>
+>>>>
+>
+>>>> *Migration Source*
+>
+>>>> 1. save bus pci.0 state, including irq_count[x] ( =0 , old )
+>
+>>>> 2. save E1000:
+>
+>>>>      e1000_pre_save
+>
+>>>>       e1000_mit_timer
+>
+>>>>        set_interrupt_cause
+>
+>>>>         pci_set_irq --> update pci_dev->irq_state to 1 and
+>
+>>>>                     update bus->irq_count[x] to 1 ( new )
+>
+>>>>       the irq_state sent to dest.
+>
+>>>>
+>
+>>>> *Migration Dest*
+>
+>>>> 1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is
+>
+>>>> 1.
+>
+>>>> 2. If the e1000 need change irqline , it would call to pci_irq_handler(),
+>
+>>>>     the irq_state maybe change to 0 and bus->irq_count[x] will become
+>
+>>>>     -1 in this situation.
+>
+>>>> 3. do VM reboot then the assertion will be triggered.
+>
+>>>>
+>
+>>>> We also found some guys faced the similar problem:
+>
+>>>> [1]
+https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
+>
+>>>> [2]
+https://bugs.launchpad.net/qemu/+bug/1702621
+>
+>>>>
+>
+>>>> Is there some patches to fix this problem ?
+>
+>>> I don't remember any.
+>
+>>>
+>
+>>>> Can we save pcibus state after all the pci devs are saved ?
+>
+>>> Does this problem only happen with e1000? I think so.
+>
+>>> If it's only e1000 I think we should fix it - I think once the VM is
+>
+>>> stopped for doing the device migration it shouldn't be raising
+>
+>>> interrupts.
+>
+>>
+>
+>> I wonder maybe we can simply fix this by no setting ICS on pre_save() but
+>
+>> scheduling mit timer unconditionally in post_load().
+>
+>>
+>
+> I also think this is a bug of e1000 because we find more cores with the same
+>
+> frame thease days.
+>
+>
+>
+> I'm not familiar with e1000 so hope someone could fix it, thanks. :)
+>
+>
+>
+>
+Draft a path in attachment, please test.
+>
+Hi Jason,
+
+We've tested the patch for about two weeks, everything went well, thanks!
+
+Feel free to add my:
+Reported-and-tested-by: Longpeng <address@hidden>
+
+>
+Thanks
+>
+>
+>
+>> Thanks
+>
+>>
+>
+>>
+>
+>>> Dave
+>
+>>>
+>
+>>>> Thanks,
+>
+>>>> Longpeng(Mike)
+>
+>>> -- 
+>
+>>> Dr. David Alan Gilbert / address@hidden / Manchester, UK
+>
+>> .
+>
+>>
+-- 
+Regards,
+Longpeng(Mike)
+
+On 2019/7/27 下午2:10, Longpeng (Mike) wrote:
+在 2019/7/10 11:57, Jason Wang 写道:
+On 2019/7/10 上午11:36, Longpeng (Mike) wrote:
+在 2019/7/10 11:25, Jason Wang 写道:
+On 2019/7/8 下午5:47, Dr. David Alan Gilbert wrote:
+* longpeng (address@hidden) wrote:
+Hi guys,
+
+We found a qemu core in our testing environment, the assertion
+'assert(bus->irq_count[i] == 0)' in pcibus_reset() was triggered and
+the bus->irq_count[i] is '-1'.
+
+Through analysis, it was happened after VM migration and we think
+it was caused by the following sequence:
+
+*Migration Source*
+1. save bus pci.0 state, including irq_count[x] ( =0 , old )
+2. save E1000:
+      e1000_pre_save
+       e1000_mit_timer
+        set_interrupt_cause
+         pci_set_irq --> update pci_dev->irq_state to 1 and
+                     update bus->irq_count[x] to 1 ( new )
+       the irq_state sent to dest.
+
+*Migration Dest*
+1. Receive the irq_count[x] of pci.0 is 0 , but the irq_state of e1000 is 1.
+2. If the e1000 need change irqline , it would call to pci_irq_handler(),
+     the irq_state maybe change to 0 and bus->irq_count[x] will become
+     -1 in this situation.
+3. do VM reboot then the assertion will be triggered.
+
+We also found some guys faced the similar problem:
+[1]
+https://lists.gnu.org/archive/html/qemu-devel/2016-11/msg02525.html
+[2]
+https://bugs.launchpad.net/qemu/+bug/1702621
+Is there some patches to fix this problem ?
+I don't remember any.
+Can we save pcibus state after all the pci devs are saved ?
+Does this problem only happen with e1000? I think so.
+If it's only e1000 I think we should fix it - I think once the VM is
+stopped for doing the device migration it shouldn't be raising
+interrupts.
+I wonder maybe we can simply fix this by no setting ICS on pre_save() but
+scheduling mit timer unconditionally in post_load().
+I also think this is a bug of e1000 because we find more cores with the same
+frame thease days.
+
+I'm not familiar with e1000 so hope someone could fix it, thanks. :)
+Draft a path in attachment, please test.
+Hi Jason,
+
+We've tested the patch for about two weeks, everything went well, thanks!
+
+Feel free to add my:
+Reported-and-tested-by: Longpeng <address@hidden>
+Applied.
+
+Thanks
+Thanks
+Thanks
+Dave
+Thanks,
+Longpeng(Mike)
+--
+Dr. David Alan Gilbert / address@hidden / Manchester, UK
+.
+
diff --git a/results/classifier/009/other/70021271 b/results/classifier/009/other/70021271
new file mode 100644
index 000000000..f35111615
--- /dev/null
+++ b/results/classifier/009/other/70021271
@@ -0,0 +1,7458 @@
+other: 0.963
+graphic: 0.958
+permissions: 0.957
+debug: 0.956
+performance: 0.949
+KVM: 0.949
+semantic: 0.946
+vnc: 0.900
+device: 0.887
+PID: 0.880
+socket: 0.873
+network: 0.873
+boot: 0.872
+files: 0.867
+
+[Qemu-devel] [BUG]Unassigned mem write during pci device hot-plug
+
+Hi all,
+
+In our test, we configured a VM with several pci-bridges and a virtio-net NIC
+attached to bus 4.
+After the VM has started up, we ping this NIC from the host to check that it is
+working normally. Then, we hot-add PCI devices to this VM on bus 0.
+We found the virtio-net NIC on bus 4 occasionally stops working (cannot
+connect), as it fails to kick the virtio backend with the error below:
+    Unassigned mem write 00000000fc803004 = 0x1
+
+memory-region: pci_bridge_pci
+  0000000000000000-ffffffffffffffff (prio 0, RW): pci_bridge_pci
+    00000000fc800000-00000000fc803fff (prio 1, RW): virtio-pci
+      00000000fc800000-00000000fc800fff (prio 0, RW): virtio-pci-common
+      00000000fc801000-00000000fc801fff (prio 0, RW): virtio-pci-isr
+      00000000fc802000-00000000fc802fff (prio 0, RW): virtio-pci-device
+      00000000fc803000-00000000fc803fff (prio 0, RW): virtio-pci-notify  <- io 
+mem unassigned
+      …
+
+We caught an exceptional address change while this problem happened, shown as
+follows:
+Before pci_bridge_update_mappings:
+      00000000fc000000-00000000fc1fffff (prio 1, RW): alias pci_bridge_pref_mem 
+@pci_bridge_pci 00000000fc000000-00000000fc1fffff
+      00000000fc200000-00000000fc3fffff (prio 1, RW): alias pci_bridge_pref_mem 
+@pci_bridge_pci 00000000fc200000-00000000fc3fffff
+      00000000fc400000-00000000fc5fffff (prio 1, RW): alias pci_bridge_pref_mem 
+@pci_bridge_pci 00000000fc400000-00000000fc5fffff
+      00000000fc600000-00000000fc7fffff (prio 1, RW): alias pci_bridge_pref_mem 
+@pci_bridge_pci 00000000fc600000-00000000fc7fffff
+      00000000fc800000-00000000fc9fffff (prio 1, RW): alias pci_bridge_pref_mem 
+@pci_bridge_pci 00000000fc800000-00000000fc9fffff <- correct Address Space
+      00000000fca00000-00000000fcbfffff (prio 1, RW): alias pci_bridge_pref_mem 
+@pci_bridge_pci 00000000fca00000-00000000fcbfffff
+      00000000fcc00000-00000000fcdfffff (prio 1, RW): alias pci_bridge_pref_mem 
+@pci_bridge_pci 00000000fcc00000-00000000fcdfffff
+      00000000fce00000-00000000fcffffff (prio 1, RW): alias pci_bridge_pref_mem 
+@pci_bridge_pci 00000000fce00000-00000000fcffffff
+
+After pci_bridge_update_mappings:
+      00000000fda00000-00000000fdbfffff (prio 1, RW): alias pci_bridge_mem 
+@pci_bridge_pci 00000000fda00000-00000000fdbfffff
+      00000000fdc00000-00000000fddfffff (prio 1, RW): alias pci_bridge_mem 
+@pci_bridge_pci 00000000fdc00000-00000000fddfffff
+      00000000fde00000-00000000fdffffff (prio 1, RW): alias pci_bridge_mem 
+@pci_bridge_pci 00000000fde00000-00000000fdffffff
+      00000000fe000000-00000000fe1fffff (prio 1, RW): alias pci_bridge_mem 
+@pci_bridge_pci 00000000fe000000-00000000fe1fffff
+      00000000fe200000-00000000fe3fffff (prio 1, RW): alias pci_bridge_mem 
+@pci_bridge_pci 00000000fe200000-00000000fe3fffff
+      00000000fe400000-00000000fe5fffff (prio 1, RW): alias pci_bridge_mem 
+@pci_bridge_pci 00000000fe400000-00000000fe5fffff
+      00000000fe600000-00000000fe7fffff (prio 1, RW): alias pci_bridge_mem 
+@pci_bridge_pci 00000000fe600000-00000000fe7fffff
+      00000000fe800000-00000000fe9fffff (prio 1, RW): alias pci_bridge_mem 
+@pci_bridge_pci 00000000fe800000-00000000fe9fffff
+      fffffffffc800000-fffffffffc800000 (prio 1, RW): alias pci_bridge_pref_mem 
+@pci_bridge_pci fffffffffc800000-fffffffffc800000   <- Exceptional Address Space
+
+We have figured out why this address becomes this value: according to the PCI
+spec, the PCI driver can get the BAR address size by first writing 0xffffffff
+to the PCI register and then reading the value back from this register.
+We don't handle this value specially while processing the PCI write in QEMU;
+the function call stack is:
+pci_bridge_dev_write_config
+-> pci_bridge_write_config
+-> pci_default_write_config (we update the config[address] value here to
+   fffffffffc800000, which should be 0xfc800000)
+-> pci_bridge_update_mappings
+   -> pci_bridge_region_del(br, br->windows);
+   -> pci_bridge_region_init
+      -> pci_bridge_init_alias (here pci_bridge_get_base uses the wrong value
+         fffffffffc800000)
+   -> memory_region_transaction_commit
+
+So, as we can see, we use the wrong base address in QEMU to update the memory
+regions. Although we update the base address to the correct value after the
+PCI driver in the VM writes the original value back, the virtio NIC on bus 4
+may still send net packets concurrently while the memory region address is
+wrong.
+
+We have tried to skip the memory region update action in QEMU when we detect a
+PCI write with the value 0xffffffff, and it does work, but this does not seem
+to be a clean solution.
+
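+For reference, the sizing handshake the guest performs (on device BARs, and in
+pci_bridge_check_ranges() on the bridge window registers) is: write all-ones,
+read back a mask, then restore the original value. Below is a self-contained
+toy model of that handshake, using a purely hypothetical fake config space
+(cfg_read32()/cfg_write32() are not kernel or QEMU APIs, just illustration):
+
+#include <stdint.h>
+#include <stdio.h>
+
+static uint32_t cfg_space[64] = { [4] = 0xfc800000 };   /* BAR0 at offset 0x10 */
+
+static uint32_t cfg_read32(int reg)  { return cfg_space[reg / 4]; }
+static void cfg_write32(int reg, uint32_t v)
+{
+    if (reg == 0x10) {
+        v &= ~0x3fffu;        /* emulate a 16 KiB BAR: size bits read as 0 */
+    }
+    cfg_space[reg / 4] = v;
+}
+
+int main(void)
+{
+    int reg = 0x10;                       /* BAR0 */
+    uint32_t orig  = cfg_read32(reg);     /* remember the current base  */
+    cfg_write32(reg, 0xffffffff);         /* all-ones probe write       */
+    uint32_t probe = cfg_read32(reg);     /* read back the size mask    */
+    cfg_write32(reg, orig);               /* restore the real base      */
+
+    printf("BAR size = 0x%x\n", ~(probe & ~0xfu) + 1);   /* prints 0x4000 */
+    return 0;
+}
+
+Between the all-ones write and the restoring write, QEMU rebuilds the bridge
+windows from the bogus base via pci_bridge_update_mappings(), and that is
+exactly the window in which a concurrent virtio-net kick can land in
+unassigned memory, hence the workaround attempted below.
+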
+diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c
+index b2e50c3..84b405d 100644
+--- a/hw/pci/pci_bridge.c
++++ b/hw/pci/pci_bridge.c
+@@ -256,7 +256,8 @@ void pci_bridge_write_config(PCIDevice *d,
+     pci_default_write_config(d, address, val, len);
+-    if (ranges_overlap(address, len, PCI_COMMAND, 2) ||
++    if ( (val != 0xffffffff) &&
++        (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+         /* io base/limit */
+         ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+@@ -266,7 +267,7 @@ void pci_bridge_write_config(PCIDevice *d,
+         ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
+         /* vga enable */
+-        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2)) {
++        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2))) {
+         pci_bridge_update_mappings(s);
+     }
+
+Thanks,
+Xu
+
+On Sat, Dec 08, 2018 at 11:58:59AM +0000, xuyandong wrote:
+> Hi all,
+>
+> In our test, we configured VM with several pci-bridges and a virtio-net nic
+> been attached with bus 4,
+>
+> After VM is startup, We ping this nic from host to judge if it is working
+> normally. Then, we hot add pci devices to this VM with bus 0.
+>
+> We  found the virtio-net NIC in bus 4 is not working (can not connect)
+> occasionally, as it kick virtio backend failure with error below:
+>
+>     Unassigned mem write 00000000fc803004 = 0x1
+Thanks for the report. Which guest was used to produce this problem?
+
+-- 
+MST
+
+n Sat, Dec 08, 2018 at 11:58:59AM +0000, xuyandong wrote:
+>
+> Hi all,
+>
+>
+>
+>
+>
+>
+>
+> In our test, we configured VM with several pci-bridges and a
+>
+> virtio-net nic been attached with bus 4,
+>
+>
+>
+> After VM is startup, We ping this nic from host to judge if it is
+>
+> working normally. Then, we hot add pci devices to this VM with bus 0.
+>
+>
+>
+> We  found the virtio-net NIC in bus 4 is not working (can not connect)
+>
+> occasionally, as it kick virtio backend failure with error below:
+>
+>
+>
+>     Unassigned mem write 00000000fc803004 = 0x1
+>
+>
+Thanks for the report. Which guest was used to produce this problem?
+>
+>
+--
+>
+MST
+I was seeing this problem when I hot-plugged a VFIO device into a CentOS 7.4
+guest; after that I compiled the latest Linux kernel and it also shows this
+problem.
+
+Thanks,
+Xu
+
+On Sat, Dec 08, 2018 at 11:58:59AM +0000, xuyandong wrote:
+>
+Hi all,
+>
+>
+>
+>
+In our test, we configured VM with several pci-bridges and a virtio-net nic
+>
+been attached with bus 4,
+>
+>
+After VM is startup, We ping this nic from host to judge if it is working
+>
+normally. Then, we hot add pci devices to this VM with bus 0.
+>
+>
+We  found the virtio-net NIC in bus 4 is not working (can not connect)
+>
+occasionally, as it kick virtio backend failure with error below:
+>
+>
+Unassigned mem write 00000000fc803004 = 0x1
+>
+>
+>
+>
+memory-region: pci_bridge_pci
+>
+>
+0000000000000000-ffffffffffffffff (prio 0, RW): pci_bridge_pci
+>
+>
+00000000fc800000-00000000fc803fff (prio 1, RW): virtio-pci
+>
+>
+00000000fc800000-00000000fc800fff (prio 0, RW): virtio-pci-common
+>
+>
+00000000fc801000-00000000fc801fff (prio 0, RW): virtio-pci-isr
+>
+>
+00000000fc802000-00000000fc802fff (prio 0, RW): virtio-pci-device
+>
+>
+00000000fc803000-00000000fc803fff (prio 0, RW): virtio-pci-notify  <- io
+>
+mem unassigned
+>
+>
+…
+>
+>
+>
+>
+We caught an exceptional address changing while this problem happened, show as
+>
+follow:
+>
+>
+Before pci_bridge_update_mappings:
+>
+>
+00000000fc000000-00000000fc1fffff (prio 1, RW): alias
+>
+pci_bridge_pref_mem
+>
+@pci_bridge_pci 00000000fc000000-00000000fc1fffff
+>
+>
+00000000fc200000-00000000fc3fffff (prio 1, RW): alias
+>
+pci_bridge_pref_mem
+>
+@pci_bridge_pci 00000000fc200000-00000000fc3fffff
+>
+>
+00000000fc400000-00000000fc5fffff (prio 1, RW): alias
+>
+pci_bridge_pref_mem
+>
+@pci_bridge_pci 00000000fc400000-00000000fc5fffff
+>
+>
+00000000fc600000-00000000fc7fffff (prio 1, RW): alias
+>
+pci_bridge_pref_mem
+>
+@pci_bridge_pci 00000000fc600000-00000000fc7fffff
+>
+>
+00000000fc800000-00000000fc9fffff (prio 1, RW): alias
+>
+pci_bridge_pref_mem
+>
+@pci_bridge_pci 00000000fc800000-00000000fc9fffff <- correct Adress Spce
+>
+>
+00000000fca00000-00000000fcbfffff (prio 1, RW): alias
+>
+pci_bridge_pref_mem
+>
+@pci_bridge_pci 00000000fca00000-00000000fcbfffff
+>
+>
+00000000fcc00000-00000000fcdfffff (prio 1, RW): alias
+>
+pci_bridge_pref_mem
+>
+@pci_bridge_pci 00000000fcc00000-00000000fcdfffff
+>
+>
+00000000fce00000-00000000fcffffff (prio 1, RW): alias
+>
+pci_bridge_pref_mem
+>
+@pci_bridge_pci 00000000fce00000-00000000fcffffff
+>
+>
+>
+>
+After pci_bridge_update_mappings:
+>
+>
+00000000fda00000-00000000fdbfffff (prio 1, RW): alias pci_bridge_mem
+>
+@pci_bridge_pci 00000000fda00000-00000000fdbfffff
+>
+>
+00000000fdc00000-00000000fddfffff (prio 1, RW): alias pci_bridge_mem
+>
+@pci_bridge_pci 00000000fdc00000-00000000fddfffff
+>
+>
+00000000fde00000-00000000fdffffff (prio 1, RW): alias pci_bridge_mem
+>
+@pci_bridge_pci 00000000fde00000-00000000fdffffff
+>
+>
+00000000fe000000-00000000fe1fffff (prio 1, RW): alias pci_bridge_mem
+>
+@pci_bridge_pci 00000000fe000000-00000000fe1fffff
+>
+>
+00000000fe200000-00000000fe3fffff (prio 1, RW): alias pci_bridge_mem
+>
+@pci_bridge_pci 00000000fe200000-00000000fe3fffff
+>
+>
+00000000fe400000-00000000fe5fffff (prio 1, RW): alias pci_bridge_mem
+>
+@pci_bridge_pci 00000000fe400000-00000000fe5fffff
+>
+>
+00000000fe600000-00000000fe7fffff (prio 1, RW): alias pci_bridge_mem
+>
+@pci_bridge_pci 00000000fe600000-00000000fe7fffff
+>
+>
+00000000fe800000-00000000fe9fffff (prio 1, RW): alias pci_bridge_mem
+>
+@pci_bridge_pci 00000000fe800000-00000000fe9fffff
+>
+>
+fffffffffc800000-fffffffffc800000 (prio 1, RW): alias
+>
+pci_bridge_pref_mem
+>
+@pci_bridge_pci fffffffffc800000-fffffffffc800000   <- Exceptional Adress
+>
+Space
+This one is empty though right?
+
+>
+>
+>
+We have figured out why this address becomes this value,  according to pci
+>
+spec,  pci driver can get BAR address size by writing 0xffffffff to
+>
+>
+the pci register firstly, and then read back the value from this register.
+OK, however, as you show below the BAR being sized is the BAR
+of a bridge. Are you then adding a bridge device by hotplug?
+
+
+
+>
+We didn't handle this value  specially while process pci write in qemu, the
+>
+function call stack is:
+>
+>
+Pci_bridge_dev_write_config
+>
+>
+-> pci_bridge_write_config
+>
+>
+-> pci_default_write_config (we update the config[address] value here to
+>
+fffffffffc800000, which should be 0xfc800000 )
+>
+>
+-> pci_bridge_update_mappings
+>
+>
+->pci_bridge_region_del(br, br->windows);
+>
+>
+-> pci_bridge_region_init
+>
+>
+->
+>
+pci_bridge_init_alias (here pci_bridge_get_base, we use the wrong value
+>
+fffffffffc800000)
+>
+>
+->
+>
+memory_region_transaction_commit
+>
+>
+>
+>
+So, as we can see, we use the wrong base address in qemu to update the memory
+>
+regions, though, we update the base address to
+>
+>
+The correct value after pci driver in VM write the original value back, the
+>
+virtio NIC in bus 4 may still sends net packets concurrently with
+>
+>
+The wrong memory region address.
+>
+>
+>
+>
+We have tried to skip the memory region update action in qemu while detect pci
+>
+write with 0xffffffff value, and it does work, but
+>
+>
+This seems to be not gently.
+For sure. But I'm still puzzled as to why Linux tries to
+size the BAR of the bridge while a device behind it is
+in use.
+
+Can you pls post your QEMU command line?
+
+
+
+>
+>
+>
+diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c
+>
+>
+index b2e50c3..84b405d 100644
+>
+>
+--- a/hw/pci/pci_bridge.c
+>
+>
++++ b/hw/pci/pci_bridge.c
+>
+>
+@@ -256,7 +256,8 @@ void pci_bridge_write_config(PCIDevice *d,
+>
+>
+pci_default_write_config(d, address, val, len);
+>
+>
+-    if (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+>
+>
++    if ( (val != 0xffffffff) &&
+>
+>
++        (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+>
+>
+/* io base/limit */
+>
+>
+ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+>
+>
+@@ -266,7 +267,7 @@ void pci_bridge_write_config(PCIDevice *d,
+>
+>
+ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
+>
+>
+/* vga enable */
+>
+>
+-        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2)) {
+>
+>
++        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2))) {
+>
+>
+pci_bridge_update_mappings(s);
+>
+>
+}
+>
+>
+>
+>
+Thinks,
+>
+>
+Xu
+>
+
+On Sat, Dec 08, 2018 at 11:58:59AM +0000, xuyandong wrote:
+>
+> Hi all,
+>
+>
+>
+>
+>
+>
+>
+> In our test, we configured VM with several pci-bridges and a
+>
+> virtio-net nic been attached with bus 4,
+>
+>
+>
+> After VM is startup, We ping this nic from host to judge if it is
+>
+> working normally. Then, we hot add pci devices to this VM with bus 0.
+>
+>
+>
+> We  found the virtio-net NIC in bus 4 is not working (can not connect)
+>
+> occasionally, as it kick virtio backend failure with error below:
+>
+>
+>
+>     Unassigned mem write 00000000fc803004 = 0x1
+>
+>
+>
+>
+>
+>
+>
+> memory-region: pci_bridge_pci
+>
+>
+>
+>   0000000000000000-ffffffffffffffff (prio 0, RW): pci_bridge_pci
+>
+>
+>
+>     00000000fc800000-00000000fc803fff (prio 1, RW): virtio-pci
+>
+>
+>
+>       00000000fc800000-00000000fc800fff (prio 0, RW):
+>
+> virtio-pci-common
+>
+>
+>
+>       00000000fc801000-00000000fc801fff (prio 0, RW): virtio-pci-isr
+>
+>
+>
+>       00000000fc802000-00000000fc802fff (prio 0, RW):
+>
+> virtio-pci-device
+>
+>
+>
+>       00000000fc803000-00000000fc803fff (prio 0, RW):
+>
+> virtio-pci-notify  <- io mem unassigned
+>
+>
+>
+>       …
+>
+>
+>
+>
+>
+>
+>
+> We caught an exceptional address changing while this problem happened,
+>
+> show as
+>
+> follow:
+>
+>
+>
+> Before pci_bridge_update_mappings:
+>
+>
+>
+>       00000000fc000000-00000000fc1fffff (prio 1, RW): alias
+>
+> pci_bridge_pref_mem @pci_bridge_pci 00000000fc000000-00000000fc1fffff
+>
+>
+>
+>       00000000fc200000-00000000fc3fffff (prio 1, RW): alias
+>
+> pci_bridge_pref_mem @pci_bridge_pci 00000000fc200000-00000000fc3fffff
+>
+>
+>
+>       00000000fc400000-00000000fc5fffff (prio 1, RW): alias
+>
+> pci_bridge_pref_mem @pci_bridge_pci 00000000fc400000-00000000fc5fffff
+>
+>
+>
+>       00000000fc600000-00000000fc7fffff (prio 1, RW): alias
+>
+> pci_bridge_pref_mem @pci_bridge_pci 00000000fc600000-00000000fc7fffff
+>
+>
+>
+>       00000000fc800000-00000000fc9fffff (prio 1, RW): alias
+>
+> pci_bridge_pref_mem @pci_bridge_pci 00000000fc800000-00000000fc9fffff
+>
+> <- correct Adress Spce
+>
+>
+>
+>       00000000fca00000-00000000fcbfffff (prio 1, RW): alias
+>
+> pci_bridge_pref_mem @pci_bridge_pci 00000000fca00000-00000000fcbfffff
+>
+>
+>
+>       00000000fcc00000-00000000fcdfffff (prio 1, RW): alias
+>
+> pci_bridge_pref_mem @pci_bridge_pci 00000000fcc00000-00000000fcdfffff
+>
+>
+>
+>       00000000fce00000-00000000fcffffff (prio 1, RW): alias
+>
+> pci_bridge_pref_mem @pci_bridge_pci 00000000fce00000-00000000fcffffff
+>
+>
+>
+>
+>
+>
+>
+> After pci_bridge_update_mappings:
+>
+>
+>
+>       00000000fda00000-00000000fdbfffff (prio 1, RW): alias
+>
+> pci_bridge_mem @pci_bridge_pci 00000000fda00000-00000000fdbfffff
+>
+>
+>
+>       00000000fdc00000-00000000fddfffff (prio 1, RW): alias
+>
+> pci_bridge_mem @pci_bridge_pci 00000000fdc00000-00000000fddfffff
+>
+>
+>
+>       00000000fde00000-00000000fdffffff (prio 1, RW): alias
+>
+> pci_bridge_mem @pci_bridge_pci 00000000fde00000-00000000fdffffff
+>
+>
+>
+>       00000000fe000000-00000000fe1fffff (prio 1, RW): alias
+>
+> pci_bridge_mem @pci_bridge_pci 00000000fe000000-00000000fe1fffff
+>
+>
+>
+>       00000000fe200000-00000000fe3fffff (prio 1, RW): alias
+>
+> pci_bridge_mem @pci_bridge_pci 00000000fe200000-00000000fe3fffff
+>
+>
+>
+>       00000000fe400000-00000000fe5fffff (prio 1, RW): alias
+>
+> pci_bridge_mem @pci_bridge_pci 00000000fe400000-00000000fe5fffff
+>
+>
+>
+>       00000000fe600000-00000000fe7fffff (prio 1, RW): alias
+>
+> pci_bridge_mem @pci_bridge_pci 00000000fe600000-00000000fe7fffff
+>
+>
+>
+>       00000000fe800000-00000000fe9fffff (prio 1, RW): alias
+>
+> pci_bridge_mem @pci_bridge_pci 00000000fe800000-00000000fe9fffff
+>
+>
+>
+>       fffffffffc800000-fffffffffc800000 (prio 1, RW): alias
+>
+> pci_bridge_pref_mem
+>
+> @pci_bridge_pci fffffffffc800000-fffffffffc800000   <- Exceptional Adress
+>
+Space
+>
+>
+This one is empty though right?
+>
+>
+>
+>
+>
+>
+> We have figured out why this address becomes this value,  according to
+>
+> pci spec,  pci driver can get BAR address size by writing 0xffffffff
+>
+> to
+>
+>
+>
+> the pci register firstly, and then read back the value from this register.
+>
+>
+>
+OK however as you show below the BAR being sized is the BAR if a bridge. Are
+>
+you then adding a bridge device by hotplug?
+No, I just hot-plugged a VFIO device onto bus 0. Another interesting
+phenomenon is that if I hot-plug the device onto another bus, this does not
+happen.
+ 
+>
+>
+>
+> We didn't handle this value  specially while process pci write in
+>
+> qemu, the function call stack is:
+>
+>
+>
+> Pci_bridge_dev_write_config
+>
+>
+>
+> -> pci_bridge_write_config
+>
+>
+>
+> -> pci_default_write_config (we update the config[address] value here
+>
+> -> to
+>
+> fffffffffc800000, which should be 0xfc800000 )
+>
+>
+>
+> -> pci_bridge_update_mappings
+>
+>
+>
+>                 ->pci_bridge_region_del(br, br->windows);
+>
+>
+>
+> -> pci_bridge_region_init
+>
+>
+>
+>                                                                 ->
+>
+> pci_bridge_init_alias (here pci_bridge_get_base, we use the wrong
+>
+> value
+>
+> fffffffffc800000)
+>
+>
+>
+>                                                 ->
+>
+> memory_region_transaction_commit
+>
+>
+>
+>
+>
+>
+>
+> So, as we can see, we use the wrong base address in qemu to update the
+>
+> memory regions, though, we update the base address to
+>
+>
+>
+> The correct value after pci driver in VM write the original value
+>
+> back, the virtio NIC in bus 4 may still sends net packets concurrently
+>
+> with
+>
+>
+>
+> The wrong memory region address.
+>
+>
+>
+>
+>
+>
+>
+> We have tried to skip the memory region update action in qemu while
+>
+> detect pci write with 0xffffffff value, and it does work, but
+>
+>
+>
+> This seems to be not gently.
+>
+>
+For sure. But I'm still puzzled as to why does Linux try to size the BAR of
+>
+the
+>
+bridge while a device behind it is used.
+>
+>
+Can you pls post your QEMU command line?
+My QEMU command line:
+/root/xyd/qemu-system-x86_64 -name guest=Linux,debug-threads=on -S -object 
+secret,id=masterKey0,format=raw,file=/var/run/libvirt/qemu/domain-194-Linux/master-key.aes
+ -machine pc-i440fx-2.8,accel=kvm,usb=off,dump-guest-core=off -cpu 
+host,+kvm_pv_eoi -bios /usr/share/OVMF/OVMF.fd -m 
+size=4194304k,slots=256,maxmem=33554432k -realtime mlock=off -smp 
+20,sockets=20,cores=1,threads=1 -numa node,nodeid=0,cpus=0-4,mem=1024 -numa 
+node,nodeid=1,cpus=5-9,mem=1024 -numa node,nodeid=2,cpus=10-14,mem=1024 -numa 
+node,nodeid=3,cpus=15-19,mem=1024 -uuid 34a588c7-b0f2-4952-b39c-47fae3411439 
+-no-user-config -nodefaults -chardev 
+socket,id=charmonitor,path=/var/run/libvirt/qemu/domain-194-Linux/monitor.sock,server,nowait
+ -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-hpet 
+-global kvm-pit.lost_tick_policy=delay -no-shutdown -boot strict=on -device 
+pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x8 -device 
+pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x9 -device 
+pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0xa -device 
+pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0xb -device 
+pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0xc -device 
+piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
+usb-ehci,id=usb1,bus=pci.0,addr=0x10 -device 
+nec-usb-xhci,id=usb2,bus=pci.0,addr=0x11 -device 
+virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device 
+virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x4 -device 
+virtio-scsi-pci,id=scsi2,bus=pci.0,addr=0x5 -device 
+virtio-scsi-pci,id=scsi3,bus=pci.0,addr=0x6 -device 
+virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive 
+file=/mnt/sdb/xml/centos_74_x64_uefi.raw,format=raw,if=none,id=drive-virtio-disk0,cache=none
+ -device 
+virtio-blk-pci,scsi=off,bus=pci.0,addr=0x2,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
+ -drive if=none,id=drive-ide0-1-1,readonly=on,cache=none -device 
+ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev 
+tap,fd=35,id=hostnet0 -device 
+virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:89:5d:8b,bus=pci.4,addr=0x1 
+-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 
+-device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0 -device 
+cirrus-vga,id=video0,vgamem_mb=8,bus=pci.0,addr=0x12 -device 
+virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xd -msg timestamp=on
+
+I am also very curious about this issue; in the Linux kernel code, the double
+check in the function pci_bridge_check_ranges() may have triggered this
+problem.
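+
+For context, the part of drivers/pci/setup-bus.c:pci_bridge_check_ranges()
+that transiently writes all-ones to the bridge window register looks roughly
+like this (paraphrased, not an exact excerpt of the kernel source):
+
+    /* probe whether the prefetchable window decodes 64-bit addresses */
+    pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32, 0xffffffff);
+    pci_read_config_dword(bridge, PCI_PREF_BASE_UPPER32, &tmp);
+    if (!tmp)
+        b_res[2].flags &= ~IORESOURCE_MEM_64;
+    pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32, 0x0);
+
+Each of these config writes reaches pci_bridge_write_config() in QEMU, and the
+PCI_PREF_BASE_UPPER32 offset falls inside the PCI_MEMORY_BASE range check
+there, so every one of them currently triggers pci_bridge_update_mappings().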
+
+
+>
+>
+>
+>
+>
+>
+>
+>
+> diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c
+>
+>
+>
+> index b2e50c3..84b405d 100644
+>
+>
+>
+> --- a/hw/pci/pci_bridge.c
+>
+>
+>
+> +++ b/hw/pci/pci_bridge.c
+>
+>
+>
+> @@ -256,7 +256,8 @@ void pci_bridge_write_config(PCIDevice *d,
+>
+>
+>
+>      pci_default_write_config(d, address, val, len);
+>
+>
+>
+> -    if (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+>
+>
+>
+> +    if ( (val != 0xffffffff) &&
+>
+>
+>
+> +        (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+>
+>
+>
+>          /* io base/limit */
+>
+>
+>
+>          ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+>
+>
+>
+> @@ -266,7 +267,7 @@ void pci_bridge_write_config(PCIDevice *d,
+>
+>
+>
+>          ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
+>
+>
+>
+>          /* vga enable */
+>
+>
+>
+> -        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2)) {
+>
+>
+>
+> +        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2))) {
+>
+>
+>
+>          pci_bridge_update_mappings(s);
+>
+>
+>
+>      }
+>
+>
+>
+>
+>
+>
+>
+> Thinks,
+>
+>
+>
+> Xu
+>
+>
+
+On Mon, Dec 10, 2018 at 03:12:53AM +0000, xuyandong wrote:
+>
+On Sat, Dec 08, 2018 at 11:58:59AM +0000, xuyandong wrote:
+>
+> > Hi all,
+>
+> >
+>
+> >
+>
+> >
+>
+> > In our test, we configured VM with several pci-bridges and a
+>
+> > virtio-net nic been attached with bus 4,
+>
+> >
+>
+> > After VM is startup, We ping this nic from host to judge if it is
+>
+> > working normally. Then, we hot add pci devices to this VM with bus 0.
+>
+> >
+>
+> > We  found the virtio-net NIC in bus 4 is not working (can not connect)
+>
+> > occasionally, as it kick virtio backend failure with error below:
+>
+> >
+>
+> >     Unassigned mem write 00000000fc803004 = 0x1
+>
+> >
+>
+> >
+>
+> >
+>
+> > memory-region: pci_bridge_pci
+>
+> >
+>
+> >   0000000000000000-ffffffffffffffff (prio 0, RW): pci_bridge_pci
+>
+> >
+>
+> >     00000000fc800000-00000000fc803fff (prio 1, RW): virtio-pci
+>
+> >
+>
+> >       00000000fc800000-00000000fc800fff (prio 0, RW):
+>
+> > virtio-pci-common
+>
+> >
+>
+> >       00000000fc801000-00000000fc801fff (prio 0, RW): virtio-pci-isr
+>
+> >
+>
+> >       00000000fc802000-00000000fc802fff (prio 0, RW):
+>
+> > virtio-pci-device
+>
+> >
+>
+> >       00000000fc803000-00000000fc803fff (prio 0, RW):
+>
+> > virtio-pci-notify  <- io mem unassigned
+>
+> >
+>
+> >       …
+>
+> >
+>
+> >
+>
+> >
+>
+> > We caught an exceptional address changing while this problem happened,
+>
+> > show as
+>
+> > follow:
+>
+> >
+>
+> > Before pci_bridge_update_mappings:
+>
+> >
+>
+> >       00000000fc000000-00000000fc1fffff (prio 1, RW): alias
+>
+> > pci_bridge_pref_mem @pci_bridge_pci 00000000fc000000-00000000fc1fffff
+>
+> >
+>
+> >       00000000fc200000-00000000fc3fffff (prio 1, RW): alias
+>
+> > pci_bridge_pref_mem @pci_bridge_pci 00000000fc200000-00000000fc3fffff
+>
+> >
+>
+> >       00000000fc400000-00000000fc5fffff (prio 1, RW): alias
+>
+> > pci_bridge_pref_mem @pci_bridge_pci 00000000fc400000-00000000fc5fffff
+>
+> >
+>
+> >       00000000fc600000-00000000fc7fffff (prio 1, RW): alias
+>
+> > pci_bridge_pref_mem @pci_bridge_pci 00000000fc600000-00000000fc7fffff
+>
+> >
+>
+> >       00000000fc800000-00000000fc9fffff (prio 1, RW): alias
+>
+> > pci_bridge_pref_mem @pci_bridge_pci 00000000fc800000-00000000fc9fffff
+>
+> > <- correct Adress Spce
+>
+> >
+>
+> >       00000000fca00000-00000000fcbfffff (prio 1, RW): alias
+>
+> > pci_bridge_pref_mem @pci_bridge_pci 00000000fca00000-00000000fcbfffff
+>
+> >
+>
+> >       00000000fcc00000-00000000fcdfffff (prio 1, RW): alias
+>
+> > pci_bridge_pref_mem @pci_bridge_pci 00000000fcc00000-00000000fcdfffff
+>
+> >
+>
+> >       00000000fce00000-00000000fcffffff (prio 1, RW): alias
+>
+> > pci_bridge_pref_mem @pci_bridge_pci 00000000fce00000-00000000fcffffff
+>
+> >
+>
+> >
+>
+> >
+>
+> > After pci_bridge_update_mappings:
+>
+> >
+>
+> >       00000000fda00000-00000000fdbfffff (prio 1, RW): alias
+>
+> > pci_bridge_mem @pci_bridge_pci 00000000fda00000-00000000fdbfffff
+>
+> >
+>
+> >       00000000fdc00000-00000000fddfffff (prio 1, RW): alias
+>
+> > pci_bridge_mem @pci_bridge_pci 00000000fdc00000-00000000fddfffff
+>
+> >
+>
+> >       00000000fde00000-00000000fdffffff (prio 1, RW): alias
+>
+> > pci_bridge_mem @pci_bridge_pci 00000000fde00000-00000000fdffffff
+>
+> >
+>
+> >       00000000fe000000-00000000fe1fffff (prio 1, RW): alias
+>
+> > pci_bridge_mem @pci_bridge_pci 00000000fe000000-00000000fe1fffff
+>
+> >
+>
+> >       00000000fe200000-00000000fe3fffff (prio 1, RW): alias
+>
+> > pci_bridge_mem @pci_bridge_pci 00000000fe200000-00000000fe3fffff
+>
+> >
+>
+> >       00000000fe400000-00000000fe5fffff (prio 1, RW): alias
+>
+> > pci_bridge_mem @pci_bridge_pci 00000000fe400000-00000000fe5fffff
+>
+> >
+>
+> >       00000000fe600000-00000000fe7fffff (prio 1, RW): alias
+>
+> > pci_bridge_mem @pci_bridge_pci 00000000fe600000-00000000fe7fffff
+>
+> >
+>
+> >       00000000fe800000-00000000fe9fffff (prio 1, RW): alias
+>
+> > pci_bridge_mem @pci_bridge_pci 00000000fe800000-00000000fe9fffff
+>
+> >
+>
+> >       fffffffffc800000-fffffffffc800000 (prio 1, RW): alias
+>
+> > pci_bridge_pref_mem
+>
+> > @pci_bridge_pci fffffffffc800000-fffffffffc800000   <- Exceptional Adress
+>
+> Space
+>
+>
+>
+> This one is empty though right?
+>
+>
+>
+> >
+>
+> >
+>
+> > We have figured out why this address becomes this value,  according to
+>
+> > pci spec,  pci driver can get BAR address size by writing 0xffffffff
+>
+> > to
+>
+> >
+>
+> > the pci register firstly, and then read back the value from this register.
+>
+>
+>
+>
+>
+> OK however as you show below the BAR being sized is the BAR if a bridge. Are
+>
+> you then adding a bridge device by hotplug?
+>
+>
+No, I just simply hot plugged a VFIO device to Bus 0, another interesting
+>
+phenomenon is
+>
+If I hot plug the device to other bus, this doesn't happened.
+>
+>
+>
+>
+>
+>
+> > We didn't handle this value  specially while process pci write in
+>
+> > qemu, the function call stack is:
+>
+> >
+>
+> > Pci_bridge_dev_write_config
+>
+> >
+>
+> > -> pci_bridge_write_config
+>
+> >
+>
+> > -> pci_default_write_config (we update the config[address] value here
+>
+> > -> to
+>
+> > fffffffffc800000, which should be 0xfc800000 )
+>
+> >
+>
+> > -> pci_bridge_update_mappings
+>
+> >
+>
+> >                 ->pci_bridge_region_del(br, br->windows);
+>
+> >
+>
+> > -> pci_bridge_region_init
+>
+> >
+>
+> >                                                                 ->
+>
+> > pci_bridge_init_alias (here pci_bridge_get_base, we use the wrong
+>
+> > value
+>
+> > fffffffffc800000)
+>
+> >
+>
+> >                                                 ->
+>
+> > memory_region_transaction_commit
+>
+> >
+>
+> >
+>
+> >
+>
+> > So, as we can see, we use the wrong base address in qemu to update the
+>
+> > memory regions, though, we update the base address to
+>
+> >
+>
+> > The correct value after pci driver in VM write the original value
+>
+> > back, the virtio NIC in bus 4 may still sends net packets concurrently
+>
+> > with
+>
+> >
+>
+> > The wrong memory region address.
+>
+> >
+>
+> >
+>
+> >
+>
+> > We have tried to skip the memory region update action in qemu while
+>
+> > detect pci write with 0xffffffff value, and it does work, but
+>
+> >
+>
+> > This seems to be not gently.
+>
+>
+>
+> For sure. But I'm still puzzled as to why does Linux try to size the BAR of
+>
+> the
+>
+> bridge while a device behind it is used.
+>
+>
+>
+> Can you pls post your QEMU command line?
+>
+>
+My QEMU command line:
+>
+/root/xyd/qemu-system-x86_64 -name guest=Linux,debug-threads=on -S -object
+>
+secret,id=masterKey0,format=raw,file=/var/run/libvirt/qemu/domain-194-Linux/master-key.aes
+>
+-machine pc-i440fx-2.8,accel=kvm,usb=off,dump-guest-core=off -cpu
+>
+host,+kvm_pv_eoi -bios /usr/share/OVMF/OVMF.fd -m
+>
+size=4194304k,slots=256,maxmem=33554432k -realtime mlock=off -smp
+>
+20,sockets=20,cores=1,threads=1 -numa node,nodeid=0,cpus=0-4,mem=1024 -numa
+>
+node,nodeid=1,cpus=5-9,mem=1024 -numa node,nodeid=2,cpus=10-14,mem=1024 -numa
+>
+node,nodeid=3,cpus=15-19,mem=1024 -uuid 34a588c7-b0f2-4952-b39c-47fae3411439
+>
+-no-user-config -nodefaults -chardev
+>
+socket,id=charmonitor,path=/var/run/libvirt/qemu/domain-194-Linux/monitor.sock,server,nowait
+>
+-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-hpet
+>
+-global kvm-pit.lost_tick_policy=delay -no-shutdown -boot strict=on -device
+>
+pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x8 -device
+>
+pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x9 -device
+>
+pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0xa -device
+>
+pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0xb -device
+>
+pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0xc -device
+>
+piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
+>
+usb-ehci,id=usb1,bus=pci.0,addr=0x10 -device
+>
+nec-usb-xhci,id=usb2,bus=pci.0,addr=0x11 -device
+>
+virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device
+>
+virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x4 -device
+>
+virtio-scsi-pci,id=scsi2,bus=pci.0,addr=0x5 -device
+>
+virtio-scsi-pci,id=scsi3,bus=pci.0,addr=0x6 -device
+>
+virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive
+>
+file=/mnt/sdb/xml/centos_74_x64_uefi.raw,format=raw,if=none,id=drive-virtio-disk0,cache=none
+>
+-device
+>
+virtio-blk-pci,scsi=off,bus=pci.0,addr=0x2,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
+>
+-drive if=none,id=drive-ide0-1-1,readonly=on,cache=none -device
+>
+ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev
+>
+tap,fd=35,id=hostnet0 -device
+>
+virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:89:5d:8b,bus=pci.4,addr=0x1
+>
+-chardev pty,id=charserial0 -device
+>
+isa-serial,chardev=charserial0,id=serial0 -device
+>
+usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0 -device
+>
+cirrus-vga,id=video0,vgamem_mb=8,bus=pci.0,addr=0x12 -device
+>
+virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xd -msg timestamp=on
+>
+>
+I am also very curious about this issue, in the linux kernel code, maybe
+>
+double check in function pci_bridge_check_ranges triggered this problem.
+If you can get the stacktrace in Linux when it tries to write this
+fffff value, that would be quite helpful.
+
+
+>
+>
+>
+>
+>
+>
+>
+>
+> >
+>
+> >
+>
+> > diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c
+>
+> >
+>
+> > index b2e50c3..84b405d 100644
+>
+> >
+>
+> > --- a/hw/pci/pci_bridge.c
+>
+> >
+>
+> > +++ b/hw/pci/pci_bridge.c
+>
+> >
+>
+> > @@ -256,7 +256,8 @@ void pci_bridge_write_config(PCIDevice *d,
+>
+> >
+>
+> >      pci_default_write_config(d, address, val, len);
+>
+> >
+>
+> > -    if (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+>
+> >
+>
+> > +    if ( (val != 0xffffffff) &&
+>
+> >
+>
+> > +        (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+>
+> >
+>
+> >          /* io base/limit */
+>
+> >
+>
+> >          ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+>
+> >
+>
+> > @@ -266,7 +267,7 @@ void pci_bridge_write_config(PCIDevice *d,
+>
+> >
+>
+> >          ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
+>
+> >
+>
+> >          /* vga enable */
+>
+> >
+>
+> > -        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2)) {
+>
+> >
+>
+> > +        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2))) {
+>
+> >
+>
+> >          pci_bridge_update_mappings(s);
+>
+> >
+>
+> >      }
+>
+> >
+>
+> >
+>
+> >
+>
+> > Thinks,
+>
+> >
+>
+> > Xu
+>
+> >
+
+On Sat, Dec 08, 2018 at 11:58:59AM +0000, xuyandong wrote:
+>
+> > > Hi all,
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > In our test, we configured VM with several pci-bridges and a
+>
+> > > virtio-net nic been attached with bus 4,
+>
+> > >
+>
+> > > After VM is startup, We ping this nic from host to judge if it is
+>
+> > > working normally. Then, we hot add pci devices to this VM with bus 0.
+>
+> > >
+>
+> > > We  found the virtio-net NIC in bus 4 is not working (can not
+>
+> > > connect) occasionally, as it kick virtio backend failure with error
+>
+> > > below:
+>
+> > >
+>
+> > >     Unassigned mem write 00000000fc803004 = 0x1
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > memory-region: pci_bridge_pci
+>
+> > >
+>
+> > >   0000000000000000-ffffffffffffffff (prio 0, RW): pci_bridge_pci
+>
+> > >
+>
+> > >     00000000fc800000-00000000fc803fff (prio 1, RW): virtio-pci
+>
+> > >
+>
+> > >       00000000fc800000-00000000fc800fff (prio 0, RW):
+>
+> > > virtio-pci-common
+>
+> > >
+>
+> > >       00000000fc801000-00000000fc801fff (prio 0, RW):
+>
+> > > virtio-pci-isr
+>
+> > >
+>
+> > >       00000000fc802000-00000000fc802fff (prio 0, RW):
+>
+> > > virtio-pci-device
+>
+> > >
+>
+> > >       00000000fc803000-00000000fc803fff (prio 0, RW):
+>
+> > > virtio-pci-notify  <- io mem unassigned
+>
+> > >
+>
+> > >       …
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > We caught an exceptional address changing while this problem
+>
+> > > happened, show as
+>
+> > > follow:
+>
+> > >
+>
+> > > Before pci_bridge_update_mappings:
+>
+> > >
+>
+> > >       00000000fc000000-00000000fc1fffff (prio 1, RW): alias
+>
+> > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > 00000000fc000000-00000000fc1fffff
+>
+> > >
+>
+> > >       00000000fc200000-00000000fc3fffff (prio 1, RW): alias
+>
+> > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > 00000000fc200000-00000000fc3fffff
+>
+> > >
+>
+> > >       00000000fc400000-00000000fc5fffff (prio 1, RW): alias
+>
+> > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > 00000000fc400000-00000000fc5fffff
+>
+> > >
+>
+> > >       00000000fc600000-00000000fc7fffff (prio 1, RW): alias
+>
+> > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > 00000000fc600000-00000000fc7fffff
+>
+> > >
+>
+> > >       00000000fc800000-00000000fc9fffff (prio 1, RW): alias
+>
+> > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > 00000000fc800000-00000000fc9fffff
+>
+> > > <- correct Adress Spce
+>
+> > >
+>
+> > >       00000000fca00000-00000000fcbfffff (prio 1, RW): alias
+>
+> > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > 00000000fca00000-00000000fcbfffff
+>
+> > >
+>
+> > >       00000000fcc00000-00000000fcdfffff (prio 1, RW): alias
+>
+> > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > 00000000fcc00000-00000000fcdfffff
+>
+> > >
+>
+> > >       00000000fce00000-00000000fcffffff (prio 1, RW): alias
+>
+> > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > 00000000fce00000-00000000fcffffff
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > After pci_bridge_update_mappings:
+>
+> > >
+>
+> > >       00000000fda00000-00000000fdbfffff (prio 1, RW): alias
+>
+> > > pci_bridge_mem @pci_bridge_pci 00000000fda00000-00000000fdbfffff
+>
+> > >
+>
+> > >       00000000fdc00000-00000000fddfffff (prio 1, RW): alias
+>
+> > > pci_bridge_mem @pci_bridge_pci 00000000fdc00000-00000000fddfffff
+>
+> > >
+>
+> > >       00000000fde00000-00000000fdffffff (prio 1, RW): alias
+>
+> > > pci_bridge_mem @pci_bridge_pci 00000000fde00000-00000000fdffffff
+>
+> > >
+>
+> > >       00000000fe000000-00000000fe1fffff (prio 1, RW): alias
+>
+> > > pci_bridge_mem @pci_bridge_pci 00000000fe000000-00000000fe1fffff
+>
+> > >
+>
+> > >       00000000fe200000-00000000fe3fffff (prio 1, RW): alias
+>
+> > > pci_bridge_mem @pci_bridge_pci 00000000fe200000-00000000fe3fffff
+>
+> > >
+>
+> > >       00000000fe400000-00000000fe5fffff (prio 1, RW): alias
+>
+> > > pci_bridge_mem @pci_bridge_pci 00000000fe400000-00000000fe5fffff
+>
+> > >
+>
+> > >       00000000fe600000-00000000fe7fffff (prio 1, RW): alias
+>
+> > > pci_bridge_mem @pci_bridge_pci 00000000fe600000-00000000fe7fffff
+>
+> > >
+>
+> > >       00000000fe800000-00000000fe9fffff (prio 1, RW): alias
+>
+> > > pci_bridge_mem @pci_bridge_pci 00000000fe800000-00000000fe9fffff
+>
+> > >
+>
+> > >       fffffffffc800000-fffffffffc800000 (prio 1, RW): alias
+>
+pci_bridge_pref_mem
+>
+> > > @pci_bridge_pci fffffffffc800000-fffffffffc800000   <- Exceptional
+>
+> > > Adress
+>
+> > Space
+>
+> >
+>
+> > This one is empty though right?
+>
+> >
+>
+> > >
+>
+> > >
+>
+> > > We have figured out why this address becomes this value,
+>
+> > > according to pci spec,  pci driver can get BAR address size by
+>
+> > > writing 0xffffffff to
+>
+> > >
+>
+> > > the pci register firstly, and then read back the value from this
+>
+> > > register.
+>
+> >
+>
+> >
+>
+> > OK however as you show below the BAR being sized is the BAR if a
+>
+> > bridge. Are you then adding a bridge device by hotplug?
+>
+>
+>
+> No, I just simply hot plugged a VFIO device to Bus 0, another
+>
+> interesting phenomenon is If I hot plug the device to other bus, this
+>
+> doesn't
+>
+happened.
+>
+>
+>
+> >
+>
+> >
+>
+> > > We didn't handle this value  specially while process pci write in
+>
+> > > qemu, the function call stack is:
+>
+> > >
+>
+> > > Pci_bridge_dev_write_config
+>
+> > >
+>
+> > > -> pci_bridge_write_config
+>
+> > >
+>
+> > > -> pci_default_write_config (we update the config[address] value
+>
+> > > -> here to
+>
+> > > fffffffffc800000, which should be 0xfc800000 )
+>
+> > >
+>
+> > > -> pci_bridge_update_mappings
+>
+> > >
+>
+> > >                 ->pci_bridge_region_del(br, br->windows);
+>
+> > >
+>
+> > > -> pci_bridge_region_init
+>
+> > >
+>
+> > >                                                                 ->
+>
+> > > pci_bridge_init_alias (here pci_bridge_get_base, we use the wrong
+>
+> > > value
+>
+> > > fffffffffc800000)
+>
+> > >
+>
+> > >                                                 ->
+>
+> > > memory_region_transaction_commit
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > So, as we can see, we use the wrong base address in qemu to update
+>
+> > > the memory regions, though, we update the base address to
+>
+> > >
+>
+> > > The correct value after pci driver in VM write the original value
+>
+> > > back, the virtio NIC in bus 4 may still sends net packets
+>
+> > > concurrently with
+>
+> > >
+>
+> > > The wrong memory region address.
+>
+> > >
+>
+> > >
+>
+> > >
+>
+> > > We have tried to skip the memory region update action in qemu
+>
+> > > while detect pci write with 0xffffffff value, and it does work,
+>
+> > > but
+>
+> > >
+>
+> > > This seems to be not gently.
+>
+> >
+>
+> > For sure. But I'm still puzzled as to why does Linux try to size the
+>
+> > BAR of the bridge while a device behind it is used.
+>
+> >
+>
+> > Can you pls post your QEMU command line?
+>
+>
+>
+> My QEMU command line:
+> /root/xyd/qemu-system-x86_64 -name guest=Linux,debug-threads=on -S
+> -object
+> secret,id=masterKey0,format=raw,file=/var/run/libvirt/qemu/domain-194-
+> Linux/master-key.aes -machine
+> pc-i440fx-2.8,accel=kvm,usb=off,dump-guest-core=off -cpu
+> host,+kvm_pv_eoi -bios /usr/share/OVMF/OVMF.fd -m
+> size=4194304k,slots=256,maxmem=33554432k -realtime mlock=off -smp
+> 20,sockets=20,cores=1,threads=1 -numa node,nodeid=0,cpus=0-4,mem=1024
+> -numa node,nodeid=1,cpus=5-9,mem=1024 -numa
+> node,nodeid=2,cpus=10-14,mem=1024 -numa
+> node,nodeid=3,cpus=15-19,mem=1024 -uuid
+> 34a588c7-b0f2-4952-b39c-47fae3411439 -no-user-config -nodefaults
+> -chardev
+> socket,id=charmonitor,path=/var/run/libvirt/qemu/domain-194-Linux/moni
+> tor.sock,server,nowait -mon
+> chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-hpet
+> -global kvm-pit.lost_tick_policy=delay -no-shutdown -boot strict=on
+> -device pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x8 -device
+> pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x9 -device
+> pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0xa -device
+> pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0xb -device
+> pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0xc -device
+> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
+> usb-ehci,id=usb1,bus=pci.0,addr=0x10 -device
+> nec-usb-xhci,id=usb2,bus=pci.0,addr=0x11 -device
+> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device
+> virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x4 -device
+> virtio-scsi-pci,id=scsi2,bus=pci.0,addr=0x5 -device
+> virtio-scsi-pci,id=scsi3,bus=pci.0,addr=0x6 -device
+> virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive
+> file=/mnt/sdb/xml/centos_74_x64_uefi.raw,format=raw,if=none,id=drive-v
+> irtio-disk0,cache=none -device
+> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x2,drive=drive-virtio-disk0,id
+> =virtio-disk0,bootindex=1 -drive
+> if=none,id=drive-ide0-1-1,readonly=on,cache=none -device
+> ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev
+> tap,fd=35,id=hostnet0 -device
+> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:89:5d:8b,bus=pci.4
+> ,addr=0x1 -chardev pty,id=charserial0 -device
+> isa-serial,chardev=charserial0,id=serial0 -device
+> usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0 -device
+> cirrus-vga,id=video0,vgamem_mb=8,bus=pci.0,addr=0x12 -device
+> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xd -msg timestamp=on
+>
+> I am also very curious about this issue; in the Linux kernel code, maybe
+> the double check in function pci_bridge_check_ranges triggered this problem.
+
+> If you can get the stacktrace in Linux when it tries to write this fffff
+> value, that would be quite helpful.
+
+After I add mdelay(100) in function pci_bridge_check_ranges, this phenomenon is
+easier to reproduce; below is my modification to the kernel:
+diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
+index cb389277..86e232d 100644
+--- a/drivers/pci/setup-bus.c
++++ b/drivers/pci/setup-bus.c
+@@ -27,7 +27,7 @@
+ #include <linux/slab.h>
+ #include <linux/acpi.h>
+ #include "pci.h"
+-
++#include <linux/delay.h>
+ unsigned int pci_flags;
+ 
+ struct pci_dev_resource {
+@@ -787,6 +787,9 @@ static void pci_bridge_check_ranges(struct pci_bus *bus)
+                pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+                                               0xffffffff);
+                pci_read_config_dword(bridge, PCI_PREF_BASE_UPPER32, &tmp);
++               mdelay(100);
++               printk(KERN_ERR "sleep\n");
++                dump_stack();
+                if (!tmp)
+                        b_res[2].flags &= ~IORESOURCE_MEM_64;
+                pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+
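+For context, the double check that the mdelay() is inserted into probes
+64-bit window support by writing all-ones to PCI_PREF_BASE_UPPER32, reading it
+back, and then restoring the previously read value (the write the quoted hunk
+ends on). The stand-alone model below uses a fake config space and is an
+illustration of that sequence only, not kernel code: between the all-ones
+write and the restore, the bridge's config space advertises a bogus window,
+and the added mdelay(100) merely holds that window open long enough for the
+virtio NIC's traffic to hit it reliably.
+
+#include <stdint.h>
+#include <stdio.h>
+
+/* Fake 256-byte config space, dword-indexed; illustrative only. */
+#define PCI_PREF_BASE_UPPER32  0x28
+static uint32_t cfg[64];
+
+static uint32_t cfg_read32(int off)              { return cfg[off / 4]; }
+static void     cfg_write32(int off, uint32_t v) { cfg[off / 4] = v; }
+
+int main(void)
+{
+    uint32_t mem_base_hi, tmp;
+
+    cfg_write32(PCI_PREF_BASE_UPPER32, 0x0);          /* window below 4G     */
+
+    /* pci_bridge_check_ranges()-style double check (paraphrased): */
+    mem_base_hi = cfg_read32(PCI_PREF_BASE_UPPER32);  /* save                */
+    cfg_write32(PCI_PREF_BASE_UPPER32, 0xffffffff);   /* probe with all-ones */
+    tmp = cfg_read32(PCI_PREF_BASE_UPPER32);          /* read back           */
+
+    /* The mdelay(100)/dump_stack() above sit here: until the restoring write
+     * below, QEMU has already remapped its aliases using the all-ones upper
+     * dword, i.e. the 0xfffffffffc800000 base seen earlier in the thread. */
+    printf("upper dword during probe: 0x%08x (read back 0x%08x)\n",
+           (unsigned)cfg_read32(PCI_PREF_BASE_UPPER32), (unsigned)tmp);
+
+    cfg_write32(PCI_PREF_BASE_UPPER32, mem_base_hi);  /* restore             */
+    printf("upper dword after restore: 0x%08x\n",
+           (unsigned)cfg_read32(PCI_PREF_BASE_UPPER32));
+    return 0;
+}
+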
+After hot plugging, we get the following log:
+
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:14.0: BAR 0: assigned [mem 
+0xc2360000-0xc237ffff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:14.0: BAR 3: assigned [mem 
+0xc2328000-0xc232bfff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:16 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:17 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:18 uefi-linux kernel: sleep
+Dec 11 09:28:18 uefi-linux kernel: CPU: 16 PID: 502 Comm: kworker/u40:1 Not 
+tainted 4.11.0-rc3+ #11
+Dec 11 09:28:18 uefi-linux kernel: Hardware name: QEMU Standard PC (i440FX + 
+PIIX, 1996), BIOS 0.0.0 02/06/2015
+Dec 11 09:28:18 uefi-linux kernel: Workqueue: kacpi_hotplug acpi_hotplug_work_fn
+Dec 11 09:28:18 uefi-linux kernel: Call Trace:
+Dec 11 09:28:18 uefi-linux kernel: dump_stack+0x63/0x87
+Dec 11 09:28:18 uefi-linux kernel: __pci_bus_size_bridges+0x931/0x960
+Dec 11 09:28:18 uefi-linux kernel: ? dev_printk+0x4d/0x50
+Dec 11 09:28:18 uefi-linux kernel: enable_slot+0x140/0x2f0
+Dec 11 09:28:18 uefi-linux kernel: ? __pm_runtime_resume+0x5c/0x80
+Dec 11 09:28:18 uefi-linux kernel: ? trim_stale_devices+0x9a/0x120
+Dec 11 09:28:18 uefi-linux kernel: acpiphp_check_bridge.part.6+0xf5/0x120
+Dec 11 09:28:18 uefi-linux kernel: acpiphp_hotplug_notify+0x145/0x1c0
+Dec 11 09:28:18 uefi-linux kernel: ? acpiphp_post_dock_fixup+0xc0/0xc0
+Dec 11 09:28:18 uefi-linux kernel: acpi_device_hotplug+0x3a6/0x3f3
+Dec 11 09:28:18 uefi-linux kernel: acpi_hotplug_work_fn+0x1e/0x29
+Dec 11 09:28:18 uefi-linux kernel: process_one_work+0x165/0x410
+Dec 11 09:28:18 uefi-linux kernel: worker_thread+0x137/0x4c0
+Dec 11 09:28:18 uefi-linux kernel: kthread+0x101/0x140
+Dec 11 09:28:18 uefi-linux kernel: ? rescuer_thread+0x380/0x380
+Dec 11 09:28:18 uefi-linux kernel: ? kthread_park+0x90/0x90
+Dec 11 09:28:18 uefi-linux kernel: ret_from_fork+0x2c/0x40
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:18 uefi-linux kernel: sleep
+Dec 11 09:28:18 uefi-linux kernel: CPU: 16 PID: 502 Comm: kworker/u40:1 Not 
+tainted 4.11.0-rc3+ #11
+Dec 11 09:28:18 uefi-linux kernel: Hardware name: QEMU Standard PC (i440FX + 
+PIIX, 1996), BIOS 0.0.0 02/06/2015
+Dec 11 09:28:18 uefi-linux kernel: Workqueue: kacpi_hotplug acpi_hotplug_work_fn
+Dec 11 09:28:18 uefi-linux kernel: Call Trace:
+Dec 11 09:28:18 uefi-linux kernel: dump_stack+0x63/0x87
+Dec 11 09:28:18 uefi-linux kernel: __pci_bus_size_bridges+0x931/0x960
+Dec 11 09:28:18 uefi-linux kernel: ? dev_printk+0x4d/0x50
+Dec 11 09:28:18 uefi-linux kernel: enable_slot+0x140/0x2f0
+Dec 11 09:28:18 uefi-linux kernel: ? __pm_runtime_resume+0x5c/0x80
+Dec 11 09:28:18 uefi-linux kernel: ? trim_stale_devices+0x9a/0x120
+Dec 11 09:28:18 uefi-linux kernel: acpiphp_check_bridge.part.6+0xf5/0x120
+Dec 11 09:28:18 uefi-linux kernel: acpiphp_hotplug_notify+0x145/0x1c0
+Dec 11 09:28:18 uefi-linux kernel: ? acpiphp_post_dock_fixup+0xc0/0xc0
+Dec 11 09:28:18 uefi-linux kernel: acpi_device_hotplug+0x3a6/0x3f3
+Dec 11 09:28:18 uefi-linux kernel: acpi_hotplug_work_fn+0x1e/0x29
+Dec 11 09:28:18 uefi-linux kernel: process_one_work+0x165/0x410
+Dec 11 09:28:18 uefi-linux kernel: worker_thread+0x137/0x4c0
+Dec 11 09:28:18 uefi-linux kernel: kthread+0x101/0x140
+Dec 11 09:28:18 uefi-linux kernel: ? rescuer_thread+0x380/0x380
+Dec 11 09:28:18 uefi-linux kernel: ? kthread_park+0x90/0x90
+Dec 11 09:28:18 uefi-linux kernel: ret_from_fork+0x2c/0x40
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:18 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:19 uefi-linux kernel: sleep
+Dec 11 09:28:19 uefi-linux kernel: CPU: 17 PID: 502 Comm: kworker/u40:1 Not 
+tainted 4.11.0-rc3+ #11
+Dec 11 09:28:19 uefi-linux kernel: Hardware name: QEMU Standard PC (i440FX + 
+PIIX, 1996), BIOS 0.0.0 02/06/2015
+Dec 11 09:28:19 uefi-linux kernel: Workqueue: kacpi_hotplug acpi_hotplug_work_fn
+Dec 11 09:28:19 uefi-linux kernel: Call Trace:
+Dec 11 09:28:19 uefi-linux kernel: dump_stack+0x63/0x87
+Dec 11 09:28:19 uefi-linux kernel: __pci_bus_size_bridges+0x931/0x960
+Dec 11 09:28:19 uefi-linux kernel: ? dev_printk+0x4d/0x50
+Dec 11 09:28:19 uefi-linux kernel: enable_slot+0x140/0x2f0
+Dec 11 09:28:19 uefi-linux kernel: ? __pm_runtime_resume+0x5c/0x80
+Dec 11 09:28:19 uefi-linux kernel: ? trim_stale_devices+0x9a/0x120
+Dec 11 09:28:19 uefi-linux kernel: acpiphp_check_bridge.part.6+0xf5/0x120
+Dec 11 09:28:19 uefi-linux kernel: acpiphp_hotplug_notify+0x145/0x1c0
+Dec 11 09:28:19 uefi-linux kernel: ? acpiphp_post_dock_fixup+0xc0/0xc0
+Dec 11 09:28:19 uefi-linux kernel: acpi_device_hotplug+0x3a6/0x3f3
+Dec 11 09:28:19 uefi-linux kernel: acpi_hotplug_work_fn+0x1e/0x29
+Dec 11 09:28:19 uefi-linux kernel: process_one_work+0x165/0x410
+Dec 11 09:28:19 uefi-linux kernel: worker_thread+0x137/0x4c0
+Dec 11 09:28:19 uefi-linux kernel: kthread+0x101/0x140
+Dec 11 09:28:19 uefi-linux kernel: ? rescuer_thread+0x380/0x380
+Dec 11 09:28:19 uefi-linux kernel: ? kthread_park+0x90/0x90
+Dec 11 09:28:19 uefi-linux kernel: ret_from_fork+0x2c/0x40
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:19 uefi-linux kernel: sleep
+Dec 11 09:28:19 uefi-linux kernel: CPU: 17 PID: 502 Comm: kworker/u40:1 Not 
+tainted 4.11.0-rc3+ #11
+Dec 11 09:28:19 uefi-linux kernel: Hardware name: QEMU Standard PC (i440FX + 
+PIIX, 1996), BIOS 0.0.0 02/06/2015
+Dec 11 09:28:19 uefi-linux kernel: Workqueue: kacpi_hotplug acpi_hotplug_work_fn
+Dec 11 09:28:19 uefi-linux kernel: Call Trace:
+Dec 11 09:28:19 uefi-linux kernel: dump_stack+0x63/0x87
+Dec 11 09:28:19 uefi-linux kernel: __pci_bus_size_bridges+0x931/0x960
+Dec 11 09:28:19 uefi-linux kernel: ? pci_conf1_read+0xba/0x100
+Dec 11 09:28:19 uefi-linux kernel: __pci_bus_size_bridges+0xe9/0x960
+Dec 11 09:28:19 uefi-linux kernel: ? dev_printk+0x4d/0x50
+Dec 11 09:28:19 uefi-linux kernel: ? pcibios_allocate_rom_resources+0x45/0x80
+Dec 11 09:28:19 uefi-linux kernel: enable_slot+0x140/0x2f0
+Dec 11 09:28:19 uefi-linux kernel: ? trim_stale_devices+0x9a/0x120
+Dec 11 09:28:19 uefi-linux kernel: ? __pm_runtime_resume+0x5c/0x80
+Dec 11 09:28:19 uefi-linux kernel: ? trim_stale_devices+0x9a/0x120
+Dec 11 09:28:19 uefi-linux kernel: acpiphp_check_bridge.part.6+0xf5/0x120
+Dec 11 09:28:19 uefi-linux kernel: acpiphp_hotplug_notify+0x145/0x1c0
+Dec 11 09:28:19 uefi-linux kernel: ? acpiphp_post_dock_fixup+0xc0/0xc0
+Dec 11 09:28:19 uefi-linux kernel: acpi_device_hotplug+0x3a6/0x3f3
+Dec 11 09:28:19 uefi-linux kernel: acpi_hotplug_work_fn+0x1e/0x29
+Dec 11 09:28:19 uefi-linux kernel: process_one_work+0x165/0x410
+Dec 11 09:28:19 uefi-linux kernel: worker_thread+0x137/0x4c0
+Dec 11 09:28:19 uefi-linux kernel: kthread+0x101/0x140
+Dec 11 09:28:19 uefi-linux kernel: ? rescuer_thread+0x380/0x380
+Dec 11 09:28:19 uefi-linux kernel: ? kthread_park+0x90/0x90
+Dec 11 09:28:19 uefi-linux kernel: ret_from_fork+0x2c/0x40
+Dec 11 09:28:19 uefi-linux kernel: sleep
+Dec 11 09:28:19 uefi-linux kernel: CPU: 17 PID: 502 Comm: kworker/u40:1 Not 
+tainted 4.11.0-rc3+ #11
+Dec 11 09:28:19 uefi-linux kernel: Hardware name: QEMU Standard PC (i440FX + 
+PIIX, 1996), BIOS 0.0.0 02/06/2015
+Dec 11 09:28:19 uefi-linux kernel: Workqueue: kacpi_hotplug acpi_hotplug_work_fn
+Dec 11 09:28:19 uefi-linux kernel: Call Trace:
+Dec 11 09:28:19 uefi-linux kernel: dump_stack+0x63/0x87
+Dec 11 09:28:19 uefi-linux kernel: __pci_bus_size_bridges+0x931/0x960
+Dec 11 09:28:19 uefi-linux kernel: ? dev_printk+0x4d/0x50
+Dec 11 09:28:19 uefi-linux kernel: enable_slot+0x140/0x2f0
+Dec 11 09:28:19 uefi-linux kernel: ? trim_stale_devices+0x9a/0x120
+Dec 11 09:28:19 uefi-linux kernel: ? __pm_runtime_resume+0x5c/0x80
+Dec 11 09:28:19 uefi-linux kernel: ? trim_stale_devices+0x9a/0x120
+Dec 11 09:28:19 uefi-linux kernel: acpiphp_check_bridge.part.6+0xf5/0x120
+Dec 11 09:28:19 uefi-linux kernel: acpiphp_hotplug_notify+0x145/0x1c0
+Dec 11 09:28:19 uefi-linux kernel: ? acpiphp_post_dock_fixup+0xc0/0xc0
+Dec 11 09:28:19 uefi-linux kernel: acpi_device_hotplug+0x3a6/0x3f3
+Dec 11 09:28:19 uefi-linux kernel: acpi_hotplug_work_fn+0x1e/0x29
+Dec 11 09:28:19 uefi-linux kernel: process_one_work+0x165/0x410
+Dec 11 09:28:19 uefi-linux kernel: worker_thread+0x137/0x4c0
+Dec 11 09:28:19 uefi-linux kernel: kthread+0x101/0x140
+Dec 11 09:28:19 uefi-linux kernel: ? rescuer_thread+0x380/0x380
+Dec 11 09:28:19 uefi-linux kernel: ? kthread_park+0x90/0x90
+Dec 11 09:28:19 uefi-linux kernel: ret_from_fork+0x2c/0x40
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:19 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 lost sync at byte 1
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 - driver resynced.
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 lost sync at byte 1
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 - driver resynced.
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 lost sync at byte 1
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 - driver resynced.
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 lost sync at byte 1
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 - driver resynced.
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 lost sync at byte 1
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 - driver resynced.
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 lost sync at byte 1
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:20 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 - driver resynced.
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:20 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:21 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 lost sync at byte 1
+Dec 11 09:28:21 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 - driver resynced.
+Dec 11 09:28:21 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 lost sync at byte 1
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:21 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 - driver resynced.
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:21 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 lost sync at byte 1
+Dec 11 09:28:21 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 lost sync at byte 1
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: psmouse serio1: VMMouse at 
+isa0060/serio1/input0 - driver resynced.
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:21 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:08.0: PCI bridge to [bus 01]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:08.0:   bridge window [io  
+0xf000-0xffff]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2800000-0xc29fffff]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:08.0:   bridge window [mem 
+0xc2b00000-0xc2cfffff 64bit pref]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:09.0: PCI bridge to [bus 02]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:09.0:   bridge window [io  
+0xe000-0xefff]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2600000-0xc27fffff]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:09.0:   bridge window [mem 
+0xc2d00000-0xc2efffff 64bit pref]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:0a.0: PCI bridge to [bus 03]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [io  
+0xd000-0xdfff]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2400000-0xc25fffff]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:00:0a.0:   bridge window [mem 
+0xc2f00000-0xc30fffff 64bit pref]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:04:0c.0: PCI bridge to [bus 05]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [io  
+0xc000-0xcfff]
+Dec 11 09:28:22 uefi-linux kernel: pci 0000:04:0c.0:   bridge window [mem 
+0xc2000000-0xc21fffff]
+
+>
+> >
+> > > diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c
+> > > index b2e50c3..84b405d 100644
+> > > --- a/hw/pci/pci_bridge.c
+> > > +++ b/hw/pci/pci_bridge.c
+> > > @@ -256,7 +256,8 @@ void pci_bridge_write_config(PCIDevice *d,
+> > >      pci_default_write_config(d, address, val, len);
+> > > -    if (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+> > > +    if ( (val != 0xffffffff) &&
+> > > +        (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+> > >          /* io base/limit */
+> > >          ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+> > > @@ -266,7 +267,7 @@ void pci_bridge_write_config(PCIDevice *d,
+> > >          ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
+> > >          /* vga enable */
+> > > -        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2)) {
+> > > +        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2))) {
+> > >          pci_bridge_update_mappings(s);
+> > >      }
+> > >
+> > > Thanks,
+> > > Xu
+>
+
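+To make the quoted workaround easier to reason about, the resulting condition
+can be written out as a stand-alone predicate (offsets per the PCI type-1
+header; ranges_overlap() re-implemented locally to mirror QEMU's helper). A
+write to PCI_PREF_BASE_UPPER32 (0x28) falls inside the 20-byte range starting
+at PCI_MEMORY_BASE, which is why the sizing write reaches
+pci_bridge_update_mappings() at all; with the proposed guard, any 0xffffffff
+write into these registers is skipped, including a legitimate one, which is in
+line with the reporter's own feeling that this is not a clean solution.
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+
+/* PCI type-1 header offsets; ranges_overlap() mirrors QEMU's helper. */
+#define PCI_COMMAND            0x04
+#define PCI_IO_BASE            0x1c
+#define PCI_MEMORY_BASE        0x20
+#define PCI_PREF_BASE_UPPER32  0x28
+#define PCI_BRIDGE_CONTROL     0x3e
+
+static bool ranges_overlap(uint32_t f1, uint32_t l1, uint32_t f2, uint32_t l2)
+{
+    return f1 < f2 + l2 && f2 < f1 + l1;
+}
+
+/* Condition from the quoted proposal: remap only if the write touches a
+ * window-related register and the value is not the all-ones probe pattern. */
+static bool should_update_mappings(uint32_t address, uint32_t len, uint32_t val)
+{
+    return (val != 0xffffffff) &&
+           (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+            ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+            ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
+            ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2));
+}
+
+int main(void)
+{
+    /* The kernel's 64-bit probe write: now skipped, closing this race. */
+    printf("0xffffffff -> upper32: remap=%d\n",
+           should_update_mappings(PCI_PREF_BASE_UPPER32, 4, 0xffffffff));
+    /* A normal base update: still remapped as before. */
+    printf("0x00000000 -> upper32: remap=%d\n",
+           should_update_mappings(PCI_PREF_BASE_UPPER32, 4, 0x00000000));
+    return 0;
+}
+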
+On Tue, Dec 11, 2018 at 01:47:37AM +0000, xuyandong wrote:
+>
+On Sat, Dec 08, 2018 at 11:58:59AM +0000, xuyandong wrote:
+>
+> > > > Hi all,
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > > In our test, we configured VM with several pci-bridges and a
+>
+> > > > virtio-net nic been attached with bus 4,
+>
+> > > >
+>
+> > > > After VM is startup, We ping this nic from host to judge if it is
+>
+> > > > working normally. Then, we hot add pci devices to this VM with bus 0.
+>
+> > > >
+>
+> > > > We  found the virtio-net NIC in bus 4 is not working (can not
+>
+> > > > connect) occasionally, as it kick virtio backend failure with error
+>
+> > > > below:
+>
+> > > >
+>
+> > > >     Unassigned mem write 00000000fc803004 = 0x1
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > > memory-region: pci_bridge_pci
+>
+> > > >
+>
+> > > >   0000000000000000-ffffffffffffffff (prio 0, RW): pci_bridge_pci
+>
+> > > >
+>
+> > > >     00000000fc800000-00000000fc803fff (prio 1, RW): virtio-pci
+>
+> > > >
+>
+> > > >       00000000fc800000-00000000fc800fff (prio 0, RW):
+>
+> > > > virtio-pci-common
+>
+> > > >
+>
+> > > >       00000000fc801000-00000000fc801fff (prio 0, RW):
+>
+> > > > virtio-pci-isr
+>
+> > > >
+>
+> > > >       00000000fc802000-00000000fc802fff (prio 0, RW):
+>
+> > > > virtio-pci-device
+>
+> > > >
+>
+> > > >       00000000fc803000-00000000fc803fff (prio 0, RW):
+>
+> > > > virtio-pci-notify  <- io mem unassigned
+>
+> > > >
+>
+> > > >       …
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > > We caught an exceptional address changing while this problem
+>
+> > > > happened, show as
+>
+> > > > follow:
+>
+> > > >
+>
+> > > > Before pci_bridge_update_mappings:
+>
+> > > >
+>
+> > > >       00000000fc000000-00000000fc1fffff (prio 1, RW): alias
+>
+> > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > 00000000fc000000-00000000fc1fffff
+>
+> > > >
+>
+> > > >       00000000fc200000-00000000fc3fffff (prio 1, RW): alias
+>
+> > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > 00000000fc200000-00000000fc3fffff
+>
+> > > >
+>
+> > > >       00000000fc400000-00000000fc5fffff (prio 1, RW): alias
+>
+> > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > 00000000fc400000-00000000fc5fffff
+>
+> > > >
+>
+> > > >       00000000fc600000-00000000fc7fffff (prio 1, RW): alias
+>
+> > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > 00000000fc600000-00000000fc7fffff
+>
+> > > >
+>
+> > > >       00000000fc800000-00000000fc9fffff (prio 1, RW): alias
+>
+> > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > 00000000fc800000-00000000fc9fffff
+>
+> > > > <- correct Address Space
+>
+> > > >
+>
+> > > >       00000000fca00000-00000000fcbfffff (prio 1, RW): alias
+>
+> > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > 00000000fca00000-00000000fcbfffff
+>
+> > > >
+>
+> > > >       00000000fcc00000-00000000fcdfffff (prio 1, RW): alias
+>
+> > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > 00000000fcc00000-00000000fcdfffff
+>
+> > > >
+>
+> > > >       00000000fce00000-00000000fcffffff (prio 1, RW): alias
+>
+> > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > 00000000fce00000-00000000fcffffff
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > > After pci_bridge_update_mappings:
+>
+> > > >
+>
+> > > >       00000000fda00000-00000000fdbfffff (prio 1, RW): alias
+>
+> > > > pci_bridge_mem @pci_bridge_pci 00000000fda00000-00000000fdbfffff
+>
+> > > >
+>
+> > > >       00000000fdc00000-00000000fddfffff (prio 1, RW): alias
+>
+> > > > pci_bridge_mem @pci_bridge_pci 00000000fdc00000-00000000fddfffff
+>
+> > > >
+>
+> > > >       00000000fde00000-00000000fdffffff (prio 1, RW): alias
+>
+> > > > pci_bridge_mem @pci_bridge_pci 00000000fde00000-00000000fdffffff
+>
+> > > >
+>
+> > > >       00000000fe000000-00000000fe1fffff (prio 1, RW): alias
+>
+> > > > pci_bridge_mem @pci_bridge_pci 00000000fe000000-00000000fe1fffff
+>
+> > > >
+>
+> > > >       00000000fe200000-00000000fe3fffff (prio 1, RW): alias
+>
+> > > > pci_bridge_mem @pci_bridge_pci 00000000fe200000-00000000fe3fffff
+>
+> > > >
+>
+> > > >       00000000fe400000-00000000fe5fffff (prio 1, RW): alias
+>
+> > > > pci_bridge_mem @pci_bridge_pci 00000000fe400000-00000000fe5fffff
+>
+> > > >
+>
+> > > >       00000000fe600000-00000000fe7fffff (prio 1, RW): alias
+>
+> > > > pci_bridge_mem @pci_bridge_pci 00000000fe600000-00000000fe7fffff
+>
+> > > >
+>
+> > > >       00000000fe800000-00000000fe9fffff (prio 1, RW): alias
+>
+> > > > pci_bridge_mem @pci_bridge_pci 00000000fe800000-00000000fe9fffff
+>
+> > > >
+>
+> > > >       fffffffffc800000-fffffffffc800000 (prio 1, RW): alias
+>
+> pci_bridge_pref_mem
+>
+> > > > @pci_bridge_pci fffffffffc800000-fffffffffc800000   <- Exceptional
+>
+> > > > Address
+>
+> > > Space
+>
+> > >
+>
+> > > This one is empty though right?
+>
+> > >
+>
+> > > >
+>
+> > > >
+>
+> > > > We have figured out why this address becomes this value,
+>
+> > > > according to pci spec,  pci driver can get BAR address size by
+>
+> > > > writing 0xffffffff to
+>
+> > > >
+>
+> > > > the pci register firstly, and then read back the value from this
+>
+> > > > register.
+>
+> > >
+>
+> > >
+>
+> > > OK however as you show below the BAR being sized is the BAR if a
+>
+> > > bridge. Are you then adding a bridge device by hotplug?
+>
+> >
+>
+> > No, I just simply hot plugged a VFIO device to Bus 0, another
+>
+> > interesting phenomenon is If I hot plug the device to other bus, this
+>
+> > doesn't
+>
+> happened.
+>
+> >
+>
+> > >
+>
+> > >
+>
+> > > > We didn't handle this value  specially while process pci write in
+>
+> > > > qemu, the function call stack is:
+>
+> > > >
+>
+> > > > Pci_bridge_dev_write_config
+>
+> > > >
+>
+> > > > -> pci_bridge_write_config
+>
+> > > >
+>
+> > > > -> pci_default_write_config (we update the config[address] value
+>
+> > > > -> here to
+>
+> > > > fffffffffc800000, which should be 0xfc800000 )
+>
+> > > >
+>
+> > > > -> pci_bridge_update_mappings
+>
+> > > >
+>
+> > > >                 ->pci_bridge_region_del(br, br->windows);
+>
+> > > >
+>
+> > > > -> pci_bridge_region_init
+>
+> > > >
+>
+> > > >                                                                 ->
+>
+> > > > pci_bridge_init_alias (here pci_bridge_get_base, we use the wrong
+>
+> > > > value
+>
+> > > > fffffffffc800000)
+>
+> > > >
+>
+> > > >                                                 ->
+>
+> > > > memory_region_transaction_commit
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > > So, as we can see, we use the wrong base address in qemu to update
+>
+> > > > the memory regions, though, we update the base address to
+>
+> > > >
+>
+> > > > The correct value after pci driver in VM write the original value
+>
+> > > > back, the virtio NIC in bus 4 may still sends net packets
+>
+> > > > concurrently with
+>
+> > > >
+>
+> > > > The wrong memory region address.
+>
+> > > >
+>
+> > > >
+>
+> > > >
+>
+> > > > We have tried to skip the memory region update action in qemu
+>
+> > > > while detect pci write with 0xffffffff value, and it does work,
+>
+> > > > but
+>
+> > > >
+>
+> > > > This seems to be not gently.
+>
+> > >
+>
+> > > For sure. But I'm still puzzled as to why does Linux try to size the
+>
+> > > BAR of the bridge while a device behind it is used.
+>
+> > >
+>
+> > > Can you pls post your QEMU command line?
+>
+> >
+>
+> > My QEMU command line:
+>
+> > /root/xyd/qemu-system-x86_64 -name guest=Linux,debug-threads=on -S
+>
+> > -object
+>
+> > secret,id=masterKey0,format=raw,file=/var/run/libvirt/qemu/domain-194-
+>
+> > Linux/master-key.aes -machine
+>
+> > pc-i440fx-2.8,accel=kvm,usb=off,dump-guest-core=off -cpu
+>
+> > host,+kvm_pv_eoi -bios /usr/share/OVMF/OVMF.fd -m
+>
+> > size=4194304k,slots=256,maxmem=33554432k -realtime mlock=off -smp
+>
+> > 20,sockets=20,cores=1,threads=1 -numa node,nodeid=0,cpus=0-4,mem=1024
+>
+> > -numa node,nodeid=1,cpus=5-9,mem=1024 -numa
+>
+> > node,nodeid=2,cpus=10-14,mem=1024 -numa
+>
+> > node,nodeid=3,cpus=15-19,mem=1024 -uuid
+>
+> > 34a588c7-b0f2-4952-b39c-47fae3411439 -no-user-config -nodefaults
+>
+> > -chardev
+>
+> > socket,id=charmonitor,path=/var/run/libvirt/qemu/domain-194-Linux/moni
+>
+> > tor.sock,server,nowait -mon
+>
+> > chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-hpet
+>
+> > -global kvm-pit.lost_tick_policy=delay -no-shutdown -boot strict=on
+>
+> > -device pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x8 -device
+>
+> > pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x9 -device
+>
+> > pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0xa -device
+>
+> > pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0xb -device
+>
+> > pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0xc -device
+>
+> > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
+>
+> > usb-ehci,id=usb1,bus=pci.0,addr=0x10 -device
+>
+> > nec-usb-xhci,id=usb2,bus=pci.0,addr=0x11 -device
+>
+> > virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device
+>
+> > virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x4 -device
+>
+> > virtio-scsi-pci,id=scsi2,bus=pci.0,addr=0x5 -device
+>
+> > virtio-scsi-pci,id=scsi3,bus=pci.0,addr=0x6 -device
+>
+> > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive
+>
+> > file=/mnt/sdb/xml/centos_74_x64_uefi.raw,format=raw,if=none,id=drive-v
+>
+> > irtio-disk0,cache=none -device
+>
+> > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x2,drive=drive-virtio-disk0,id
+>
+> > =virtio-disk0,bootindex=1 -drive
+>
+> > if=none,id=drive-ide0-1-1,readonly=on,cache=none -device
+>
+> > ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev
+>
+> > tap,fd=35,id=hostnet0 -device
+>
+> > virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:89:5d:8b,bus=pci.4
+>
+> > ,addr=0x1 -chardev pty,id=charserial0 -device
+>
+> > isa-serial,chardev=charserial0,id=serial0 -device
+>
+> > usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0 -device
+>
+> > cirrus-vga,id=video0,vgamem_mb=8,bus=pci.0,addr=0x12 -device
+>
+> > virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xd -msg timestamp=on
+>
+> >
+>
+> > I am also very curious about this issue, in the linux kernel code, maybe
+>
+> > double
+>
+> check in function pci_bridge_check_ranges triggered this problem.
+>
+>
+>
+> If you can get the stacktrace in Linux when it tries to write this fffff
+>
+> value, that
+>
+> would be quite helpful.
+>
+>
+>
+>
+After I add mdelay(100) in function pci_bridge_check_ranges, this phenomenon
+>
+is
+>
+easier to reproduce, below is my modify in kernel:
+>
+diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
+>
+index cb389277..86e232d 100644
+>
+--- a/drivers/pci/setup-bus.c
+>
++++ b/drivers/pci/setup-bus.c
+>
+@@ -27,7 +27,7 @@
+>
+#include <linux/slab.h>
+>
+#include <linux/acpi.h>
+>
+#include "pci.h"
+>
+-
+>
++#include <linux/delay.h>
+>
+unsigned int pci_flags;
+>
+>
+struct pci_dev_resource {
+>
+@@ -787,6 +787,9 @@ static void pci_bridge_check_ranges(struct pci_bus *bus)
+>
+pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+>
+0xffffffff);
+>
+pci_read_config_dword(bridge, PCI_PREF_BASE_UPPER32, &tmp);
+>
++               mdelay(100);
+>
++               printk(KERN_ERR "sleep\n");
+>
++                dump_stack();
+>
+if (!tmp)
+>
+b_res[2].flags &= ~IORESOURCE_MEM_64;
+>
+pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+>
+OK!
+I just sent a Linux patch that should help.
+I would appreciate it if you could give it a try
+and, if it helps, reply to it with
+a Tested-by: tag.
+
+-- 
+MST
+
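+MST's actual patch is not quoted in this thread, so the sketch below is only
+an illustration of the general direction such a fix can take (names and
+structure are hypothetical): probe whether the bridge supports an I/O window
+and a 64-bit prefetchable window once, at enumeration time, when nothing
+behind the bridge is live, cache the answer, and let later hot-plug rescans
+such as the __pci_bus_size_bridges() path in the trace above consume the
+cached flags instead of rewriting a live bridge's window registers.
+
+#include <stdbool.h>
+#include <stdio.h>
+
+/* Hypothetical sketch, not the actual Linux patch referred to above. */
+struct bridge_window_caps {
+    bool has_io_window;
+    bool has_pref_window;
+    bool pref_window_is_64bit;
+};
+
+/* Enumeration time: the only place allowed to run the destructive
+ * write-all-ones/read-back probe (elided here). */
+static struct bridge_window_caps probe_bridge_windows_once(void)
+{
+    struct bridge_window_caps caps = {
+        .has_io_window = true,
+        .has_pref_window = true,
+        .pref_window_is_64bit = true,
+    };
+    return caps;
+}
+
+/* Hot-plug rescan: pure lookup, no config-space writes at all. */
+static void size_bridge_windows(const struct bridge_window_caps *caps)
+{
+    printf("io:%d pref:%d pref64:%d\n", caps->has_io_window,
+           caps->has_pref_window, caps->pref_window_is_64bit);
+}
+
+int main(void)
+{
+    struct bridge_window_caps caps = probe_bridge_windows_once();
+    size_bridge_windows(&caps);   /* safe to call while devices are live */
+    return 0;
+}
+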
+On Tue, Dec 11, 2018 at 01:47:37AM +0000, xuyandong wrote:
+>
+> On Sat, Dec 08, 2018 at 11:58:59AM +0000, xuyandong wrote:
+>
+> > > > > Hi all,
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > In our test, we configured VM with several pci-bridges and a
+>
+> > > > > virtio-net nic been attached with bus 4,
+>
+> > > > >
+>
+> > > > > After VM is startup, We ping this nic from host to judge if it
+>
+> > > > > is working normally. Then, we hot add pci devices to this VM with
+>
+> > > > > bus
+>
+0.
+>
+> > > > >
+>
+> > > > > We  found the virtio-net NIC in bus 4 is not working (can not
+>
+> > > > > connect) occasionally, as it kick virtio backend failure with error
+>
+> > > > > below:
+>
+> > > > >
+>
+> > > > >     Unassigned mem write 00000000fc803004 = 0x1
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > memory-region: pci_bridge_pci
+>
+> > > > >
+>
+> > > > >   0000000000000000-ffffffffffffffff (prio 0, RW):
+>
+> > > > > pci_bridge_pci
+>
+> > > > >
+>
+> > > > >     00000000fc800000-00000000fc803fff (prio 1, RW): virtio-pci
+>
+> > > > >
+>
+> > > > >       00000000fc800000-00000000fc800fff (prio 0, RW):
+>
+> > > > > virtio-pci-common
+>
+> > > > >
+>
+> > > > >       00000000fc801000-00000000fc801fff (prio 0, RW):
+>
+> > > > > virtio-pci-isr
+>
+> > > > >
+>
+> > > > >       00000000fc802000-00000000fc802fff (prio 0, RW):
+>
+> > > > > virtio-pci-device
+>
+> > > > >
+>
+> > > > >       00000000fc803000-00000000fc803fff (prio 0, RW):
+>
+> > > > > virtio-pci-notify  <- io mem unassigned
+>
+> > > > >
+>
+> > > > >       …
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > We caught an exceptional address changing while this problem
+>
+> > > > > happened, show as
+>
+> > > > > follow:
+>
+> > > > >
+>
+> > > > > Before pci_bridge_update_mappings:
+>
+> > > > >
+>
+> > > > >       00000000fc000000-00000000fc1fffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > 00000000fc000000-00000000fc1fffff
+>
+> > > > >
+>
+> > > > >       00000000fc200000-00000000fc3fffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > 00000000fc200000-00000000fc3fffff
+>
+> > > > >
+>
+> > > > >       00000000fc400000-00000000fc5fffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > 00000000fc400000-00000000fc5fffff
+>
+> > > > >
+>
+> > > > >       00000000fc600000-00000000fc7fffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > 00000000fc600000-00000000fc7fffff
+>
+> > > > >
+>
+> > > > >       00000000fc800000-00000000fc9fffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > 00000000fc800000-00000000fc9fffff
+>
+> > > > > <- correct Address Space
+>
+> > > > >
+>
+> > > > >       00000000fca00000-00000000fcbfffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > 00000000fca00000-00000000fcbfffff
+>
+> > > > >
+>
+> > > > >       00000000fcc00000-00000000fcdfffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > 00000000fcc00000-00000000fcdfffff
+>
+> > > > >
+>
+> > > > >       00000000fce00000-00000000fcffffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > 00000000fce00000-00000000fcffffff
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > After pci_bridge_update_mappings:
+>
+> > > > >
+>
+> > > > >       00000000fda00000-00000000fdbfffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > 00000000fda00000-00000000fdbfffff
+>
+> > > > >
+>
+> > > > >       00000000fdc00000-00000000fddfffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > 00000000fdc00000-00000000fddfffff
+>
+> > > > >
+>
+> > > > >       00000000fde00000-00000000fdffffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > 00000000fde00000-00000000fdffffff
+>
+> > > > >
+>
+> > > > >       00000000fe000000-00000000fe1fffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > 00000000fe000000-00000000fe1fffff
+>
+> > > > >
+>
+> > > > >       00000000fe200000-00000000fe3fffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > 00000000fe200000-00000000fe3fffff
+>
+> > > > >
+>
+> > > > >       00000000fe400000-00000000fe5fffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > 00000000fe400000-00000000fe5fffff
+>
+> > > > >
+>
+> > > > >       00000000fe600000-00000000fe7fffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > 00000000fe600000-00000000fe7fffff
+>
+> > > > >
+>
+> > > > >       00000000fe800000-00000000fe9fffff (prio 1, RW): alias
+>
+> > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > 00000000fe800000-00000000fe9fffff
+>
+> > > > >
+>
+> > > > >       fffffffffc800000-fffffffffc800000 (prio 1, RW): alias
+>
+> > pci_bridge_pref_mem
+>
+> > > > > @pci_bridge_pci fffffffffc800000-fffffffffc800000   <- Exceptional
+> > > > > Address Space
+>
+> > > >
+>
+> > > > This one is empty though right?
+>
+> > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > We have figured out why this address becomes this value,
+>
+> > > > > according to pci spec,  pci driver can get BAR address size by
+>
+> > > > > writing 0xffffffff to
+>
+> > > > >
+>
+> > > > > the pci register firstly, and then read back the value from this
+>
+> > > > > register.
+>
+> > > >
+>
+> > > >
+>
+> > > > OK however as you show below the BAR being sized is the BAR if a
+>
+> > > > bridge. Are you then adding a bridge device by hotplug?
+>
+> > >
+>
+> > > No, I just simply hot plugged a VFIO device to Bus 0, another
+>
+> > > interesting phenomenon is If I hot plug the device to other bus,
+>
+> > > this doesn't
+>
+> > happened.
+>
+> > >
+>
+> > > >
+>
+> > > >
+>
+> > > > > We didn't handle this value  specially while process pci write
+>
+> > > > > in qemu, the function call stack is:
+>
+> > > > >
+>
+> > > > > Pci_bridge_dev_write_config
+>
+> > > > >
+>
+> > > > > -> pci_bridge_write_config
+>
+> > > > >
+>
+> > > > > -> pci_default_write_config (we update the config[address]
+>
+> > > > > -> value here to
+>
+> > > > > fffffffffc800000, which should be 0xfc800000 )
+>
+> > > > >
+>
+> > > > > -> pci_bridge_update_mappings
+>
+> > > > >
+>
+> > > > >                 ->pci_bridge_region_del(br, br->windows);
+>
+> > > > >
+>
+> > > > > -> pci_bridge_region_init
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > -> pci_bridge_init_alias (here pci_bridge_get_base, we use the
+>
+> > > > > wrong value
+>
+> > > > > fffffffffc800000)
+>
+> > > > >
+>
+> > > > >                                                 ->
+>
+> > > > > memory_region_transaction_commit
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > So, as we can see, we use the wrong base address in qemu to
+>
+> > > > > update the memory regions, though, we update the base address
+>
+> > > > > to
+>
+> > > > >
+>
+> > > > > The correct value after pci driver in VM write the original
+>
+> > > > > value back, the virtio NIC in bus 4 may still sends net
+>
+> > > > > packets concurrently with
+>
+> > > > >
+>
+> > > > > The wrong memory region address.
+>
+> > > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > We have tried to skip the memory region update action in qemu
+>
+> > > > > while detect pci write with 0xffffffff value, and it does
+>
+> > > > > work, but
+>
+> > > > >
+>
+> > > > > This seems to be not gently.
+>
+> > > >
+>
+> > > > For sure. But I'm still puzzled as to why does Linux try to size
+>
+> > > > the BAR of the bridge while a device behind it is used.
+>
+> > > >
+>
+> > > > Can you pls post your QEMU command line?
+>
+> > >
+>
+> > > My QEMU command line:
+>
+> > > /root/xyd/qemu-system-x86_64 -name guest=Linux,debug-threads=on -S
+>
+> > > -object
+>
+> > > secret,id=masterKey0,format=raw,file=/var/run/libvirt/qemu/domain-
+>
+> > > 194-
+>
+> > > Linux/master-key.aes -machine
+>
+> > > pc-i440fx-2.8,accel=kvm,usb=off,dump-guest-core=off -cpu
+>
+> > > host,+kvm_pv_eoi -bios /usr/share/OVMF/OVMF.fd -m
+>
+> > > size=4194304k,slots=256,maxmem=33554432k -realtime mlock=off -smp
+>
+> > > 20,sockets=20,cores=1,threads=1 -numa
+>
+> > > node,nodeid=0,cpus=0-4,mem=1024 -numa
+>
+> > > node,nodeid=1,cpus=5-9,mem=1024 -numa
+>
+> > > node,nodeid=2,cpus=10-14,mem=1024 -numa
+>
+> > > node,nodeid=3,cpus=15-19,mem=1024 -uuid
+>
+> > > 34a588c7-b0f2-4952-b39c-47fae3411439 -no-user-config -nodefaults
+>
+> > > -chardev
+>
+> > > socket,id=charmonitor,path=/var/run/libvirt/qemu/domain-194-Linux/
+>
+> > > moni
+>
+> > > tor.sock,server,nowait -mon
+>
+> > > chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-hpet
+>
+> > > -global kvm-pit.lost_tick_policy=delay -no-shutdown -boot
+>
+> > > strict=on -device
+>
+> > > pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x8 -device
+>
+> > > pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x9 -device
+>
+> > > pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0xa -device
+>
+> > > pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0xb -device
+>
+> > > pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0xc -device
+>
+> > > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
+>
+> > > usb-ehci,id=usb1,bus=pci.0,addr=0x10 -device
+>
+> > > nec-usb-xhci,id=usb2,bus=pci.0,addr=0x11 -device
+>
+> > > virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device
+>
+> > > virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x4 -device
+>
+> > > virtio-scsi-pci,id=scsi2,bus=pci.0,addr=0x5 -device
+>
+> > > virtio-scsi-pci,id=scsi3,bus=pci.0,addr=0x6 -device
+>
+> > > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive
+>
+> > > file=/mnt/sdb/xml/centos_74_x64_uefi.raw,format=raw,if=none,id=dri
+>
+> > > ve-v
+>
+> > > irtio-disk0,cache=none -device
+>
+> > > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x2,drive=drive-virtio-disk
+>
+> > > 0,id
+>
+> > > =virtio-disk0,bootindex=1 -drive
+>
+> > > if=none,id=drive-ide0-1-1,readonly=on,cache=none -device
+>
+> > > ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev
+>
+> > > tap,fd=35,id=hostnet0 -device
+>
+> > > virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:89:5d:8b,bus=p
+>
+> > > ci.4
+>
+> > > ,addr=0x1 -chardev pty,id=charserial0 -device
+>
+> > > isa-serial,chardev=charserial0,id=serial0 -device
+>
+> > > usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0 -device
+>
+> > > cirrus-vga,id=video0,vgamem_mb=8,bus=pci.0,addr=0x12 -device
+>
+> > > virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xd -msg
+>
+> > > timestamp=on
+>
+> > >
+>
+> > > I am also very curious about this issue, in the linux kernel code,
+>
+> > > maybe double
+>
+> > check in function pci_bridge_check_ranges triggered this problem.
+>
+> >
+>
+> > If you can get the stacktrace in Linux when it tries to write this
+>
+> > fffff value, that would be quite helpful.
+>
+> >
+>
+>
+>
+> After I add mdelay(100) in function pci_bridge_check_ranges, this
+>
+> phenomenon is easier to reproduce, below is my modification in the kernel:
+>
+> diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c index
+>
+> cb389277..86e232d 100644
+>
+> --- a/drivers/pci/setup-bus.c
+>
+> +++ b/drivers/pci/setup-bus.c
+>
+> @@ -27,7 +27,7 @@
+>
+>  #include <linux/slab.h>
+>
+>  #include <linux/acpi.h>
+>
+>  #include "pci.h"
+>
+> -
+>
+> +#include <linux/delay.h>
+>
+>  unsigned int pci_flags;
+>
+>
+>
+>  struct pci_dev_resource {
+>
+> @@ -787,6 +787,9 @@ static void pci_bridge_check_ranges(struct pci_bus
+>
+*bus)
+>
+>                 pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+>
+>                                                0xffffffff);
+>
+>                 pci_read_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+>
+> &tmp);
+>
+> +               mdelay(100);
+>
+> +               printk(KERN_ERR "sleep\n");
+>
+> +                dump_stack();
+>
+>                 if (!tmp)
+>
+>                         b_res[2].flags &= ~IORESOURCE_MEM_64;
+>
+>                 pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+>
+>
+>
+>
+> OK!
+> I just sent a Linux patch that should help.
+> I would appreciate it if you could give it a try and if that helps reply
+> to it with a Tested-by: tag.
+
+I tested this patch and it works fine on my machine.
+
+But I have another question: if we only fix this problem in the kernel,
+already-released Linux versions will still not work well on the
+virtualization platform. Is there a way to fix this problem in the backend?
+
+>
+--
+>
+MST
+
+On Tue, Dec 11, 2018 at 02:55:43AM +0000, xuyandong wrote:
+>
+On Tue, Dec 11, 2018 at 01:47:37AM +0000, xuyandong wrote:
+>
+> > On Sat, Dec 08, 2018 at 11:58:59AM +0000, xuyandong wrote:
+>
+> > > > > > Hi all,
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > > In our test, we configured VM with several pci-bridges and a
+>
+> > > > > > virtio-net nic been attached with bus 4,
+>
+> > > > > >
+>
+> > > > > > After VM is startup, We ping this nic from host to judge if it
+>
+> > > > > > is working normally. Then, we hot add pci devices to this VM with
+>
+> > > > > > bus
+>
+> 0.
+>
+> > > > > >
+>
+> > > > > > We  found the virtio-net NIC in bus 4 is not working (can not
+>
+> > > > > > connect) occasionally, as it kick virtio backend failure with
+>
+> > > > > > error below:
+>
+> > > > > >
+>
+> > > > > >     Unassigned mem write 00000000fc803004 = 0x1
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > > memory-region: pci_bridge_pci
+>
+> > > > > >
+>
+> > > > > >   0000000000000000-ffffffffffffffff (prio 0, RW):
+>
+> > > > > > pci_bridge_pci
+>
+> > > > > >
+>
+> > > > > >     00000000fc800000-00000000fc803fff (prio 1, RW): virtio-pci
+>
+> > > > > >
+>
+> > > > > >       00000000fc800000-00000000fc800fff (prio 0, RW):
+>
+> > > > > > virtio-pci-common
+>
+> > > > > >
+>
+> > > > > >       00000000fc801000-00000000fc801fff (prio 0, RW):
+>
+> > > > > > virtio-pci-isr
+>
+> > > > > >
+>
+> > > > > >       00000000fc802000-00000000fc802fff (prio 0, RW):
+>
+> > > > > > virtio-pci-device
+>
+> > > > > >
+>
+> > > > > >       00000000fc803000-00000000fc803fff (prio 0, RW):
+>
+> > > > > > virtio-pci-notify  <- io mem unassigned
+>
+> > > > > >
+>
+> > > > > >       …
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > > We caught an exceptional address changing while this problem
+>
+> > > > > > happened, show as
+>
+> > > > > > follow:
+>
+> > > > > >
+>
+> > > > > > Before pci_bridge_update_mappings:
+>
+> > > > > >
+>
+> > > > > >       00000000fc000000-00000000fc1fffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > 00000000fc000000-00000000fc1fffff
+>
+> > > > > >
+>
+> > > > > >       00000000fc200000-00000000fc3fffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > 00000000fc200000-00000000fc3fffff
+>
+> > > > > >
+>
+> > > > > >       00000000fc400000-00000000fc5fffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > 00000000fc400000-00000000fc5fffff
+>
+> > > > > >
+>
+> > > > > >       00000000fc600000-00000000fc7fffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > 00000000fc600000-00000000fc7fffff
+>
+> > > > > >
+>
+> > > > > >       00000000fc800000-00000000fc9fffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > 00000000fc800000-00000000fc9fffff
+>
+> > > > > > <- correct Address Space
+>
+> > > > > >
+>
+> > > > > >       00000000fca00000-00000000fcbfffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > 00000000fca00000-00000000fcbfffff
+>
+> > > > > >
+>
+> > > > > >       00000000fcc00000-00000000fcdfffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > 00000000fcc00000-00000000fcdfffff
+>
+> > > > > >
+>
+> > > > > >       00000000fce00000-00000000fcffffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > 00000000fce00000-00000000fcffffff
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > > After pci_bridge_update_mappings:
+>
+> > > > > >
+>
+> > > > > >       00000000fda00000-00000000fdbfffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > 00000000fda00000-00000000fdbfffff
+>
+> > > > > >
+>
+> > > > > >       00000000fdc00000-00000000fddfffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > 00000000fdc00000-00000000fddfffff
+>
+> > > > > >
+>
+> > > > > >       00000000fde00000-00000000fdffffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > 00000000fde00000-00000000fdffffff
+>
+> > > > > >
+>
+> > > > > >       00000000fe000000-00000000fe1fffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > 00000000fe000000-00000000fe1fffff
+>
+> > > > > >
+>
+> > > > > >       00000000fe200000-00000000fe3fffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > 00000000fe200000-00000000fe3fffff
+>
+> > > > > >
+>
+> > > > > >       00000000fe400000-00000000fe5fffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > 00000000fe400000-00000000fe5fffff
+>
+> > > > > >
+>
+> > > > > >       00000000fe600000-00000000fe7fffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > 00000000fe600000-00000000fe7fffff
+>
+> > > > > >
+>
+> > > > > >       00000000fe800000-00000000fe9fffff (prio 1, RW): alias
+>
+> > > > > > pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > 00000000fe800000-00000000fe9fffff
+>
+> > > > > >
+>
+> > > > > >       fffffffffc800000-fffffffffc800000 (prio 1, RW): alias
+>
+> > > pci_bridge_pref_mem
+>
+> > > > > > @pci_bridge_pci fffffffffc800000-fffffffffc800000   <- Exceptional
+> > > > > > Address Space
+>
+> > > > >
+>
+> > > > > This one is empty though right?
+>
+> > > > >
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > > We have figured out why this address becomes this value,
+>
+> > > > > > according to pci spec,  pci driver can get BAR address size by
+>
+> > > > > > writing 0xffffffff to
+>
+> > > > > >
+>
+> > > > > > the pci register firstly, and then read back the value from this
+>
+> > > > > > register.
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > OK however as you show below the BAR being sized is the BAR if a
+>
+> > > > > bridge. Are you then adding a bridge device by hotplug?
+>
+> > > >
+>
+> > > > No, I just simply hot plugged a VFIO device to Bus 0, another
+>
+> > > > interesting phenomenon is If I hot plug the device to other bus,
+>
+> > > > this doesn't
+>
+> > > happened.
+>
+> > > >
+>
+> > > > >
+>
+> > > > >
+>
+> > > > > > We didn't handle this value  specially while process pci write
+>
+> > > > > > in qemu, the function call stack is:
+>
+> > > > > >
+>
+> > > > > > Pci_bridge_dev_write_config
+>
+> > > > > >
+>
+> > > > > > -> pci_bridge_write_config
+>
+> > > > > >
+>
+> > > > > > -> pci_default_write_config (we update the config[address]
+>
+> > > > > > -> value here to
+>
+> > > > > > fffffffffc800000, which should be 0xfc800000 )
+>
+> > > > > >
+>
+> > > > > > -> pci_bridge_update_mappings
+>
+> > > > > >
+>
+> > > > > >                 ->pci_bridge_region_del(br, br->windows);
+>
+> > > > > >
+>
+> > > > > > -> pci_bridge_region_init
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > > -> pci_bridge_init_alias (here pci_bridge_get_base, we use the
+>
+> > > > > > wrong value
+>
+> > > > > > fffffffffc800000)
+>
+> > > > > >
+>
+> > > > > >                                                 ->
+>
+> > > > > > memory_region_transaction_commit
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > > So, as we can see, we use the wrong base address in qemu to
+>
+> > > > > > update the memory regions, though, we update the base address
+>
+> > > > > > to
+>
+> > > > > >
+>
+> > > > > > The correct value after pci driver in VM write the original
+>
+> > > > > > value back, the virtio NIC in bus 4 may still sends net
+>
+> > > > > > packets concurrently with
+>
+> > > > > >
+>
+> > > > > > The wrong memory region address.
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > > We have tried to skip the memory region update action in qemu
+>
+> > > > > > while detect pci write with 0xffffffff value, and it does
+>
+> > > > > > work, but
+>
+> > > > > >
+>
+> > > > > > This seems to be not gently.
+>
+> > > > >
+>
+> > > > > For sure. But I'm still puzzled as to why does Linux try to size
+>
+> > > > > the BAR of the bridge while a device behind it is used.
+>
+> > > > >
+>
+> > > > > Can you pls post your QEMU command line?
+>
+> > > >
+>
+> > > > My QEMU command line:
+>
+> > > > /root/xyd/qemu-system-x86_64 -name guest=Linux,debug-threads=on -S
+>
+> > > > -object
+>
+> > > > secret,id=masterKey0,format=raw,file=/var/run/libvirt/qemu/domain-
+>
+> > > > 194-
+>
+> > > > Linux/master-key.aes -machine
+>
+> > > > pc-i440fx-2.8,accel=kvm,usb=off,dump-guest-core=off -cpu
+>
+> > > > host,+kvm_pv_eoi -bios /usr/share/OVMF/OVMF.fd -m
+>
+> > > > size=4194304k,slots=256,maxmem=33554432k -realtime mlock=off -smp
+>
+> > > > 20,sockets=20,cores=1,threads=1 -numa
+>
+> > > > node,nodeid=0,cpus=0-4,mem=1024 -numa
+>
+> > > > node,nodeid=1,cpus=5-9,mem=1024 -numa
+>
+> > > > node,nodeid=2,cpus=10-14,mem=1024 -numa
+>
+> > > > node,nodeid=3,cpus=15-19,mem=1024 -uuid
+>
+> > > > 34a588c7-b0f2-4952-b39c-47fae3411439 -no-user-config -nodefaults
+>
+> > > > -chardev
+>
+> > > > socket,id=charmonitor,path=/var/run/libvirt/qemu/domain-194-Linux/
+>
+> > > > moni
+>
+> > > > tor.sock,server,nowait -mon
+>
+> > > > chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-hpet
+>
+> > > > -global kvm-pit.lost_tick_policy=delay -no-shutdown -boot
+>
+> > > > strict=on -device
+>
+> > > > pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x8 -device
+>
+> > > > pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x9 -device
+>
+> > > > pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0xa -device
+>
+> > > > pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0xb -device
+>
+> > > > pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0xc -device
+>
+> > > > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
+>
+> > > > usb-ehci,id=usb1,bus=pci.0,addr=0x10 -device
+>
+> > > > nec-usb-xhci,id=usb2,bus=pci.0,addr=0x11 -device
+>
+> > > > virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device
+>
+> > > > virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x4 -device
+>
+> > > > virtio-scsi-pci,id=scsi2,bus=pci.0,addr=0x5 -device
+>
+> > > > virtio-scsi-pci,id=scsi3,bus=pci.0,addr=0x6 -device
+>
+> > > > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive
+>
+> > > > file=/mnt/sdb/xml/centos_74_x64_uefi.raw,format=raw,if=none,id=dri
+>
+> > > > ve-v
+>
+> > > > irtio-disk0,cache=none -device
+>
+> > > > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x2,drive=drive-virtio-disk
+>
+> > > > 0,id
+>
+> > > > =virtio-disk0,bootindex=1 -drive
+>
+> > > > if=none,id=drive-ide0-1-1,readonly=on,cache=none -device
+>
+> > > > ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev
+>
+> > > > tap,fd=35,id=hostnet0 -device
+>
+> > > > virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:89:5d:8b,bus=p
+>
+> > > > ci.4
+>
+> > > > ,addr=0x1 -chardev pty,id=charserial0 -device
+>
+> > > > isa-serial,chardev=charserial0,id=serial0 -device
+>
+> > > > usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0 -device
+>
+> > > > cirrus-vga,id=video0,vgamem_mb=8,bus=pci.0,addr=0x12 -device
+>
+> > > > virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xd -msg
+>
+> > > > timestamp=on
+>
+> > > >
+>
+> > > > I am also very curious about this issue, in the linux kernel code,
+>
+> > > > maybe double
+>
+> > > check in function pci_bridge_check_ranges triggered this problem.
+>
+> > >
+>
+> > > If you can get the stacktrace in Linux when it tries to write this
+>
+> > > fffff value, that would be quite helpful.
+>
+> > >
+>
+> >
+>
+> > After I add mdelay(100) in function pci_bridge_check_ranges, this
+>
+> > phenomenon is easier to reproduce, below is my modification in the kernel:
+>
+> > diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c index
+>
+> > cb389277..86e232d 100644
+>
+> > --- a/drivers/pci/setup-bus.c
+>
+> > +++ b/drivers/pci/setup-bus.c
+>
+> > @@ -27,7 +27,7 @@
+>
+> >  #include <linux/slab.h>
+>
+> >  #include <linux/acpi.h>
+>
+> >  #include "pci.h"
+>
+> > -
+>
+> > +#include <linux/delay.h>
+>
+> >  unsigned int pci_flags;
+>
+> >
+>
+> >  struct pci_dev_resource {
+>
+> > @@ -787,6 +787,9 @@ static void pci_bridge_check_ranges(struct pci_bus
+>
+> *bus)
+>
+> >                 pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+>
+> >                                                0xffffffff);
+>
+> >                 pci_read_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+>
+> > &tmp);
+>
+> > +               mdelay(100);
+>
+> > +               printk(KERN_ERR "sleep\n");
+>
+> > +                dump_stack();
+>
+> >                 if (!tmp)
+>
+> >                         b_res[2].flags &= ~IORESOURCE_MEM_64;
+>
+> >                 pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+>
+> >
+>
+>
+>
+> OK!
+>
+> I just sent a Linux patch that should help.
+>
+> I would appreciate it if you will give it a try and if that helps reply to
+>
+> it with a
+>
+> Tested-by: tag.
+>
+>
+>
+>
+I tested this patch and it works fine on my machine.
+>
+>
+But I have another question, if we only fix this problem in the kernel, the
+>
+Linux
+>
+version that has been released does not work well on the virtualization
+>
+platform.
+>
+Is there a way to fix this problem in the backend?
+There could be a way to work around this.
+Does the patch below help?
+
+diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
+index 236a20eaa8..7834cac4b0 100644
+--- a/hw/i386/acpi-build.c
++++ b/hw/i386/acpi-build.c
+@@ -551,7 +551,7 @@ static void build_append_pci_bus_devices(Aml *parent_scope, 
+PCIBus *bus,
+ 
+         aml_append(method, aml_store(aml_int(bsel_val), aml_name("BNUM")));
+         aml_append(method,
+-            aml_call2("DVNT", aml_name("PCIU"), aml_int(1) /* Device Check */)
++            aml_call2("DVNT", aml_name("PCIU"), aml_int(4) /* Device Check 
+Light */)
+         );
+         aml_append(method,
+             aml_call2("DVNT", aml_name("PCID"), aml_int(3)/* Eject Request */)
+
+>
+On Tue, Dec 11, 2018 at 02:55:43AM +0000, xuyandong wrote:
+>
+> On Tue, Dec 11, 2018 at 01:47:37AM +0000, xuyandong wrote:
+>
+> > > On Sat, Dec 08, 2018 at 11:58:59AM +0000, xuyandong wrote:
+>
+> > > > > > > Hi all,
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > > In our test, we configured VM with several pci-bridges and
+>
+> > > > > > > a virtio-net nic been attached with bus 4,
+>
+> > > > > > >
+>
+> > > > > > > After VM is startup, We ping this nic from host to judge
+>
+> > > > > > > if it is working normally. Then, we hot add pci devices to
+>
+> > > > > > > this VM with bus
+>
+> > 0.
+>
+> > > > > > >
+>
+> > > > > > > We  found the virtio-net NIC in bus 4 is not working (can
+>
+> > > > > > > not
+>
+> > > > > > > connect) occasionally, as it kick virtio backend failure with
+>
+> > > > > > > error
+>
+below:
+>
+> > > > > > >
+>
+> > > > > > >     Unassigned mem write 00000000fc803004 = 0x1
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > > memory-region: pci_bridge_pci
+>
+> > > > > > >
+>
+> > > > > > >   0000000000000000-ffffffffffffffff (prio 0, RW):
+>
+> > > > > > > pci_bridge_pci
+>
+> > > > > > >
+>
+> > > > > > >     00000000fc800000-00000000fc803fff (prio 1, RW):
+>
+> > > > > > > virtio-pci
+>
+> > > > > > >
+>
+> > > > > > >       00000000fc800000-00000000fc800fff (prio 0, RW):
+>
+> > > > > > > virtio-pci-common
+>
+> > > > > > >
+>
+> > > > > > >       00000000fc801000-00000000fc801fff (prio 0, RW):
+>
+> > > > > > > virtio-pci-isr
+>
+> > > > > > >
+>
+> > > > > > >       00000000fc802000-00000000fc802fff (prio 0, RW):
+>
+> > > > > > > virtio-pci-device
+>
+> > > > > > >
+>
+> > > > > > >       00000000fc803000-00000000fc803fff (prio 0, RW):
+>
+> > > > > > > virtio-pci-notify  <- io mem unassigned
+>
+> > > > > > >
+>
+> > > > > > >       …
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > > We caught an exceptional address changing while this
+>
+> > > > > > > problem happened, show as
+>
+> > > > > > > follow:
+>
+> > > > > > >
+>
+> > > > > > > Before pci_bridge_update_mappings:
+>
+> > > > > > >
+>
+> > > > > > >       00000000fc000000-00000000fc1fffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fc000000-00000000fc1fffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fc200000-00000000fc3fffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fc200000-00000000fc3fffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fc400000-00000000fc5fffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fc400000-00000000fc5fffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fc600000-00000000fc7fffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fc600000-00000000fc7fffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fc800000-00000000fc9fffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fc800000-00000000fc9fffff
+>
+> > > > > > > <- correct Address Space
+>
+> > > > > > >
+>
+> > > > > > >       00000000fca00000-00000000fcbfffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fca00000-00000000fcbfffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fcc00000-00000000fcdfffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fcc00000-00000000fcdfffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fce00000-00000000fcffffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fce00000-00000000fcffffff
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > > After pci_bridge_update_mappings:
+>
+> > > > > > >
+>
+> > > > > > >       00000000fda00000-00000000fdbfffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fda00000-00000000fdbfffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fdc00000-00000000fddfffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fdc00000-00000000fddfffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fde00000-00000000fdffffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fde00000-00000000fdffffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fe000000-00000000fe1fffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fe000000-00000000fe1fffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fe200000-00000000fe3fffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fe200000-00000000fe3fffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fe400000-00000000fe5fffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fe400000-00000000fe5fffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fe600000-00000000fe7fffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fe600000-00000000fe7fffff
+>
+> > > > > > >
+>
+> > > > > > >       00000000fe800000-00000000fe9fffff (prio 1, RW):
+>
+> > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > 00000000fe800000-00000000fe9fffff
+>
+> > > > > > >
+>
+> > > > > > >       fffffffffc800000-fffffffffc800000 (prio 1, RW):
+>
+> > > > > > > alias
+>
+> > > > pci_bridge_pref_mem
+>
+> > > > > > > @pci_bridge_pci fffffffffc800000-fffffffffc800000   <-
+> > > > > > > Exceptional Address Space
+>
+> > > > > >
+>
+> > > > > > This one is empty though right?
+>
+> > > > > >
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > > We have figured out why this address becomes this value,
+>
+> > > > > > > according to pci spec,  pci driver can get BAR address
+>
+> > > > > > > size by writing 0xffffffff to
+>
+> > > > > > >
+>
+> > > > > > > the pci register firstly, and then read back the value from this
+>
+register.
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > > OK however as you show below the BAR being sized is the BAR
+>
+> > > > > > if a bridge. Are you then adding a bridge device by hotplug?
+>
+> > > > >
+>
+> > > > > No, I just simply hot plugged a VFIO device to Bus 0, another
+>
+> > > > > interesting phenomenon is If I hot plug the device to other
+>
+> > > > > bus, this doesn't
+>
+> > > > happened.
+>
+> > > > >
+>
+> > > > > >
+>
+> > > > > >
+>
+> > > > > > > We didn't handle this value  specially while process pci
+>
+> > > > > > > write in qemu, the function call stack is:
+>
+> > > > > > >
+>
+> > > > > > > Pci_bridge_dev_write_config
+>
+> > > > > > >
+>
+> > > > > > > -> pci_bridge_write_config
+>
+> > > > > > >
+>
+> > > > > > > -> pci_default_write_config (we update the config[address]
+>
+> > > > > > > -> value here to
+>
+> > > > > > > fffffffffc800000, which should be 0xfc800000 )
+>
+> > > > > > >
+>
+> > > > > > > -> pci_bridge_update_mappings
+>
+> > > > > > >
+>
+> > > > > > >                 ->pci_bridge_region_del(br, br->windows);
+>
+> > > > > > >
+>
+> > > > > > > -> pci_bridge_region_init
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > > -> pci_bridge_init_alias (here pci_bridge_get_base, we use
+>
+> > > > > > > -> the
+>
+> > > > > > > wrong value
+>
+> > > > > > > fffffffffc800000)
+>
+> > > > > > >
+>
+> > > > > > >                                                 ->
+>
+> > > > > > > memory_region_transaction_commit
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > > So, as we can see, we use the wrong base address in qemu
+>
+> > > > > > > to update the memory regions, though, we update the base
+>
+> > > > > > > address to
+>
+> > > > > > >
+>
+> > > > > > > The correct value after pci driver in VM write the
+>
+> > > > > > > original value back, the virtio NIC in bus 4 may still
+>
+> > > > > > > sends net packets concurrently with
+>
+> > > > > > >
+>
+> > > > > > > The wrong memory region address.
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > > We have tried to skip the memory region update action in
+>
+> > > > > > > qemu while detect pci write with 0xffffffff value, and it
+>
+> > > > > > > does work, but
+>
+> > > > > > >
+>
+> > > > > > > This seems to be not gently.
+>
+> > > > > >
+>
+> > > > > > For sure. But I'm still puzzled as to why does Linux try to
+>
+> > > > > > size the BAR of the bridge while a device behind it is used.
+>
+> > > > > >
+>
+> > > > > > Can you pls post your QEMU command line?
+>
+> > > > >
+>
+> > > > > My QEMU command line:
+>
+> > > > > /root/xyd/qemu-system-x86_64 -name
+>
+> > > > > guest=Linux,debug-threads=on -S -object
+>
+> > > > > secret,id=masterKey0,format=raw,file=/var/run/libvirt/qemu/dom
+>
+> > > > > ain-
+>
+> > > > > 194-
+>
+> > > > > Linux/master-key.aes -machine
+>
+> > > > > pc-i440fx-2.8,accel=kvm,usb=off,dump-guest-core=off -cpu
+>
+> > > > > host,+kvm_pv_eoi -bios /usr/share/OVMF/OVMF.fd -m
+>
+> > > > > size=4194304k,slots=256,maxmem=33554432k -realtime mlock=off
+>
+> > > > > -smp
+>
+> > > > > 20,sockets=20,cores=1,threads=1 -numa
+>
+> > > > > node,nodeid=0,cpus=0-4,mem=1024 -numa
+>
+> > > > > node,nodeid=1,cpus=5-9,mem=1024 -numa
+>
+> > > > > node,nodeid=2,cpus=10-14,mem=1024 -numa
+>
+> > > > > node,nodeid=3,cpus=15-19,mem=1024 -uuid
+>
+> > > > > 34a588c7-b0f2-4952-b39c-47fae3411439 -no-user-config
+>
+> > > > > -nodefaults -chardev
+>
+> > > > > socket,id=charmonitor,path=/var/run/libvirt/qemu/domain-194-Li
+>
+> > > > > nux/
+>
+> > > > > moni
+>
+> > > > > tor.sock,server,nowait -mon
+>
+> > > > > chardev=charmonitor,id=monitor,mode=control -rtc base=utc
+>
+> > > > > -no-hpet -global kvm-pit.lost_tick_policy=delay -no-shutdown
+>
+> > > > > -boot strict=on -device
+>
+> > > > > pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x8 -device
+>
+> > > > > pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x9 -device
+>
+> > > > > pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0xa -device
+>
+> > > > > pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0xb -device
+>
+> > > > > pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0xc -device
+>
+> > > > > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
+>
+> > > > > usb-ehci,id=usb1,bus=pci.0,addr=0x10 -device
+>
+> > > > > nec-usb-xhci,id=usb2,bus=pci.0,addr=0x11 -device
+>
+> > > > > virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device
+>
+> > > > > virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x4 -device
+>
+> > > > > virtio-scsi-pci,id=scsi2,bus=pci.0,addr=0x5 -device
+>
+> > > > > virtio-scsi-pci,id=scsi3,bus=pci.0,addr=0x6 -device
+>
+> > > > > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive
+>
+> > > > > file=/mnt/sdb/xml/centos_74_x64_uefi.raw,format=raw,if=none,id
+>
+> > > > > =dri
+>
+> > > > > ve-v
+>
+> > > > > irtio-disk0,cache=none -device
+>
+> > > > > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x2,drive=drive-virtio-
+>
+> > > > > disk
+>
+> > > > > 0,id
+>
+> > > > > =virtio-disk0,bootindex=1 -drive
+>
+> > > > > if=none,id=drive-ide0-1-1,readonly=on,cache=none -device
+>
+> > > > > ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1
+>
+> > > > > -netdev
+>
+> > > > > tap,fd=35,id=hostnet0 -device
+>
+> > > > > virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:89:5d:8b,b
+>
+> > > > > us=p
+>
+> > > > > ci.4
+>
+> > > > > ,addr=0x1 -chardev pty,id=charserial0 -device
+>
+> > > > > isa-serial,chardev=charserial0,id=serial0 -device
+>
+> > > > > usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0 -device
+>
+> > > > > cirrus-vga,id=video0,vgamem_mb=8,bus=pci.0,addr=0x12 -device
+>
+> > > > > virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xd -msg
+>
+> > > > > timestamp=on
+>
+> > > > >
+>
+> > > > > I am also very curious about this issue, in the linux kernel
+>
+> > > > > code, maybe double
+>
+> > > > check in function pci_bridge_check_ranges triggered this problem.
+>
+> > > >
+>
+> > > > If you can get the stacktrace in Linux when it tries to write
+>
+> > > > this fffff value, that would be quite helpful.
+>
+> > > >
+>
+> > >
+>
+> > > After I add mdelay(100) in function pci_bridge_check_ranges, this
+>
+> > > phenomenon is easier to reproduce, below is my modification in the kernel:
+>
+> > > diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
+>
+> > > index cb389277..86e232d 100644
+>
+> > > --- a/drivers/pci/setup-bus.c
+>
+> > > +++ b/drivers/pci/setup-bus.c
+>
+> > > @@ -27,7 +27,7 @@
+>
+> > >  #include <linux/slab.h>
+>
+> > >  #include <linux/acpi.h>
+>
+> > >  #include "pci.h"
+>
+> > > -
+>
+> > > +#include <linux/delay.h>
+>
+> > >  unsigned int pci_flags;
+>
+> > >
+>
+> > >  struct pci_dev_resource {
+>
+> > > @@ -787,6 +787,9 @@ static void pci_bridge_check_ranges(struct
+>
+> > > pci_bus
+>
+> > *bus)
+>
+> > >                 pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+>
+> > >                                                0xffffffff);
+>
+> > >                 pci_read_config_dword(bridge,
+>
+> > > PCI_PREF_BASE_UPPER32, &tmp);
+>
+> > > +               mdelay(100);
+>
+> > > +               printk(KERN_ERR "sleep\n");
+>
+> > > +                dump_stack();
+>
+> > >                 if (!tmp)
+>
+> > >                         b_res[2].flags &= ~IORESOURCE_MEM_64;
+>
+> > >                 pci_write_config_dword(bridge,
+>
+> > > PCI_PREF_BASE_UPPER32,
+>
+> > >
+>
+> >
+>
+> > OK!
+>
+> > I just sent a Linux patch that should help.
+>
+> > I would appreciate it if you will give it a try and if that helps
+>
+> > reply to it with a
+>
+> > Tested-by: tag.
+>
+> >
+>
+>
+>
+> I tested this patch and it works fine on my machine.
+>
+>
+>
+> But I have another question, if we only fix this problem in the
+>
+> kernel, the Linux version that has been released does not work well on the
+>
+virtualization platform.
+>
+> Is there a way to fix this problem in the backend?
+>
+> There could be a way to work around this.
+> Does the patch below help?
+
+I am sorry to tell you: I tested this patch and it does not solve the problem.
+
+> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
+> index 236a20eaa8..7834cac4b0 100644
+> --- a/hw/i386/acpi-build.c
+> +++ b/hw/i386/acpi-build.c
+> @@ -551,7 +551,7 @@ static void build_append_pci_bus_devices(Aml *parent_scope, PCIBus *bus,
+>
+>          aml_append(method, aml_store(aml_int(bsel_val), aml_name("BNUM")));
+>          aml_append(method,
+> -            aml_call2("DVNT", aml_name("PCIU"), aml_int(1) /* Device Check */)
+> +            aml_call2("DVNT", aml_name("PCIU"), aml_int(4) /* Device Check Light */)
+>          );
+>          aml_append(method,
+>              aml_call2("DVNT", aml_name("PCID"), aml_int(3)/* Eject Request */)
+
+On Tue, Dec 11, 2018 at 03:51:09AM +0000, xuyandong wrote:
+>
+> > There could be a way to work around this.
+> > Does the patch below help?
+>
+> I am sorry to tell you: I tested this patch and it does not solve the problem.
+
+What happens?
+
+>
+>
+>
+> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c index
+>
+> 236a20eaa8..7834cac4b0 100644
+>
+> --- a/hw/i386/acpi-build.c
+>
+> +++ b/hw/i386/acpi-build.c
+>
+> @@ -551,7 +551,7 @@ static void build_append_pci_bus_devices(Aml
+>
+> *parent_scope, PCIBus *bus,
+>
+>
+>
+>          aml_append(method, aml_store(aml_int(bsel_val), aml_name("BNUM")));
+>
+>          aml_append(method,
+>
+> -            aml_call2("DVNT", aml_name("PCIU"), aml_int(1) /* Device Check
+>
+> */)
+>
+> +            aml_call2("DVNT", aml_name("PCIU"), aml_int(4) /* Device
+>
+> + Check Light */)
+>
+>          );
+>
+>          aml_append(method,
+>
+>              aml_call2("DVNT", aml_name("PCID"), aml_int(3)/* Eject Request
+>
+> */)
+
+On Tue, Dec 11, 2018 at 03:51:09AM +0000, xuyandong wrote:
+>
+> On Tue, Dec 11, 2018 at 02:55:43AM +0000, xuyandong wrote:
+>
+> > On Tue, Dec 11, 2018 at 01:47:37AM +0000, xuyandong wrote:
+>
+> > > > On Sat, Dec 08, 2018 at 11:58:59AM +0000, xuyandong wrote:
+>
+> > > > > > > > Hi all,
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > > In our test, we configured VM with several pci-bridges and
+>
+> > > > > > > > a virtio-net nic been attached with bus 4,
+>
+> > > > > > > >
+>
+> > > > > > > > After VM is startup, We ping this nic from host to judge
+>
+> > > > > > > > if it is working normally. Then, we hot add pci devices to
+>
+> > > > > > > > this VM with bus
+>
+> > > 0.
+>
+> > > > > > > >
+>
+> > > > > > > > We  found the virtio-net NIC in bus 4 is not working (can
+>
+> > > > > > > > not
+>
+> > > > > > > > connect) occasionally, as it kick virtio backend failure with
+>
+> > > > > > > > error
+>
+> below:
+>
+> > > > > > > >
+>
+> > > > > > > >     Unassigned mem write 00000000fc803004 = 0x1
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > > memory-region: pci_bridge_pci
+>
+> > > > > > > >
+>
+> > > > > > > >   0000000000000000-ffffffffffffffff (prio 0, RW):
+>
+> > > > > > > > pci_bridge_pci
+>
+> > > > > > > >
+>
+> > > > > > > >     00000000fc800000-00000000fc803fff (prio 1, RW):
+>
+> > > > > > > > virtio-pci
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fc800000-00000000fc800fff (prio 0, RW):
+>
+> > > > > > > > virtio-pci-common
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fc801000-00000000fc801fff (prio 0, RW):
+>
+> > > > > > > > virtio-pci-isr
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fc802000-00000000fc802fff (prio 0, RW):
+>
+> > > > > > > > virtio-pci-device
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fc803000-00000000fc803fff (prio 0, RW):
+>
+> > > > > > > > virtio-pci-notify  <- io mem unassigned
+>
+> > > > > > > >
+>
+> > > > > > > >       …
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > > We caught an exceptional address changing while this
+>
+> > > > > > > > problem happened, show as
+>
+> > > > > > > > follow:
+>
+> > > > > > > >
+>
+> > > > > > > > Before pci_bridge_update_mappings:
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fc000000-00000000fc1fffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fc000000-00000000fc1fffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fc200000-00000000fc3fffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fc200000-00000000fc3fffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fc400000-00000000fc5fffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fc400000-00000000fc5fffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fc600000-00000000fc7fffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fc600000-00000000fc7fffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fc800000-00000000fc9fffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fc800000-00000000fc9fffff
+>
+> > > > > > > > <- correct Address Space
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fca00000-00000000fcbfffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fca00000-00000000fcbfffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fcc00000-00000000fcdfffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fcc00000-00000000fcdfffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fce00000-00000000fcffffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_pref_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fce00000-00000000fcffffff
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > > After pci_bridge_update_mappings:
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fda00000-00000000fdbfffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fda00000-00000000fdbfffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fdc00000-00000000fddfffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fdc00000-00000000fddfffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fde00000-00000000fdffffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fde00000-00000000fdffffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fe000000-00000000fe1fffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fe000000-00000000fe1fffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fe200000-00000000fe3fffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fe200000-00000000fe3fffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fe400000-00000000fe5fffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fe400000-00000000fe5fffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fe600000-00000000fe7fffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fe600000-00000000fe7fffff
+>
+> > > > > > > >
+>
+> > > > > > > >       00000000fe800000-00000000fe9fffff (prio 1, RW):
+>
+> > > > > > > > alias pci_bridge_mem @pci_bridge_pci
+>
+> > > > > > > > 00000000fe800000-00000000fe9fffff
+>
+> > > > > > > >
+>
+> > > > > > > >       fffffffffc800000-fffffffffc800000 (prio 1, RW):
+>
+> > > > > > > > alias
+>
+> > > > > pci_bridge_pref_mem
+>
+> > > > > > > > @pci_bridge_pci fffffffffc800000-fffffffffc800000   <-
+> > > > > > > > Exceptional Address Space
+>
+> > > > > > >
+>
+> > > > > > > This one is empty though right?
+>
+> > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > > We have figured out why this address becomes this value,
+>
+> > > > > > > > according to pci spec,  pci driver can get BAR address
+>
+> > > > > > > > size by writing 0xffffffff to
+>
+> > > > > > > >
+>
+> > > > > > > > the pci register firstly, and then read back the value from
+>
+> > > > > > > > this
+>
+> register.
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > > OK however as you show below the BAR being sized is the BAR
+>
+> > > > > > > if a bridge. Are you then adding a bridge device by hotplug?
+>
+> > > > > >
+>
+> > > > > > No, I just simply hot plugged a VFIO device to Bus 0, another
+>
+> > > > > > interesting phenomenon is If I hot plug the device to other
+>
+> > > > > > bus, this doesn't
+>
+> > > > > happened.
+>
+> > > > > >
+>
+> > > > > > >
+>
+> > > > > > >
+>
+> > > > > > > > We didn't handle this value  specially while process pci
+>
+> > > > > > > > write in qemu, the function call stack is:
+>
+> > > > > > > >
+>
+> > > > > > > > Pci_bridge_dev_write_config
+>
+> > > > > > > >
+>
+> > > > > > > > -> pci_bridge_write_config
+>
+> > > > > > > >
+>
+> > > > > > > > -> pci_default_write_config (we update the config[address]
+>
+> > > > > > > > -> value here to
+>
+> > > > > > > > fffffffffc800000, which should be 0xfc800000 )
+>
+> > > > > > > >
+>
+> > > > > > > > -> pci_bridge_update_mappings
+>
+> > > > > > > >
+>
+> > > > > > > >                 ->pci_bridge_region_del(br, br->windows);
+>
+> > > > > > > >
+>
+> > > > > > > > -> pci_bridge_region_init
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > > -> pci_bridge_init_alias (here pci_bridge_get_base, we use
+>
+> > > > > > > > -> the
+>
+> > > > > > > > wrong value
+>
+> > > > > > > > fffffffffc800000)
+>
+> > > > > > > >
+>
+> > > > > > > >                                                 ->
+>
+> > > > > > > > memory_region_transaction_commit
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > > So, as we can see, we use the wrong base address in qemu
+>
+> > > > > > > > to update the memory regions, though, we update the base
+>
+> > > > > > > > address to
+>
+> > > > > > > >
+>
+> > > > > > > > The correct value after pci driver in VM write the
+>
+> > > > > > > > original value back, the virtio NIC in bus 4 may still
+>
+> > > > > > > > sends net packets concurrently with
+>
+> > > > > > > >
+>
+> > > > > > > > The wrong memory region address.
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > >
+>
+> > > > > > > > We have tried to skip the memory region update action in
+>
+> > > > > > > > qemu while detect pci write with 0xffffffff value, and it
+>
+> > > > > > > > does work, but
+>
+> > > > > > > >
+>
+> > > > > > > > This seems to be not gently.
+>
+> > > > > > >
+>
+> > > > > > > For sure. But I'm still puzzled as to why does Linux try to
+>
+> > > > > > > size the BAR of the bridge while a device behind it is used.
+>
+> > > > > > >
+>
+> > > > > > > Can you pls post your QEMU command line?
+>
+> > > > > >
+>
+> > > > > > My QEMU command line:
+>
+> > > > > > /root/xyd/qemu-system-x86_64 -name
+>
+> > > > > > guest=Linux,debug-threads=on -S -object
+>
+> > > > > > secret,id=masterKey0,format=raw,file=/var/run/libvirt/qemu/dom
+>
+> > > > > > ain-
+>
+> > > > > > 194-
+>
+> > > > > > Linux/master-key.aes -machine
+>
+> > > > > > pc-i440fx-2.8,accel=kvm,usb=off,dump-guest-core=off -cpu
+>
+> > > > > > host,+kvm_pv_eoi -bios /usr/share/OVMF/OVMF.fd -m
+>
+> > > > > > size=4194304k,slots=256,maxmem=33554432k -realtime mlock=off
+>
+> > > > > > -smp
+>
+> > > > > > 20,sockets=20,cores=1,threads=1 -numa
+>
+> > > > > > node,nodeid=0,cpus=0-4,mem=1024 -numa
+>
+> > > > > > node,nodeid=1,cpus=5-9,mem=1024 -numa
+>
+> > > > > > node,nodeid=2,cpus=10-14,mem=1024 -numa
+>
+> > > > > > node,nodeid=3,cpus=15-19,mem=1024 -uuid
+>
+> > > > > > 34a588c7-b0f2-4952-b39c-47fae3411439 -no-user-config
+>
+> > > > > > -nodefaults -chardev
+>
+> > > > > > socket,id=charmonitor,path=/var/run/libvirt/qemu/domain-194-Li
+>
+> > > > > > nux/
+>
+> > > > > > moni
+>
+> > > > > > tor.sock,server,nowait -mon
+>
+> > > > > > chardev=charmonitor,id=monitor,mode=control -rtc base=utc
+>
+> > > > > > -no-hpet -global kvm-pit.lost_tick_policy=delay -no-shutdown
+>
+> > > > > > -boot strict=on -device
+>
+> > > > > > pci-bridge,chassis_nr=1,id=pci.1,bus=pci.0,addr=0x8 -device
+>
+> > > > > > pci-bridge,chassis_nr=2,id=pci.2,bus=pci.0,addr=0x9 -device
+>
+> > > > > > pci-bridge,chassis_nr=3,id=pci.3,bus=pci.0,addr=0xa -device
+>
+> > > > > > pci-bridge,chassis_nr=4,id=pci.4,bus=pci.0,addr=0xb -device
+>
+> > > > > > pci-bridge,chassis_nr=5,id=pci.5,bus=pci.0,addr=0xc -device
+>
+> > > > > > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
+>
+> > > > > > usb-ehci,id=usb1,bus=pci.0,addr=0x10 -device
+>
+> > > > > > nec-usb-xhci,id=usb2,bus=pci.0,addr=0x11 -device
+>
+> > > > > > virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device
+>
+> > > > > > virtio-scsi-pci,id=scsi1,bus=pci.0,addr=0x4 -device
+>
+> > > > > > virtio-scsi-pci,id=scsi2,bus=pci.0,addr=0x5 -device
+>
+> > > > > > virtio-scsi-pci,id=scsi3,bus=pci.0,addr=0x6 -device
+>
+> > > > > > virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x7 -drive
+>
+> > > > > > file=/mnt/sdb/xml/centos_74_x64_uefi.raw,format=raw,if=none,id
+>
+> > > > > > =dri
+>
+> > > > > > ve-v
+>
+> > > > > > irtio-disk0,cache=none -device
+>
+> > > > > > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x2,drive=drive-virtio-
+>
+> > > > > > disk
+>
+> > > > > > 0,id
+>
+> > > > > > =virtio-disk0,bootindex=1 -drive
+>
+> > > > > > if=none,id=drive-ide0-1-1,readonly=on,cache=none -device
+>
+> > > > > > ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1
+>
+> > > > > > -netdev
+>
+> > > > > > tap,fd=35,id=hostnet0 -device
+>
+> > > > > > virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:89:5d:8b,b
+>
+> > > > > > us=p
+>
+> > > > > > ci.4
+>
+> > > > > > ,addr=0x1 -chardev pty,id=charserial0 -device
+>
+> > > > > > isa-serial,chardev=charserial0,id=serial0 -device
+>
+> > > > > > usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0 -device
+>
+> > > > > > cirrus-vga,id=video0,vgamem_mb=8,bus=pci.0,addr=0x12 -device
+>
+> > > > > > virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xd -msg
+>
+> > > > > > timestamp=on
+>
+> > > > > >
+>
+> > > > > > I am also very curious about this issue, in the linux kernel
+>
+> > > > > > code, maybe double
+>
+> > > > > check in function pci_bridge_check_ranges triggered this problem.
+>
+> > > > >
+>
+> > > > > If you can get the stacktrace in Linux when it tries to write
+>
+> > > > > this fffff value, that would be quite helpful.
+>
+> > > > >
+>
+> > > >
+>
+> > > > After I add mdelay(100) in function pci_bridge_check_ranges, this
+>
+> > > > phenomenon is easier to reproduce, below is my modification in the kernel:
+>
+> > > > diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
+>
+> > > > index cb389277..86e232d 100644
+>
+> > > > --- a/drivers/pci/setup-bus.c
+>
+> > > > +++ b/drivers/pci/setup-bus.c
+>
+> > > > @@ -27,7 +27,7 @@
+>
+> > > >  #include <linux/slab.h>
+>
+> > > >  #include <linux/acpi.h>
+>
+> > > >  #include "pci.h"
+>
+> > > > -
+>
+> > > > +#include <linux/delay.h>
+>
+> > > >  unsigned int pci_flags;
+>
+> > > >
+>
+> > > >  struct pci_dev_resource {
+>
+> > > > @@ -787,6 +787,9 @@ static void pci_bridge_check_ranges(struct
+>
+> > > > pci_bus
+>
+> > > *bus)
+>
+> > > >                 pci_write_config_dword(bridge, PCI_PREF_BASE_UPPER32,
+>
+> > > >                                                0xffffffff);
+>
+> > > >                 pci_read_config_dword(bridge,
+>
+> > > > PCI_PREF_BASE_UPPER32, &tmp);
+>
+> > > > +               mdelay(100);
+>
+> > > > +               printk(KERN_ERR "sleep\n");
+>
+> > > > +                dump_stack();
+>
+> > > >                 if (!tmp)
+>
+> > > >                         b_res[2].flags &= ~IORESOURCE_MEM_64;
+>
+> > > >                 pci_write_config_dword(bridge,
+>
+> > > > PCI_PREF_BASE_UPPER32,
+>
+> > > >
+>
+> > >
+>
+> > > OK!
+>
+> > > I just sent a Linux patch that should help.
+>
+> > > I would appreciate it if you will give it a try and if that helps
+>
+> > > reply to it with a
+>
+> > > Tested-by: tag.
+>
+> > >
+>
+> >
+>
+> > I tested this patch and it works fine on my machine.
+>
+> >
+>
+> > But I have another question, if we only fix this problem in the
+>
+> > kernel, the Linux version that has been released does not work well on the
+>
+> virtualization platform.
+>
+> > Is there a way to fix this problem in the backend?
+>
+>
+>
+> There could be a way to work around this.
+>
+> Does below help?
+>
+>
+I am sorry to tell you: I tested this patch and it does not solve the problem.
+>
+>
+>
+>
+> diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c index
+>
+> 236a20eaa8..7834cac4b0 100644
+>
+> --- a/hw/i386/acpi-build.c
+>
+> +++ b/hw/i386/acpi-build.c
+>
+> @@ -551,7 +551,7 @@ static void build_append_pci_bus_devices(Aml
+>
+> *parent_scope, PCIBus *bus,
+>
+>
+>
+>          aml_append(method, aml_store(aml_int(bsel_val), aml_name("BNUM")));
+>
+>          aml_append(method,
+>
+> -            aml_call2("DVNT", aml_name("PCIU"), aml_int(1) /* Device Check
+>
+> */)
+>
+> +            aml_call2("DVNT", aml_name("PCIU"), aml_int(4) /* Device
+>
+> + Check Light */)
+>
+>          );
+>
+>          aml_append(method,
+>
+>              aml_call2("DVNT", aml_name("PCID"), aml_int(3)/* Eject Request
+>
+> */)
+Oh I see, another bug:
+
+        case ACPI_NOTIFY_DEVICE_CHECK_LIGHT:
+                acpi_handle_debug(handle, "ACPI_NOTIFY_DEVICE_CHECK_LIGHT event\n");
+                /* TBD: Exactly what does 'light' mean? */
+                break;
+
+And then e.g. acpi_generic_hotplug_event(struct acpi_device *adev, u32 type)
+and friends all just ignore this event type.
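+
+To make the gap concrete: for the workaround above to do anything, the guest
+would have to treat "Device Check Light" like a regular Device Check, both
+where the notify value is first classified and in the hotplug handler itself.
+A rough sketch of the hotplug-handler half only (helper names recalled from
+drivers/acpi/scan.c, other cases elided; this is not a tested or submitted
+patch):
+
+    static int acpi_generic_hotplug_event(struct acpi_device *adev, u32 type)
+    {
+            switch (type) {
+            case ACPI_NOTIFY_BUS_CHECK:
+                    return acpi_scan_bus_check(adev);
+            case ACPI_NOTIFY_DEVICE_CHECK:
+            case ACPI_NOTIFY_DEVICE_CHECK_LIGHT:   /* today: silently ignored */
+                    return acpi_scan_device_check(adev);
+            case ACPI_NOTIFY_EJECT_REQUEST:
+                    return acpi_scan_hot_remove(adev);
+            }
+            return -EINVAL;
+    }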
+
+
+
+-- 
+MST
+
+>
+> > > > > > > > > Hi all,
+>
+> > > > > > > > >
+>
+> > > > > > > > >
+>
+> > > > > > > > >
+>
+> > > > > > > > > In our test, we configured VM with several pci-bridges
+>
+> > > > > > > > > and a virtio-net nic been attached with bus 4,
+>
+> > > > > > > > >
+>
+> > > > > > > > > After VM is startup, We ping this nic from host to
+>
+> > > > > > > > > judge if it is working normally. Then, we hot add pci
+>
+> > > > > > > > > devices to this VM with bus
+>
+> > > > 0.
+>
+> > > > > > > > >
+>
+> > > > > > > > > We  found the virtio-net NIC in bus 4 is not working
+>
+> > > > > > > > > (can not
+>
+> > > > > > > > > connect) occasionally, as it kick virtio backend
+>
+> > > > > > > > > failure with error
+>
+> > > But I have another question, if we only fix this problem in the
+>
+> > > kernel, the Linux version that has been released does not work
+>
+> > > well on the
+>
+> > virtualization platform.
+>
+> > > Is there a way to fix this problem in the backend?
+>
+> >
+>
+> > There could be a way to work around this.
+>
+> > Does below help?
+>
+>
+>
+> I am sorry to tell you, I tested this patch and it doesn't work fine.
+>
+>
+>
+> >
+>
+> > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c index
+>
+> > 236a20eaa8..7834cac4b0 100644
+>
+> > --- a/hw/i386/acpi-build.c
+>
+> > +++ b/hw/i386/acpi-build.c
+>
+> > @@ -551,7 +551,7 @@ static void build_append_pci_bus_devices(Aml
+>
+> > *parent_scope, PCIBus *bus,
+>
+> >
+>
+> >          aml_append(method, aml_store(aml_int(bsel_val),
+>
+aml_name("BNUM")));
+>
+> >          aml_append(method,
+>
+> > -            aml_call2("DVNT", aml_name("PCIU"), aml_int(1) /* Device
+>
+> > Check
+>
+*/)
+>
+> > +            aml_call2("DVNT", aml_name("PCIU"), aml_int(4) /*
+>
+> > + Device Check Light */)
+>
+> >          );
+>
+> >          aml_append(method,
+>
+> >              aml_call2("DVNT", aml_name("PCID"), aml_int(3)/* Eject
+>
+> > Request */)
+>
+>
+>
+Oh I see, another bug:
+>
+>
+case ACPI_NOTIFY_DEVICE_CHECK_LIGHT:
+>
+acpi_handle_debug(handle, "ACPI_NOTIFY_DEVICE_CHECK_LIGHT
+>
+event\n");
+>
+/* TBD: Exactly what does 'light' mean? */
+>
+break;
+>
+>
+And then e.g. acpi_generic_hotplug_event(struct acpi_device *adev, u32 type)
+>
+and friends all just ignore this event type.
+>
+>
+>
+>
+--
+>
+MST
+Hi Michael,
+
+If we want to fix this problem on the backend, it is not enough to consider
+only PCI device hot plugging, because I found that if we use a command like
+"echo 1 > /sys/bus/pci/rescan" in the guest, this problem is very easy to reproduce.
+
+From the perspective of device emulation, when the guest writes 0xffffffff to a
+BAR, the guest just wants to get the size of the region and is not really
+updating the address space.
+So I made the following patch to avoid updating the PCI mappings in that case.
+
+Do you think this makes sense?
+
+[PATCH] pci: avoid updating PCI mappings when 0xFFFFFFFF is written to a BAR
+
+When the guest writes 0xffffffff to a BAR, it just wants to get the size of
+the region and is not really updating the address space.
+So when the guest writes 0xffffffff to a BAR, we need to avoid pci_update_mappings
+or pci_bridge_update_mappings.
+
+Signed-off-by: xuyandong <address@hidden>
+---
+ hw/pci/pci.c        | 6 ++++--
+ hw/pci/pci_bridge.c | 8 +++++---
+ 2 files changed, 9 insertions(+), 5 deletions(-)
+
+diff --git a/hw/pci/pci.c b/hw/pci/pci.c
+index 56b13b3..ef368e1 100644
+--- a/hw/pci/pci.c
++++ b/hw/pci/pci.c
+@@ -1361,6 +1361,7 @@ void pci_default_write_config(PCIDevice *d, uint32_t addr, uint32_t val_in, int
+ {
+     int i, was_irq_disabled = pci_irq_disabled(d);
+     uint32_t val = val_in;
++    uint64_t barmask = (1 << l*8) - 1;
+ 
+     for (i = 0; i < l; val >>= 8, ++i) {
+         uint8_t wmask = d->wmask[addr + i];
+@@ -1369,9 +1370,10 @@ void pci_default_write_config(PCIDevice *d, uint32_t addr, uint32_t val_in, int
+         d->config[addr + i] = (d->config[addr + i] & ~wmask) | (val & wmask);
+         d->config[addr + i] &= ~(val & w1cmask); /* W1C: Write 1 to Clear */
+     }
+-    if (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
++    if ((val_in != barmask &&
++       (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
+         ranges_overlap(addr, l, PCI_ROM_ADDRESS, 4) ||
+-        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4) ||
++        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4))) ||
+         range_covers_byte(addr, l, PCI_COMMAND))
+         pci_update_mappings(d);
+ 
+diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c
+index ee9dff2..f2bad79 100644
+--- a/hw/pci/pci_bridge.c
++++ b/hw/pci/pci_bridge.c
+@@ -253,17 +253,19 @@ void pci_bridge_write_config(PCIDevice *d,
+     PCIBridge *s = PCI_BRIDGE(d);
+     uint16_t oldctl = pci_get_word(d->config + PCI_BRIDGE_CONTROL);
+     uint16_t newctl;
++    uint64_t barmask = (1 << len * 8) - 1;
+ 
+     pci_default_write_config(d, address, val, len);
+ 
+     if (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+ 
+-        /* io base/limit */
+-        ranges_overlap(address, len, PCI_IO_BASE, 2) ||
++        (val != barmask &&
++       /* io base/limit */
++        (ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+ 
+         /* memory base/limit, prefetchable base/limit and
+            io base/limit upper 16 */
+-        ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
++        ranges_overlap(address, len, PCI_MEMORY_BASE, 20))) ||
+ 
+         /* vga enable */
+         ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2)) {
+-- 
+1.8.3.1
+
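+For context, the all-ones writes being discussed come from the standard BAR
+sizing sequence in the guest.  A minimal sketch of that sequence, using the
+same kernel config-space helpers as the debug patch quoted earlier
+(illustration only, not the actual kernel code; error handling and the
+io/64-bit cases are omitted):
+
+static u32 probe_bar_size(struct pci_dev *dev, int bar_off)
+{
+    u32 orig, mask;
+
+    pci_read_config_dword(dev, bar_off, &orig);       /* save current BAR   */
+    pci_write_config_dword(dev, bar_off, 0xffffffff); /* sizing write       */
+    pci_read_config_dword(dev, bar_off, &mask);       /* read back the mask */
+    pci_write_config_dword(dev, bar_off, orig);       /* restore old value  */
+
+    mask &= PCI_BASE_ADDRESS_MEM_MASK;                /* drop the flag bits */
+    return mask ? ~mask + 1 : 0;                      /* size of the region */
+}
+
+It is the intermediate all-ones value from this sequence that QEMU currently
+treats as a real address update, which is what the patch above tries to
+filter out.
+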
+On Mon, Jan 07, 2019 at 02:37:17PM +0000, xuyandong wrote:
+>
+> > > > > > > > > > Hi all,
+>
+> > > > > > > > > >
+>
+> > > > > > > > > >
+>
+> > > > > > > > > >
+>
+> > > > > > > > > > In our test, we configured VM with several pci-bridges
+>
+> > > > > > > > > > and a virtio-net nic been attached with bus 4,
+>
+> > > > > > > > > >
+>
+> > > > > > > > > > After VM is startup, We ping this nic from host to
+>
+> > > > > > > > > > judge if it is working normally. Then, we hot add pci
+>
+> > > > > > > > > > devices to this VM with bus
+>
+> > > > > 0.
+>
+> > > > > > > > > >
+>
+> > > > > > > > > > We  found the virtio-net NIC in bus 4 is not working
+>
+> > > > > > > > > > (can not
+>
+> > > > > > > > > > connect) occasionally, as it kick virtio backend
+>
+> > > > > > > > > > failure with error
+>
+>
+> > > > But I have another question, if we only fix this problem in the
+>
+> > > > kernel, the Linux version that has been released does not work
+>
+> > > > well on the
+>
+> > > virtualization platform.
+>
+> > > > Is there a way to fix this problem in the backend?
+>
+> > >
+>
+> > > There could be a way to work around this.
+>
+> > > Does below help?
+>
+> >
+>
+> > I am sorry to tell you, I tested this patch and it doesn't work fine.
+>
+> >
+>
+> > >
+>
+> > > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c index
+>
+> > > 236a20eaa8..7834cac4b0 100644
+>
+> > > --- a/hw/i386/acpi-build.c
+>
+> > > +++ b/hw/i386/acpi-build.c
+>
+> > > @@ -551,7 +551,7 @@ static void build_append_pci_bus_devices(Aml
+>
+> > > *parent_scope, PCIBus *bus,
+>
+> > >
+>
+> > >          aml_append(method, aml_store(aml_int(bsel_val),
+>
+> aml_name("BNUM")));
+>
+> > >          aml_append(method,
+>
+> > > -            aml_call2("DVNT", aml_name("PCIU"), aml_int(1) /* Device
+>
+> > > Check
+>
+> */)
+>
+> > > +            aml_call2("DVNT", aml_name("PCIU"), aml_int(4) /*
+>
+> > > + Device Check Light */)
+>
+> > >          );
+>
+> > >          aml_append(method,
+>
+> > >              aml_call2("DVNT", aml_name("PCID"), aml_int(3)/* Eject
+>
+> > > Request */)
+>
+>
+>
+>
+>
+> Oh I see, another bug:
+>
+>
+>
+>         case ACPI_NOTIFY_DEVICE_CHECK_LIGHT:
+>
+>                 acpi_handle_debug(handle, "ACPI_NOTIFY_DEVICE_CHECK_LIGHT
+>
+> event\n");
+>
+>                 /* TBD: Exactly what does 'light' mean? */
+>
+>                 break;
+>
+>
+>
+> And then e.g. acpi_generic_hotplug_event(struct acpi_device *adev, u32 type)
+>
+> and friends all just ignore this event type.
+>
+>
+>
+>
+>
+>
+>
+> --
+>
+> MST
+>
+>
+Hi Michael,
+>
+>
+If we want to fix this problem on the backend, it is not enough to consider
+>
+only PCI
+>
+device hot plugging, because I found that if we use a command like
+>
+"echo 1 > /sys/bus/pci/rescan" in guest, this problem is very easy to
+>
+reproduce.
+>
+>
+From the perspective of device emulation, when guest writes 0xffffffff to the
+>
+BAR,
+>
+guest just want to get the size of the region but not really updating the
+>
+address space.
+>
+So I made the following patch to avoid  update pci mapping.
+>
+>
+Do you think this make sense?
+>
+>
+[PATCH] pci: avoid update pci mapping when writing 0xFFFF FFFF to BAR
+>
+>
+When guest writes 0xffffffff to the BAR, guest just want to get the size of
+>
+the region
+>
+but not really updating the address space.
+>
+So when guest writes 0xffffffff to BAR, we need avoid pci_update_mappings
+>
+or pci_bridge_update_mappings.
+>
+>
+Signed-off-by: xuyandong <address@hidden>
+I see how that will address the common case; however, there are a bunch of
+issues here.  First of all, it is easy to trigger the update by some other
+action like VM migration.  More importantly, it is entirely possible that the
+guest actually does want to set the low 32 bits of the address to all
+ones.  For example, that is clearly listed as a way to disable all
+devices behind the bridge in the PCI-to-PCI bridge spec.
+
+Given that upstream is dragging its feet, I'm open to adding a flag
+that will help keep guests going as a temporary measure.
+We will need to think about ways to restrict this as much as
+we can.
+
+
+>
+---
+>
+hw/pci/pci.c        | 6 ++++--
+>
+hw/pci/pci_bridge.c | 8 +++++---
+>
+2 files changed, 9 insertions(+), 5 deletions(-)
+>
+>
+diff --git a/hw/pci/pci.c b/hw/pci/pci.c
+>
+index 56b13b3..ef368e1 100644
+>
+--- a/hw/pci/pci.c
+>
++++ b/hw/pci/pci.c
+>
+@@ -1361,6 +1361,7 @@ void pci_default_write_config(PCIDevice *d, uint32_t
+>
+addr, uint32_t val_in, int
+>
+{
+>
+int i, was_irq_disabled = pci_irq_disabled(d);
+>
+uint32_t val = val_in;
+>
++    uint64_t barmask = (1 << l*8) - 1;
+>
+>
+for (i = 0; i < l; val >>= 8, ++i) {
+>
+uint8_t wmask = d->wmask[addr + i];
+>
+@@ -1369,9 +1370,10 @@ void pci_default_write_config(PCIDevice *d, uint32_t
+>
+addr, uint32_t val_in, int
+>
+d->config[addr + i] = (d->config[addr + i] & ~wmask) | (val & wmask);
+>
+d->config[addr + i] &= ~(val & w1cmask); /* W1C: Write 1 to Clear */
+>
+}
+>
+-    if (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
+>
++    if ((val_in != barmask &&
+>
++     (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
+>
+ranges_overlap(addr, l, PCI_ROM_ADDRESS, 4) ||
+>
+-        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4) ||
+>
++        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4))) ||
+>
+range_covers_byte(addr, l, PCI_COMMAND))
+>
+pci_update_mappings(d);
+>
+>
+diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c
+>
+index ee9dff2..f2bad79 100644
+>
+--- a/hw/pci/pci_bridge.c
+>
++++ b/hw/pci/pci_bridge.c
+>
+@@ -253,17 +253,19 @@ void pci_bridge_write_config(PCIDevice *d,
+>
+PCIBridge *s = PCI_BRIDGE(d);
+>
+uint16_t oldctl = pci_get_word(d->config + PCI_BRIDGE_CONTROL);
+>
+uint16_t newctl;
+>
++    uint64_t barmask = (1 << len * 8) - 1;
+>
+>
+pci_default_write_config(d, address, val, len);
+>
+>
+if (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+>
+>
+-        /* io base/limit */
+>
+-        ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+>
++        (val != barmask &&
+>
++     /* io base/limit */
+>
++        (ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+>
+>
+/* memory base/limit, prefetchable base/limit and
+>
+io base/limit upper 16 */
+>
+-        ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
+>
++        ranges_overlap(address, len, PCI_MEMORY_BASE, 20))) ||
+>
+>
+/* vga enable */
+>
+ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2)) {
+>
+--
+>
+1.8.3.1
+>
+>
+>
+
+>
+-----Original Message-----
+>
+From: Michael S. Tsirkin [
+mailto:address@hidden
+>
+Sent: Monday, January 07, 2019 11:06 PM
+>
+To: xuyandong <address@hidden>
+>
+Cc: address@hidden; Paolo Bonzini <address@hidden>; qemu-
+>
+address@hidden; Zhanghailiang <address@hidden>;
+>
+wangxin (U) <address@hidden>; Huangweidong (C)
+>
+<address@hidden>
+>
+Subject: Re: [BUG]Unassigned mem write during pci device hot-plug
+>
+>
+On Mon, Jan 07, 2019 at 02:37:17PM +0000, xuyandong wrote:
+>
+> > > > > > > > > > > Hi all,
+>
+> > > > > > > > > > >
+>
+> > > > > > > > > > >
+>
+> > > > > > > > > > >
+>
+> > > > > > > > > > > In our test, we configured VM with several
+>
+> > > > > > > > > > > pci-bridges and a virtio-net nic been attached
+>
+> > > > > > > > > > > with bus 4,
+>
+> > > > > > > > > > >
+>
+> > > > > > > > > > > After VM is startup, We ping this nic from host to
+>
+> > > > > > > > > > > judge if it is working normally. Then, we hot add
+>
+> > > > > > > > > > > pci devices to this VM with bus
+>
+> > > > > > 0.
+>
+> > > > > > > > > > >
+>
+> > > > > > > > > > > We  found the virtio-net NIC in bus 4 is not
+>
+> > > > > > > > > > > working (can not
+>
+> > > > > > > > > > > connect) occasionally, as it kick virtio backend
+>
+> > > > > > > > > > > failure with error
+>
+>
+>
+> > > > > But I have another question, if we only fix this problem in
+>
+> > > > > the kernel, the Linux version that has been released does not
+>
+> > > > > work well on the
+>
+> > > > virtualization platform.
+>
+> > > > > Is there a way to fix this problem in the backend?
+>
+>
+>
+> Hi Michael,
+>
+>
+>
+> If we want to fix this problem on the backend, it is not enough to
+>
+> consider only PCI device hot plugging, because I found that if we use
+>
+> a command like "echo 1 > /sys/bus/pci/rescan" in guest, this problem is very
+>
+easy to reproduce.
+>
+>
+>
+> From the perspective of device emulation, when guest writes 0xffffffff
+>
+> to the BAR, guest just want to get the size of the region but not really
+>
+updating the address space.
+>
+> So I made the following patch to avoid  update pci mapping.
+>
+>
+>
+> Do you think this make sense?
+>
+>
+>
+> [PATCH] pci: avoid update pci mapping when writing 0xFFFF FFFF to BAR
+>
+>
+>
+> When guest writes 0xffffffff to the BAR, guest just want to get the
+>
+> size of the region but not really updating the address space.
+>
+> So when guest writes 0xffffffff to BAR, we need avoid
+>
+> pci_update_mappings or pci_bridge_update_mappings.
+>
+>
+>
+> Signed-off-by: xuyandong <address@hidden>
+>
+>
+I see how that will address the common case however there are a bunch of
+>
+issues here.  First of all it's easy to trigger the update by some other
+>
+action like
+>
+VM migration.  More importantly it's just possible that guest actually does
+>
+want
+>
+to set the low 32 bit of the address to all ones.  For example, that is
+>
+clearly
+>
+listed as a way to disable all devices behind the bridge in the pci to pci
+>
+bridge
+>
+spec.
+OK, I see. What if I only skip the update when the guest writes 0xFFFFFFFF to
+the Prefetchable Base Upper 32 Bits register, to handle the kernel's
+double-check problem?
+Do you think there is still a risk?
+
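+(For illustration only — a hypothetical sketch of the narrower check described
+above, inside pci_bridge_write_config(); this is not a posted patch.
+PCI_PREF_BASE_UPPER32 is the register the quoted kernel code sizes, and the
+other existing triggers — PCI_COMMAND, io base/limit, vga enable — are assumed
+to stay as they are today:)
+
+    /* Hypothetical: treat an all-ones write to the bridge's Prefetchable
+     * Base Upper 32 Bits register as a sizing probe and skip the remap. */
+    bool sizing_upper32 = (val == 0xffffffff) &&
+        ranges_overlap(address, len, PCI_PREF_BASE_UPPER32, 4);
+
+    if (!sizing_upper32 &&
+        ranges_overlap(address, len, PCI_MEMORY_BASE, 20)) {
+        pci_bridge_update_mappings(s);
+    }
+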
+>
+>
+Given upstream is dragging it's feet I'm open to adding a flag that will help
+>
+keep guests going as a temporary measure.
+>
+We will need to think about ways to restrict this as much as we can.
+>
+>
+>
+> ---
+>
+>  hw/pci/pci.c        | 6 ++++--
+>
+>  hw/pci/pci_bridge.c | 8 +++++---
+>
+>  2 files changed, 9 insertions(+), 5 deletions(-)
+>
+>
+>
+> diff --git a/hw/pci/pci.c b/hw/pci/pci.c index 56b13b3..ef368e1 100644
+>
+> --- a/hw/pci/pci.c
+>
+> +++ b/hw/pci/pci.c
+>
+> @@ -1361,6 +1361,7 @@ void pci_default_write_config(PCIDevice *d,
+>
+> uint32_t addr, uint32_t val_in, int  {
+>
+>      int i, was_irq_disabled = pci_irq_disabled(d);
+>
+>      uint32_t val = val_in;
+>
+> +    uint64_t barmask = (1 << l*8) - 1;
+>
+>
+>
+>      for (i = 0; i < l; val >>= 8, ++i) {
+>
+>          uint8_t wmask = d->wmask[addr + i]; @@ -1369,9 +1370,10 @@
+>
+> void pci_default_write_config(PCIDevice *d, uint32_t addr, uint32_t val_in,
+>
+int
+>
+>          d->config[addr + i] = (d->config[addr + i] & ~wmask) | (val &
+>
+> wmask);
+>
+>          d->config[addr + i] &= ~(val & w1cmask); /* W1C: Write 1 to Clear
+>
+> */
+>
+>      }
+>
+> -    if (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
+>
+> +    if ((val_in != barmask &&
+>
+> +   (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
+>
+>          ranges_overlap(addr, l, PCI_ROM_ADDRESS, 4) ||
+>
+> -        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4) ||
+>
+> +        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4))) ||
+>
+>          range_covers_byte(addr, l, PCI_COMMAND))
+>
+>          pci_update_mappings(d);
+>
+>
+>
+> diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c index
+>
+> ee9dff2..f2bad79 100644
+>
+> --- a/hw/pci/pci_bridge.c
+>
+> +++ b/hw/pci/pci_bridge.c
+>
+> @@ -253,17 +253,19 @@ void pci_bridge_write_config(PCIDevice *d,
+>
+>      PCIBridge *s = PCI_BRIDGE(d);
+>
+>      uint16_t oldctl = pci_get_word(d->config + PCI_BRIDGE_CONTROL);
+>
+>      uint16_t newctl;
+>
+> +    uint64_t barmask = (1 << len * 8) - 1;
+>
+>
+>
+>      pci_default_write_config(d, address, val, len);
+>
+>
+>
+>      if (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+>
+>
+>
+> -        /* io base/limit */
+>
+> -        ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+>
+> +        (val != barmask &&
+>
+> +   /* io base/limit */
+>
+> +        (ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+>
+>
+>
+>          /* memory base/limit, prefetchable base/limit and
+>
+>             io base/limit upper 16 */
+>
+> -        ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
+>
+> +        ranges_overlap(address, len, PCI_MEMORY_BASE, 20))) ||
+>
+>
+>
+>          /* vga enable */
+>
+>          ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2)) {
+>
+> --
+>
+> 1.8.3.1
+>
+>
+>
+>
+>
+>
+
+On Mon, Jan 07, 2019 at 03:28:36PM +0000, xuyandong wrote:
+>
+>
+>
+> -----Original Message-----
+>
+> From: Michael S. Tsirkin [
+mailto:address@hidden
+>
+> Sent: Monday, January 07, 2019 11:06 PM
+>
+> To: xuyandong <address@hidden>
+>
+> Cc: address@hidden; Paolo Bonzini <address@hidden>; qemu-
+>
+> address@hidden; Zhanghailiang <address@hidden>;
+>
+> wangxin (U) <address@hidden>; Huangweidong (C)
+>
+> <address@hidden>
+>
+> Subject: Re: [BUG]Unassigned mem write during pci device hot-plug
+>
+>
+>
+> On Mon, Jan 07, 2019 at 02:37:17PM +0000, xuyandong wrote:
+>
+> > > > > > > > > > > > Hi all,
+>
+> > > > > > > > > > > >
+>
+> > > > > > > > > > > >
+>
+> > > > > > > > > > > >
+>
+> > > > > > > > > > > > In our test, we configured VM with several
+>
+> > > > > > > > > > > > pci-bridges and a virtio-net nic been attached
+>
+> > > > > > > > > > > > with bus 4,
+>
+> > > > > > > > > > > >
+>
+> > > > > > > > > > > > After VM is startup, We ping this nic from host to
+>
+> > > > > > > > > > > > judge if it is working normally. Then, we hot add
+>
+> > > > > > > > > > > > pci devices to this VM with bus
+>
+> > > > > > > 0.
+>
+> > > > > > > > > > > >
+>
+> > > > > > > > > > > > We  found the virtio-net NIC in bus 4 is not
+>
+> > > > > > > > > > > > working (can not
+>
+> > > > > > > > > > > > connect) occasionally, as it kick virtio backend
+>
+> > > > > > > > > > > > failure with error
+>
+> >
+>
+> > > > > > But I have another question, if we only fix this problem in
+>
+> > > > > > the kernel, the Linux version that has been released does not
+>
+> > > > > > work well on the
+>
+> > > > > virtualization platform.
+>
+> > > > > > Is there a way to fix this problem in the backend?
+>
+> >
+>
+> > Hi Michael,
+>
+> >
+>
+> > If we want to fix this problem on the backend, it is not enough to
+>
+> > consider only PCI device hot plugging, because I found that if we use
+>
+> > a command like "echo 1 > /sys/bus/pci/rescan" in guest, this problem is
+>
+> > very
+>
+> easy to reproduce.
+>
+> >
+>
+> > From the perspective of device emulation, when guest writes 0xffffffff
+>
+> > to the BAR, guest just want to get the size of the region but not really
+>
+> updating the address space.
+>
+> > So I made the following patch to avoid  update pci mapping.
+>
+> >
+>
+> > Do you think this make sense?
+>
+> >
+>
+> > [PATCH] pci: avoid update pci mapping when writing 0xFFFF FFFF to BAR
+>
+> >
+>
+> > When guest writes 0xffffffff to the BAR, guest just want to get the
+>
+> > size of the region but not really updating the address space.
+>
+> > So when guest writes 0xffffffff to BAR, we need avoid
+>
+> > pci_update_mappings or pci_bridge_update_mappings.
+>
+> >
+>
+> > Signed-off-by: xuyandong <address@hidden>
+>
+>
+>
+> I see how that will address the common case however there are a bunch of
+>
+> issues here.  First of all it's easy to trigger the update by some other
+>
+> action like
+>
+> VM migration.  More importantly it's just possible that guest actually does
+>
+> want
+>
+> to set the low 32 bit of the address to all ones.  For example, that is
+>
+> clearly
+>
+> listed as a way to disable all devices behind the bridge in the pci to pci
+>
+> bridge
+>
+> spec.
+>
+>
+Ok, I see. If I only skip upate when guest writing 0xFFFFFFFF to Prefetcable
+>
+Base Upper 32 Bits
+>
+to meet the kernel double check problem.
+>
+Do you think there is still risk?
+Well, it's non-zero, since the spec says such a write should disable all
+accesses. Just an idea: why not add an option to disable the upper 32 bits?
+That is ugly and limits the address space, but it is spec compliant.
+
+>
+>
+>
+> Given upstream is dragging it's feet I'm open to adding a flag that will
+>
+> help
+>
+> keep guests going as a temporary measure.
+>
+> We will need to think about ways to restrict this as much as we can.
+>
+>
+>
+>
+>
+> > ---
+>
+> >  hw/pci/pci.c        | 6 ++++--
+>
+> >  hw/pci/pci_bridge.c | 8 +++++---
+>
+> >  2 files changed, 9 insertions(+), 5 deletions(-)
+>
+> >
+>
+> > diff --git a/hw/pci/pci.c b/hw/pci/pci.c index 56b13b3..ef368e1 100644
+>
+> > --- a/hw/pci/pci.c
+>
+> > +++ b/hw/pci/pci.c
+>
+> > @@ -1361,6 +1361,7 @@ void pci_default_write_config(PCIDevice *d,
+>
+> > uint32_t addr, uint32_t val_in, int  {
+>
+> >      int i, was_irq_disabled = pci_irq_disabled(d);
+>
+> >      uint32_t val = val_in;
+>
+> > +    uint64_t barmask = (1 << l*8) - 1;
+>
+> >
+>
+> >      for (i = 0; i < l; val >>= 8, ++i) {
+>
+> >          uint8_t wmask = d->wmask[addr + i]; @@ -1369,9 +1370,10 @@
+>
+> > void pci_default_write_config(PCIDevice *d, uint32_t addr, uint32_t
+>
+> > val_in,
+>
+> int
+>
+> >          d->config[addr + i] = (d->config[addr + i] & ~wmask) | (val &
+>
+> > wmask);
+>
+> >          d->config[addr + i] &= ~(val & w1cmask); /* W1C: Write 1 to
+>
+> > Clear */
+>
+> >      }
+>
+> > -    if (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
+>
+> > +    if ((val_in != barmask &&
+>
+> > + (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
+>
+> >          ranges_overlap(addr, l, PCI_ROM_ADDRESS, 4) ||
+>
+> > -        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4) ||
+>
+> > +        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4))) ||
+>
+> >          range_covers_byte(addr, l, PCI_COMMAND))
+>
+> >          pci_update_mappings(d);
+>
+> >
+>
+> > diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c index
+>
+> > ee9dff2..f2bad79 100644
+>
+> > --- a/hw/pci/pci_bridge.c
+>
+> > +++ b/hw/pci/pci_bridge.c
+>
+> > @@ -253,17 +253,19 @@ void pci_bridge_write_config(PCIDevice *d,
+>
+> >      PCIBridge *s = PCI_BRIDGE(d);
+>
+> >      uint16_t oldctl = pci_get_word(d->config + PCI_BRIDGE_CONTROL);
+>
+> >      uint16_t newctl;
+>
+> > +    uint64_t barmask = (1 << len * 8) - 1;
+>
+> >
+>
+> >      pci_default_write_config(d, address, val, len);
+>
+> >
+>
+> >      if (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+>
+> >
+>
+> > -        /* io base/limit */
+>
+> > -        ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+>
+> > +        (val != barmask &&
+>
+> > + /* io base/limit */
+>
+> > +        (ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+>
+> >
+>
+> >          /* memory base/limit, prefetchable base/limit and
+>
+> >             io base/limit upper 16 */
+>
+> > -        ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
+>
+> > +        ranges_overlap(address, len, PCI_MEMORY_BASE, 20))) ||
+>
+> >
+>
+> >          /* vga enable */
+>
+> >          ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2)) {
+>
+> > --
+>
+> > 1.8.3.1
+>
+> >
+>
+> >
+>
+> >
+
+>
+-----Original Message-----
+>
+From: xuyandong
+>
+Sent: Monday, January 07, 2019 10:37 PM
+>
+To: 'Michael S. Tsirkin' <address@hidden>
+>
+Cc: address@hidden; Paolo Bonzini <address@hidden>; qemu-
+>
+address@hidden; Zhanghailiang <address@hidden>;
+>
+wangxin (U) <address@hidden>; Huangweidong (C)
+>
+<address@hidden>
+>
+Subject: RE: [BUG]Unassigned mem write during pci device hot-plug
+>
+>
+> > > > > > > > > > Hi all,
+>
+> > > > > > > > > >
+>
+> > > > > > > > > >
+>
+> > > > > > > > > >
+>
+> > > > > > > > > > In our test, we configured VM with several
+>
+> > > > > > > > > > pci-bridges and a virtio-net nic been attached with
+>
+> > > > > > > > > > bus 4,
+>
+> > > > > > > > > >
+>
+> > > > > > > > > > After VM is startup, We ping this nic from host to
+>
+> > > > > > > > > > judge if it is working normally. Then, we hot add
+>
+> > > > > > > > > > pci devices to this VM with bus
+>
+> > > > > 0.
+>
+> > > > > > > > > >
+>
+> > > > > > > > > > We  found the virtio-net NIC in bus 4 is not working
+>
+> > > > > > > > > > (can not
+>
+> > > > > > > > > > connect) occasionally, as it kick virtio backend
+>
+> > > > > > > > > > failure with error
+>
+>
+> > > > But I have another question, if we only fix this problem in the
+>
+> > > > kernel, the Linux version that has been released does not work
+>
+> > > > well on the
+>
+> > > virtualization platform.
+>
+> > > > Is there a way to fix this problem in the backend?
+>
+> > >
+>
+> > > There could be a way to work around this.
+>
+> > > Does below help?
+>
+> >
+>
+> > I am sorry to tell you, I tested this patch and it doesn't work fine.
+>
+> >
+>
+> > >
+>
+> > > diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c index
+>
+> > > 236a20eaa8..7834cac4b0 100644
+>
+> > > --- a/hw/i386/acpi-build.c
+>
+> > > +++ b/hw/i386/acpi-build.c
+>
+> > > @@ -551,7 +551,7 @@ static void build_append_pci_bus_devices(Aml
+>
+> > > *parent_scope, PCIBus *bus,
+>
+> > >
+>
+> > >          aml_append(method, aml_store(aml_int(bsel_val),
+>
+> aml_name("BNUM")));
+>
+> > >          aml_append(method,
+>
+> > > -            aml_call2("DVNT", aml_name("PCIU"), aml_int(1) /* Device
+>
+Check
+>
+> */)
+>
+> > > +            aml_call2("DVNT", aml_name("PCIU"), aml_int(4) /*
+>
+> > > + Device Check Light */)
+>
+> > >          );
+>
+> > >          aml_append(method,
+>
+> > >              aml_call2("DVNT", aml_name("PCID"), aml_int(3)/*
+>
+> > > Eject Request */)
+>
+>
+>
+>
+>
+> Oh I see, another bug:
+>
+>
+>
+>         case ACPI_NOTIFY_DEVICE_CHECK_LIGHT:
+>
+>                 acpi_handle_debug(handle,
+>
+> "ACPI_NOTIFY_DEVICE_CHECK_LIGHT event\n");
+>
+>                 /* TBD: Exactly what does 'light' mean? */
+>
+>                 break;
+>
+>
+>
+> And then e.g. acpi_generic_hotplug_event(struct acpi_device *adev, u32
+>
+> type) and friends all just ignore this event type.
+>
+>
+>
+>
+>
+>
+>
+> --
+>
+> MST
+>
+>
+Hi Michael,
+>
+>
+If we want to fix this problem on the backend, it is not enough to consider
+>
+only
+>
+PCI device hot plugging, because I found that if we use a command like "echo
+>
+1 >
+>
+/sys/bus/pci/rescan" in guest, this problem is very easy to reproduce.
+>
+>
+From the perspective of device emulation, when guest writes 0xffffffff to the
+>
+BAR, guest just want to get the size of the region but not really updating the
+>
+address space.
+>
+So I made the following patch to avoid  update pci mapping.
+>
+>
+Do you think this make sense?
+>
+>
+[PATCH] pci: avoid update pci mapping when writing 0xFFFF FFFF to BAR
+>
+>
+When guest writes 0xffffffff to the BAR, guest just want to get the size of
+>
+the
+>
+region but not really updating the address space.
+>
+So when guest writes 0xffffffff to BAR, we need avoid pci_update_mappings or
+>
+pci_bridge_update_mappings.
+>
+>
+Signed-off-by: xuyandong <address@hidden>
+>
+---
+>
+hw/pci/pci.c        | 6 ++++--
+>
+hw/pci/pci_bridge.c | 8 +++++---
+>
+2 files changed, 9 insertions(+), 5 deletions(-)
+>
+>
+diff --git a/hw/pci/pci.c b/hw/pci/pci.c index 56b13b3..ef368e1 100644
+>
+--- a/hw/pci/pci.c
+>
++++ b/hw/pci/pci.c
+>
+@@ -1361,6 +1361,7 @@ void pci_default_write_config(PCIDevice *d, uint32_t
+>
+addr, uint32_t val_in, int  {
+>
+int i, was_irq_disabled = pci_irq_disabled(d);
+>
+uint32_t val = val_in;
+>
++    uint64_t barmask = (1 << l*8) - 1;
+>
+>
+for (i = 0; i < l; val >>= 8, ++i) {
+>
+uint8_t wmask = d->wmask[addr + i]; @@ -1369,9 +1370,10 @@ void
+>
+pci_default_write_config(PCIDevice *d, uint32_t addr, uint32_t val_in, int
+>
+d->config[addr + i] = (d->config[addr + i] & ~wmask) | (val & wmask);
+>
+d->config[addr + i] &= ~(val & w1cmask); /* W1C: Write 1 to Clear */
+>
+}
+>
+-    if (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
+>
++    if ((val_in != barmask &&
+>
++     (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
+>
+ranges_overlap(addr, l, PCI_ROM_ADDRESS, 4) ||
+>
+-        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4) ||
+>
++        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4))) ||
+>
+range_covers_byte(addr, l, PCI_COMMAND))
+>
+pci_update_mappings(d);
+>
+>
+diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c index ee9dff2..f2bad79
+>
+100644
+>
+--- a/hw/pci/pci_bridge.c
+>
++++ b/hw/pci/pci_bridge.c
+>
+@@ -253,17 +253,19 @@ void pci_bridge_write_config(PCIDevice *d,
+>
+PCIBridge *s = PCI_BRIDGE(d);
+>
+uint16_t oldctl = pci_get_word(d->config + PCI_BRIDGE_CONTROL);
+>
+uint16_t newctl;
+>
++    uint64_t barmask = (1 << len * 8) - 1;
+>
+>
+pci_default_write_config(d, address, val, len);
+>
+>
+if (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+>
+>
+-        /* io base/limit */
+>
+-        ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+>
++        (val != barmask &&
+>
++     /* io base/limit */
+>
++        (ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+>
+>
+/* memory base/limit, prefetchable base/limit and
+>
+io base/limit upper 16 */
+>
+-        ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
+>
++        ranges_overlap(address, len, PCI_MEMORY_BASE, 20))) ||
+>
+>
+/* vga enable */
+>
+ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2)) {
+>
+--
+>
+1.8.3.1
+>
+>
+Sorry, please ignore the patch above.
+
+Here is the patch I want to post:
+
+diff --git a/hw/pci/pci.c b/hw/pci/pci.c
+index 56b13b3..38a300f 100644
+--- a/hw/pci/pci.c
++++ b/hw/pci/pci.c
+@@ -1361,6 +1361,7 @@ void pci_default_write_config(PCIDevice *d, uint32_t addr, uint32_t val_in, int
+ {
+     int i, was_irq_disabled = pci_irq_disabled(d);
+     uint32_t val = val_in;
++    uint64_t barmask = ((uint64_t)1 << l*8) - 1;
+ 
+     for (i = 0; i < l; val >>= 8, ++i) {
+         uint8_t wmask = d->wmask[addr + i];
+@@ -1369,9 +1370,10 @@ void pci_default_write_config(PCIDevice *d, uint32_t addr, uint32_t val_in, int
+         d->config[addr + i] = (d->config[addr + i] & ~wmask) | (val & wmask);
+         d->config[addr + i] &= ~(val & w1cmask); /* W1C: Write 1 to Clear */
+     }
+-    if (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
++    if ((val_in != barmask &&
++       (ranges_overlap(addr, l, PCI_BASE_ADDRESS_0, 24) ||
+         ranges_overlap(addr, l, PCI_ROM_ADDRESS, 4) ||
+-        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4) ||
++        ranges_overlap(addr, l, PCI_ROM_ADDRESS1, 4))) ||
+         range_covers_byte(addr, l, PCI_COMMAND))
+         pci_update_mappings(d);
+ 
+diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c
+index ee9dff2..b8f7d48 100644
+--- a/hw/pci/pci_bridge.c
++++ b/hw/pci/pci_bridge.c
+@@ -253,20 +253,22 @@ void pci_bridge_write_config(PCIDevice *d,
+     PCIBridge *s = PCI_BRIDGE(d);
+     uint16_t oldctl = pci_get_word(d->config + PCI_BRIDGE_CONTROL);
+     uint16_t newctl;
++    uint64_t barmask = ((uint64_t)1 << len * 8) - 1;
+ 
+     pci_default_write_config(d, address, val, len);
+ 
+     if (ranges_overlap(address, len, PCI_COMMAND, 2) ||
+ 
+-        /* io base/limit */
+-        ranges_overlap(address, len, PCI_IO_BASE, 2) ||
++        /* vga enable */
++        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2) ||
+ 
+-        /* memory base/limit, prefetchable base/limit and
+-           io base/limit upper 16 */
+-        ranges_overlap(address, len, PCI_MEMORY_BASE, 20) ||
++        (val != barmask &&
++        /* io base/limit */
++         (ranges_overlap(address, len, PCI_IO_BASE, 2) ||
+ 
+-        /* vga enable */
+-        ranges_overlap(address, len, PCI_BRIDGE_CONTROL, 2)) {
++         /* memory base/limit, prefetchable base/limit and
++            io base/limit upper 16 */
++         ranges_overlap(address, len, PCI_MEMORY_BASE, 20)))) {
+         pci_bridge_update_mappings(s);
+     }
+ 
+-- 
+1.8.3.1
+
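+A side note on the difference from the first version of this patch: the mask
+computation changed from "(1 << l*8) - 1" to "((uint64_t)1 << l*8) - 1".
+Presumably this is because, for a 4-byte config write (l == 4), shifting a
+32-bit int by 32 bits is undefined behaviour in C, so the constant has to be
+promoted to 64 bits before the shift:
+
+    uint64_t barmask = ((uint64_t)1 << 4 * 8) - 1;    /* 0xffffffff */
+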
diff --git a/results/classifier/009/other/70416488 b/results/classifier/009/other/70416488
new file mode 100644
index 000000000..5f39bbe71
--- /dev/null
+++ b/results/classifier/009/other/70416488
@@ -0,0 +1,1189 @@
+other: 0.980
+semantic: 0.975
+graphic: 0.972
+debug: 0.967
+device: 0.945
+performance: 0.939
+permissions: 0.922
+PID: 0.916
+vnc: 0.910
+KVM: 0.908
+boot: 0.897
+network: 0.881
+socket: 0.870
+files: 0.858
+
+[Bug Report] smmuv3 event 0x10 report when running virtio-blk-pci
+
+Hi All,
+
+When I tested mainline QEMU (commit 7b87a25f49), it reports SMMUv3 event 0x10
+during kernel boot.
+
+qemu command which I use is as below:
+
+qemu-system-aarch64 -machine virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 \
+-kernel Image -initrd minifs.cpio.gz \
+-enable-kvm -net none -nographic -m 3G -smp 6 -cpu host \
+-append 'rdinit=init console=ttyAMA0 ealycon=pl0ll,0x90000000 maxcpus=3' \
+-device 
+pcie-root-port,port=0x8,chassis=0,id=pci.0,bus=pcie.0,multifunction=on,addr=0x2 
+\
+-device pcie-root-port,port=0x9,chassis=1,id=pci.1,bus=pcie.0,addr=0x2.0x1 \
+-device 
+virtio-blk-pci,drive=drive0,id=virtblk0,num-queues=8,packed=on,bus=pci.1 \
+-drive file=/home/boot.img,if=none,id=drive0,format=raw
+
+smmuv3 event 0x10 log:
+[...]
+[    1.962656] virtio-pci 0000:02:00.0: Adding to iommu group 0
+[    1.963150] virtio-pci 0000:02:00.0: enabling device (0000 -> 0002)
+[    1.964707] virtio_blk virtio0: 6/0/0 default/read/poll queues
+[    1.965759] virtio_blk virtio0: [vda] 2097152 512-byte logical blocks (1.07 
+GB/1.00 GiB)
+[    1.966934] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+[    1.967442] input: gpio-keys as /devices/platform/gpio-keys/input/input0
+[    1.967478] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+[    1.968381] clk: Disabling unused clocks
+[    1.968677] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+[    1.968990] PM: genpd: Disabling unused power domains
+[    1.969424] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+[    1.969814] ALSA device list:
+[    1.970240] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+[    1.970471]   No soundcards found.
+[    1.970902] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+[    1.971600] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+[    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+[    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+[    1.971602] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+[    1.971606] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+[    1.971607] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+[    1.974202] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+[    1.974634] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+[    1.975005] Freeing unused kernel memory: 10112K
+[    1.975062] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+[    1.975442] Run init as init process
+
+Another piece of information: if "maxcpus=3" is removed from the kernel command
+line, the problem does not occur.
+
+I am not sure if there is a bug in the vSMMU emulation. It would be very much
+appreciated if anyone knows about this issue or can take a look at it.
+
+Thanks,
+Zhou
+
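+(For readers decoding the record above: the four 64-bit words are the raw
+SMMUv3 event record.  A rough sketch of extracting the interesting fields,
+assuming the usual event-record layout — event type in bits [7:0] and
+StreamID in bits [63:32] of the first word, input address in the third word —
+and the virt machine's identity RID-to-StreamID mapping; worth double-checking
+against the SMMUv3 spec:)
+
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+    uint64_t evt0 = 0x0000020000000010ULL;  /* first word from the log above */
+    uint64_t evt2 = 0x0000000000000000ULL;  /* third word: the input address */
+
+    unsigned type = (unsigned)(evt0 & 0xff);  /* 0x10 -> F_TRANSLATION       */
+    unsigned sid  = (unsigned)(evt0 >> 32);   /* 0x200 -> PCI RID 02:00.0    */
+
+    printf("event 0x%x, StreamID 0x%x, iova 0x%llx\n",
+           type, sid, (unsigned long long)evt2);
+    return 0;
+}
+
+This matches the device the kernel put into iommu group 0 (0000:02:00.0, the
+virtio-blk-pci) and the iova=0x0 translation failure mentioned later in the
+thread.
+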
+On Mon, 9 Sept 2024 at 15:22, Zhou Wang via <qemu-devel@nongnu.org> wrote:
+>
+>
+Hi All,
+>
+>
+When I tested mainline qemu(commit 7b87a25f49), it reports smmuv3 event 0x10
+>
+during kernel booting up.
+Does it still do this if you either:
+ (1) use the v9.1.0 release (commit fd1952d814da)
+ (2) use "-machine virt-9.1" instead of "-machine virt"
+
+?
+
+My suspicion is that this will have started happening now that
+we expose an SMMU with two-stage translation support to the guest
+in the "virt" machine type (which we do not if you either
+use virt-9.1 or in the v9.1.0 release).
+
+I've cc'd Eric (smmuv3 maintainer) and Mostafa (author of
+the two-stage support).
+
+>
+qemu command which I use is as below:
+>
+>
+qemu-system-aarch64 -machine
+>
+virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 \
+>
+-kernel Image -initrd minifs.cpio.gz \
+>
+-enable-kvm -net none -nographic -m 3G -smp 6 -cpu host \
+>
+-append 'rdinit=init console=ttyAMA0 ealycon=pl0ll,0x90000000 maxcpus=3' \
+>
+-device
+>
+pcie-root-port,port=0x8,chassis=0,id=pci.0,bus=pcie.0,multifunction=on,addr=0x2
+>
+\
+>
+-device pcie-root-port,port=0x9,chassis=1,id=pci.1,bus=pcie.0,addr=0x2.0x1 \
+>
+-device
+>
+virtio-blk-pci,drive=drive0,id=virtblk0,num-queues=8,packed=on,bus=pci.1 \
+>
+-drive file=/home/boot.img,if=none,id=drive0,format=raw
+>
+>
+smmuv3 event 0x10 log:
+>
+[...]
+>
+[    1.962656] virtio-pci 0000:02:00.0: Adding to iommu group 0
+>
+[    1.963150] virtio-pci 0000:02:00.0: enabling device (0000 -> 0002)
+>
+[    1.964707] virtio_blk virtio0: 6/0/0 default/read/poll queues
+>
+[    1.965759] virtio_blk virtio0: [vda] 2097152 512-byte logical blocks
+>
+(1.07 GB/1.00 GiB)
+>
+[    1.966934] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+[    1.967442] input: gpio-keys as /devices/platform/gpio-keys/input/input0
+>
+[    1.967478] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+[    1.968381] clk: Disabling unused clocks
+>
+[    1.968677] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+[    1.968990] PM: genpd: Disabling unused power domains
+>
+[    1.969424] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.969814] ALSA device list:
+>
+[    1.970240] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.970471]   No soundcards found.
+>
+[    1.970902] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+[    1.971600] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+[    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+[    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.971602] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.971606] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+[    1.971607] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+[    1.974202] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+[    1.974634] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.975005] Freeing unused kernel memory: 10112K
+>
+[    1.975062] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.975442] Run init as init process
+>
+>
+Another information is that if "maxcpus=3" is removed from the kernel command
+>
+line,
+>
+it will be OK.
+>
+>
+I am not sure if there is a bug about vsmmu. It will be very appreciated if
+>
+anyone
+>
+know this issue or can take a look at it.
+thanks
+-- PMM
+
+On 2024/9/9 22:31, Peter Maydell wrote:
+>
+On Mon, 9 Sept 2024 at 15:22, Zhou Wang via <qemu-devel@nongnu.org> wrote:
+>
+>
+>
+> Hi All,
+>
+>
+>
+> When I tested mainline qemu(commit 7b87a25f49), it reports smmuv3 event 0x10
+>
+> during kernel booting up.
+>
+>
+Does it still do this if you either:
+>
+(1) use the v9.1.0 release (commit fd1952d814da)
+>
+(2) use "-machine virt-9.1" instead of "-machine virt"
+I tested above two cases, the problem is still there.
+
+>
+>
+?
+>
+>
+My suspicion is that this will have started happening now that
+>
+we expose an SMMU with two-stage translation support to the guest
+>
+in the "virt" machine type (which we do not if you either
+>
+use virt-9.1 or in the v9.1.0 release).
+>
+>
+I've cc'd Eric (smmuv3 maintainer) and Mostafa (author of
+>
+the two-stage support).
+>
+>
+> qemu command which I use is as below:
+>
+>
+>
+> qemu-system-aarch64 -machine
+>
+> virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 \
+>
+> -kernel Image -initrd minifs.cpio.gz \
+>
+> -enable-kvm -net none -nographic -m 3G -smp 6 -cpu host \
+>
+> -append 'rdinit=init console=ttyAMA0 ealycon=pl0ll,0x90000000 maxcpus=3' \
+>
+> -device
+>
+> pcie-root-port,port=0x8,chassis=0,id=pci.0,bus=pcie.0,multifunction=on,addr=0x2
+>
+>  \
+>
+> -device pcie-root-port,port=0x9,chassis=1,id=pci.1,bus=pcie.0,addr=0x2.0x1 \
+>
+> -device
+>
+> virtio-blk-pci,drive=drive0,id=virtblk0,num-queues=8,packed=on,bus=pci.1 \
+>
+> -drive file=/home/boot.img,if=none,id=drive0,format=raw
+>
+>
+>
+> smmuv3 event 0x10 log:
+>
+> [...]
+>
+> [    1.962656] virtio-pci 0000:02:00.0: Adding to iommu group 0
+>
+> [    1.963150] virtio-pci 0000:02:00.0: enabling device (0000 -> 0002)
+>
+> [    1.964707] virtio_blk virtio0: 6/0/0 default/read/poll queues
+>
+> [    1.965759] virtio_blk virtio0: [vda] 2097152 512-byte logical blocks
+>
+> (1.07 GB/1.00 GiB)
+>
+> [    1.966934] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+> [    1.967442] input: gpio-keys as /devices/platform/gpio-keys/input/input0
+>
+> [    1.967478] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+> [    1.968381] clk: Disabling unused clocks
+>
+> [    1.968677] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+> [    1.968990] PM: genpd: Disabling unused power domains
+>
+> [    1.969424] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+> [    1.969814] ALSA device list:
+>
+> [    1.970240] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+> [    1.970471]   No soundcards found.
+>
+> [    1.970902] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+> [    1.971600] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+> [    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+> [    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+> [    1.971602] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+> [    1.971606] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+> [    1.971607] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+> [    1.974202] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+> [    1.974634] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+> [    1.975005] Freeing unused kernel memory: 10112K
+>
+> [    1.975062] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+> [    1.975442] Run init as init process
+>
+>
+>
+> Another information is that if "maxcpus=3" is removed from the kernel
+>
+> command line,
+>
+> it will be OK.
+>
+>
+>
+> I am not sure if there is a bug about vsmmu. It will be very appreciated if
+>
+> anyone
+>
+> know this issue or can take a look at it.
+>
+>
+thanks
+>
+-- PMM
+>
+.
+
+Hi Zhou,
+On 9/10/24 03:24, Zhou Wang via wrote:
+>
+On 2024/9/9 22:31, Peter Maydell wrote:
+>
+> On Mon, 9 Sept 2024 at 15:22, Zhou Wang via <qemu-devel@nongnu.org> wrote:
+>
+>> Hi All,
+>
+>>
+>
+>> When I tested mainline qemu(commit 7b87a25f49), it reports smmuv3 event 0x10
+>
+>> during kernel booting up.
+>
+> Does it still do this if you either:
+>
+>  (1) use the v9.1.0 release (commit fd1952d814da)
+>
+>  (2) use "-machine virt-9.1" instead of "-machine virt"
+>
+I tested above two cases, the problem is still there.
+Thank you for reporting. I am able to reproduce it, and indeed the
+maxcpus kernel option is triggering the issue. It works without it. I will
+come back to you asap.
+
+Eric
+>
+>
+> ?
+>
+>
+>
+> My suspicion is that this will have started happening now that
+>
+> we expose an SMMU with two-stage translation support to the guest
+>
+> in the "virt" machine type (which we do not if you either
+>
+> use virt-9.1 or in the v9.1.0 release).
+>
+>
+>
+> I've cc'd Eric (smmuv3 maintainer) and Mostafa (author of
+>
+> the two-stage support).
+>
+>
+>
+>> qemu command which I use is as below:
+>
+>>
+>
+>> qemu-system-aarch64 -machine
+>
+>> virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 \
+>
+>> -kernel Image -initrd minifs.cpio.gz \
+>
+>> -enable-kvm -net none -nographic -m 3G -smp 6 -cpu host \
+>
+>> -append 'rdinit=init console=ttyAMA0 ealycon=pl0ll,0x90000000 maxcpus=3' \
+>
+>> -device
+>
+>> pcie-root-port,port=0x8,chassis=0,id=pci.0,bus=pcie.0,multifunction=on,addr=0x2
+>
+>>  \
+>
+>> -device pcie-root-port,port=0x9,chassis=1,id=pci.1,bus=pcie.0,addr=0x2.0x1 \
+>
+>> -device
+>
+>> virtio-blk-pci,drive=drive0,id=virtblk0,num-queues=8,packed=on,bus=pci.1 \
+>
+>> -drive file=/home/boot.img,if=none,id=drive0,format=raw
+>
+>>
+>
+>> smmuv3 event 0x10 log:
+>
+>> [...]
+>
+>> [    1.962656] virtio-pci 0000:02:00.0: Adding to iommu group 0
+>
+>> [    1.963150] virtio-pci 0000:02:00.0: enabling device (0000 -> 0002)
+>
+>> [    1.964707] virtio_blk virtio0: 6/0/0 default/read/poll queues
+>
+>> [    1.965759] virtio_blk virtio0: [vda] 2097152 512-byte logical blocks
+>
+>> (1.07 GB/1.00 GiB)
+>
+>> [    1.966934] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+>> [    1.967442] input: gpio-keys as /devices/platform/gpio-keys/input/input0
+>
+>> [    1.967478] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+>> [    1.968381] clk: Disabling unused clocks
+>
+>> [    1.968677] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+>> [    1.968990] PM: genpd: Disabling unused power domains
+>
+>> [    1.969424] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.969814] ALSA device list:
+>
+>> [    1.970240] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.970471]   No soundcards found.
+>
+>> [    1.970902] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+>> [    1.971600] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+>> [    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+>> [    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.971602] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.971606] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+>> [    1.971607] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+>> [    1.974202] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+>> [    1.974634] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.975005] Freeing unused kernel memory: 10112K
+>
+>> [    1.975062] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.975442] Run init as init process
+>
+>>
+>
+>> Another information is that if "maxcpus=3" is removed from the kernel
+>
+>> command line,
+>
+>> it will be OK.
+>
+>>
+>
+>> I am not sure if there is a bug about vsmmu. It will be very appreciated if
+>
+>> anyone
+>
+>> know this issue or can take a look at it.
+>
+> thanks
+>
+> -- PMM
+>
+> .
+
+Hi,
+
+On 9/10/24 03:24, Zhou Wang via wrote:
+>
+On 2024/9/9 22:31, Peter Maydell wrote:
+>
+> On Mon, 9 Sept 2024 at 15:22, Zhou Wang via <qemu-devel@nongnu.org> wrote:
+>
+>> Hi All,
+>
+>>
+>
+>> When I tested mainline qemu(commit 7b87a25f49), it reports smmuv3 event 0x10
+>
+>> during kernel booting up.
+>
+> Does it still do this if you either:
+>
+>  (1) use the v9.1.0 release (commit fd1952d814da)
+>
+>  (2) use "-machine virt-9.1" instead of "-machine virt"
+>
+I tested above two cases, the problem is still there.
+I have not made much progress yet, but I see it comes with
+QEMU traces.
+
+smmuv3-iommu-memory-region-0-0 translation failed for iova=0x0
+(SMMU_EVT_F_TRANSLATION)
+../..
+qemu-system-aarch64: virtio-blk failed to set guest notifier (-22),
+ensure -accel kvm is set.
+qemu-system-aarch64: virtio_bus_start_ioeventfd: failed. Fallback to
+userspace (slower).
+
+the PCIe Host bridge seems to cause that translation failure at iova=0
+
+Also virtio-iommu has the same issue:
+qemu-system-aarch64: virtio_iommu_translate no mapping for 0x0 for sid=1024
+qemu-system-aarch64: virtio-blk failed to set guest notifier (-22),
+ensure -accel kvm is set.
+qemu-system-aarch64: virtio_bus_start_ioeventfd: failed. Fallback to
+userspace (slower).
+
+This only happens with maxcpus=3. Note that the virtio-blk-pci is not protected
+by the vIOMMU in your case.
+
+Thanks
+
+Eric
+
+>
+>
+> ?
+>
+>
+>
+> My suspicion is that this will have started happening now that
+>
+> we expose an SMMU with two-stage translation support to the guest
+>
+> in the "virt" machine type (which we do not if you either
+>
+> use virt-9.1 or in the v9.1.0 release).
+>
+>
+>
+> I've cc'd Eric (smmuv3 maintainer) and Mostafa (author of
+>
+> the two-stage support).
+>
+>
+>
+>> qemu command which I use is as below:
+>
+>>
+>
+>> qemu-system-aarch64 -machine
+>
+>> virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 \
+>
+>> -kernel Image -initrd minifs.cpio.gz \
+>
+>> -enable-kvm -net none -nographic -m 3G -smp 6 -cpu host \
+>
+>> -append 'rdinit=init console=ttyAMA0 ealycon=pl0ll,0x90000000 maxcpus=3' \
+>
+>> -device
+>
+>> pcie-root-port,port=0x8,chassis=0,id=pci.0,bus=pcie.0,multifunction=on,addr=0x2
+>
+>>  \
+>
+>> -device pcie-root-port,port=0x9,chassis=1,id=pci.1,bus=pcie.0,addr=0x2.0x1 \
+>
+>> -device
+>
+>> virtio-blk-pci,drive=drive0,id=virtblk0,num-queues=8,packed=on,bus=pci.1 \
+>
+>> -drive file=/home/boot.img,if=none,id=drive0,format=raw
+>
+>>
+>
+>> smmuv3 event 0x10 log:
+>
+>> [...]
+>
+>> [    1.962656] virtio-pci 0000:02:00.0: Adding to iommu group 0
+>
+>> [    1.963150] virtio-pci 0000:02:00.0: enabling device (0000 -> 0002)
+>
+>> [    1.964707] virtio_blk virtio0: 6/0/0 default/read/poll queues
+>
+>> [    1.965759] virtio_blk virtio0: [vda] 2097152 512-byte logical blocks
+>
+>> (1.07 GB/1.00 GiB)
+>
+>> [    1.966934] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+>> [    1.967442] input: gpio-keys as /devices/platform/gpio-keys/input/input0
+>
+>> [    1.967478] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+>> [    1.968381] clk: Disabling unused clocks
+>
+>> [    1.968677] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+>> [    1.968990] PM: genpd: Disabling unused power domains
+>
+>> [    1.969424] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.969814] ALSA device list:
+>
+>> [    1.970240] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.970471]   No soundcards found.
+>
+>> [    1.970902] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+>> [    1.971600] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+>> [    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+>> [    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.971602] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.971606] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+>> [    1.971607] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+>> [    1.974202] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+>> [    1.974634] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.975005] Freeing unused kernel memory: 10112K
+>
+>> [    1.975062] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+>> [    1.975442] Run init as init process
+>
+>>
+>
+>> Another information is that if "maxcpus=3" is removed from the kernel
+>
+>> command line,
+>
+>> it will be OK.
+>
+>>
+>
+>> I am not sure if there is a bug about vsmmu. It will be very appreciated if
+>
+>> anyone
+>
+>> know this issue or can take a look at it.
+>
+> thanks
+>
+> -- PMM
+>
+> .
+
+Hi Zhou,
+
+On Mon, Sep 9, 2024 at 3:22 PM Zhou Wang via <qemu-devel@nongnu.org> wrote:
+>
+>
+Hi All,
+>
+>
+When I tested mainline qemu(commit 7b87a25f49), it reports smmuv3 event 0x10
+>
+during kernel booting up.
+>
+>
+qemu command which I use is as below:
+>
+>
+qemu-system-aarch64 -machine
+>
+virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 \
+>
+-kernel Image -initrd minifs.cpio.gz \
+>
+-enable-kvm -net none -nographic -m 3G -smp 6 -cpu host \
+>
+-append 'rdinit=init console=ttyAMA0 ealycon=pl0ll,0x90000000 maxcpus=3' \
+>
+-device
+>
+pcie-root-port,port=0x8,chassis=0,id=pci.0,bus=pcie.0,multifunction=on,addr=0x2
+>
+\
+>
+-device pcie-root-port,port=0x9,chassis=1,id=pci.1,bus=pcie.0,addr=0x2.0x1 \
+>
+-device
+>
+virtio-blk-pci,drive=drive0,id=virtblk0,num-queues=8,packed=on,bus=pci.1 \
+>
+-drive file=/home/boot.img,if=none,id=drive0,format=raw
+>
+>
+smmuv3 event 0x10 log:
+>
+[...]
+>
+[    1.962656] virtio-pci 0000:02:00.0: Adding to iommu group 0
+>
+[    1.963150] virtio-pci 0000:02:00.0: enabling device (0000 -> 0002)
+>
+[    1.964707] virtio_blk virtio0: 6/0/0 default/read/poll queues
+>
+[    1.965759] virtio_blk virtio0: [vda] 2097152 512-byte logical blocks
+>
+(1.07 GB/1.00 GiB)
+>
+[    1.966934] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+[    1.967442] input: gpio-keys as /devices/platform/gpio-keys/input/input0
+>
+[    1.967478] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+[    1.968381] clk: Disabling unused clocks
+>
+[    1.968677] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+[    1.968990] PM: genpd: Disabling unused power domains
+>
+[    1.969424] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.969814] ALSA device list:
+>
+[    1.970240] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.970471]   No soundcards found.
+>
+[    1.970902] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+[    1.971600] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+[    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+[    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.971602] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.971606] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>
+[    1.971607] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>
+[    1.974202] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>
+[    1.974634] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.975005] Freeing unused kernel memory: 10112K
+>
+[    1.975062] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>
+[    1.975442] Run init as init process
+>
+>
+Another information is that if "maxcpus=3" is removed from the kernel command
+>
+line,
+>
+it will be OK.
+>
+That's interesting, not sure how that would be related.
+
+>
+I am not sure if there is a bug about vsmmu. It will be very appreciated if
+>
+anyone
+>
+know this issue or can take a look at it.
+>
+Can you please provide logs after adding "-d trace:smmu*" to the qemu invocation?
+
+Also, if possible, can you please tell me which Linux kernel version
+you are using? I will see if I can repro.
+
+Thanks,
+Mostafa
+
+>
+Thanks,
+>
+Zhou
+>
+>
+>
+
+On 2024/9/9 22:47, Mostafa Saleh wrote:
+> Hi Zhou,
+>
+> On Mon, Sep 9, 2024 at 3:22 PM Zhou Wang via <qemu-devel@nongnu.org> wrote:
+>>
+>> Hi All,
+>>
+>> When I tested mainline qemu(commit 7b87a25f49), it reports smmuv3 event 0x10
+>> during kernel booting up.
+>>
+>> qemu command which I use is as below:
+>>
+>> qemu-system-aarch64 -machine
+>> virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 \
+>> -kernel Image -initrd minifs.cpio.gz \
+>> -enable-kvm -net none -nographic -m 3G -smp 6 -cpu host \
+>> -append 'rdinit=init console=ttyAMA0 ealycon=pl0ll,0x90000000 maxcpus=3' \
+>> -device
+>> pcie-root-port,port=0x8,chassis=0,id=pci.0,bus=pcie.0,multifunction=on,addr=0x2 \
+>> -device pcie-root-port,port=0x9,chassis=1,id=pci.1,bus=pcie.0,addr=0x2.0x1 \
+>> -device
+>> virtio-blk-pci,drive=drive0,id=virtblk0,num-queues=8,packed=on,bus=pci.1 \
+>> -drive file=/home/boot.img,if=none,id=drive0,format=raw
+>>
+>> smmuv3 event 0x10 log:
+>> [...]
+>> [    1.962656] virtio-pci 0000:02:00.0: Adding to iommu group 0
+>> [    1.963150] virtio-pci 0000:02:00.0: enabling device (0000 -> 0002)
+>> [    1.964707] virtio_blk virtio0: 6/0/0 default/read/poll queues
+>> [    1.965759] virtio_blk virtio0: [vda] 2097152 512-byte logical blocks
+>> (1.07 GB/1.00 GiB)
+>> [    1.966934] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>> [    1.967442] input: gpio-keys as /devices/platform/gpio-keys/input/input0
+>> [    1.967478] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>> [    1.968381] clk: Disabling unused clocks
+>> [    1.968677] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>> [    1.968990] PM: genpd: Disabling unused power domains
+>> [    1.969424] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>> [    1.969814] ALSA device list:
+>> [    1.970240] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>> [    1.970471]   No soundcards found.
+>> [    1.970902] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>> [    1.971600] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>> [    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>> [    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>> [    1.971602] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>> [    1.971606] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>> [    1.971607] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>> [    1.974202] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>> [    1.974634] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>> [    1.975005] Freeing unused kernel memory: 10112K
+>> [    1.975062] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>> [    1.975442] Run init as init process
+>>
+>> Another information is that if "maxcpus=3" is removed from the kernel
+>> command line, it will be OK.
+>
+> That's interesting, not sure how that would be related.
+>
+>> I am not sure if there is a bug about vsmmu. It will be very appreciated if
+>> anyone know this issue or can take a look at it.
+>
+> Can you please provide logs with adding "-d trace:smmu*" to qemu invocation.
+
+Sure. Please see the attached log(using above qemu commit and command).
+
+> Also if possible, can you please provide which Linux kernel version
+> you are using, I will see if I can repro.
+
+I just use the latest mainline kernel(commit b831f83e40a2) with defconfig.
+
+Thanks,
+Zhou
+
+> Thanks,
+> Mostafa
+>
+>> Thanks,
+>> Zhou
+
+.
+qemu_boot_log.txt
+Description:
+Text document
+
+On Tue, Sep 10, 2024 at 2:51 AM Zhou Wang <wangzhou1@hisilicon.com> wrote:
+>
+> On 2024/9/9 22:47, Mostafa Saleh wrote:
+>> Hi Zhou,
+>>
+>> On Mon, Sep 9, 2024 at 3:22 PM Zhou Wang via <qemu-devel@nongnu.org> wrote:
+>>>
+>>> Hi All,
+>>>
+>>> When I tested mainline qemu(commit 7b87a25f49), it reports smmuv3 event 0x10
+>>> during kernel booting up.
+>>>
+>>> qemu command which I use is as below:
+>>>
+>>> qemu-system-aarch64 -machine
+>>> virt,kernel_irqchip=on,gic-version=3,iommu=smmuv3 \
+>>> -kernel Image -initrd minifs.cpio.gz \
+>>> -enable-kvm -net none -nographic -m 3G -smp 6 -cpu host \
+>>> -append 'rdinit=init console=ttyAMA0 ealycon=pl0ll,0x90000000 maxcpus=3' \
+>>> -device
+>>> pcie-root-port,port=0x8,chassis=0,id=pci.0,bus=pcie.0,multifunction=on,addr=0x2 \
+>>> -device pcie-root-port,port=0x9,chassis=1,id=pci.1,bus=pcie.0,addr=0x2.0x1 \
+>>> -device
+>>> virtio-blk-pci,drive=drive0,id=virtblk0,num-queues=8,packed=on,bus=pci.1 \
+>>> -drive file=/home/boot.img,if=none,id=drive0,format=raw
+>>>
+>>> smmuv3 event 0x10 log:
+>>> [...]
+>>> [    1.962656] virtio-pci 0000:02:00.0: Adding to iommu group 0
+>>> [    1.963150] virtio-pci 0000:02:00.0: enabling device (0000 -> 0002)
+>>> [    1.964707] virtio_blk virtio0: 6/0/0 default/read/poll queues
+>>> [    1.965759] virtio_blk virtio0: [vda] 2097152 512-byte logical blocks
+>>> (1.07 GB/1.00 GiB)
+>>> [    1.966934] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>>> [    1.967442] input: gpio-keys as /devices/platform/gpio-keys/input/input0
+>>> [    1.967478] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>>> [    1.968381] clk: Disabling unused clocks
+>>> [    1.968677] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>>> [    1.968990] PM: genpd: Disabling unused power domains
+>>> [    1.969424] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>>> [    1.969814] ALSA device list:
+>>> [    1.970240] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>>> [    1.970471]   No soundcards found.
+>>> [    1.970902] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>>> [    1.971600] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>>> [    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>>> [    1.971601] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>>> [    1.971602] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>>> [    1.971606] arm-smmu-v3 9050000.smmuv3: event 0x10 received:
+>>> [    1.971607] arm-smmu-v3 9050000.smmuv3:      0x0000020000000010
+>>> [    1.974202] arm-smmu-v3 9050000.smmuv3:      0x0000020000000000
+>>> [    1.974634] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>>> [    1.975005] Freeing unused kernel memory: 10112K
+>>> [    1.975062] arm-smmu-v3 9050000.smmuv3:      0x0000000000000000
+>>> [    1.975442] Run init as init process
+>>>
+>>> Another information is that if "maxcpus=3" is removed from the kernel
+>>> command line, it will be OK.
+>>
+>> That's interesting, not sure how that would be related.
+>>
+>>> I am not sure if there is a bug about vsmmu. It will be very appreciated
+>>> if anyone know this issue or can take a look at it.
+>>
+>> Can you please provide logs with adding "-d trace:smmu*" to qemu invocation.
+>
+> Sure. Please see the attached log(using above qemu commit and command).
+
+Thanks a lot, it seems the SMMUv3 indeed receives a translation
+request with addr 0x0 which causes this event.
+I don't see any kind of modification (alignment) of the address in this path.
+So my hunch it's not related to the SMMUv3 and the initiator is
+issuing bogus addresses.
+
+>> Also if possible, can you please provide which Linux kernel version
+>> you are using, I will see if I can repro.
+>
+> I just use the latest mainline kernel(commit b831f83e40a2) with defconfig.
+
+I see, I can't repro in my setup which has no "--enable-kvm" and with
+"-cpu max" instead of host.
+I will try other options and see if I can repro.
+
+Thanks,
+Mostafa
+
+> Thanks,
+> Zhou
+>
+>> Thanks,
+>> Mostafa
+>>
+>>> Thanks,
+>>> Zhou
+>>
+>> .
+
diff --git a/results/classifier/009/other/81775929 b/results/classifier/009/other/81775929
new file mode 100644
index 000000000..0c1809c02
--- /dev/null
+++ b/results/classifier/009/other/81775929
@@ -0,0 +1,245 @@
+other: 0.877
+permissions: 0.849
+PID: 0.847
+performance: 0.831
+vnc: 0.825
+semantic: 0.825
+graphic: 0.818
+KVM: 0.815
+socket: 0.810
+debug: 0.799
+files: 0.788
+device: 0.777
+network: 0.759
+boot: 0.742
+
+[Qemu-devel] [BUG] Monitor QMP is broken ?
+
+Hello!
+
+ I have updated my qemu to the recent version and it seems to have lost 
+compatibility with
+libvirt. The error message is:
+--- cut ---
+internal error: unable to execute QEMU command 'qmp_capabilities': QMP input 
+object member
+'id' is unexpected
+--- cut ---
+ What does it mean? Is it intentional or not?
+
+Kind regards,
+Pavel Fedin
+Expert Engineer
+Samsung Electronics Research center Russia
+
+Hello!
+
+> I have updated my qemu to the recent version and it seems to have lost
+> compatibility with
+> libvirt. The error message is:
+> --- cut ---
+> internal error: unable to execute QEMU command 'qmp_capabilities': QMP input
+> object member
+> 'id' is unexpected
+> --- cut ---
+> What does it mean? Is it intentional or not?
+
+I have found the problem. It is caused by commit
+65207c59d99f2260c5f1d3b9c491146616a522aa. libvirt does not seem to use the
+removed asynchronous interface, but it still feeds in JSONs with the 'id' field
+set to something. So I think the related fragment in the qmp_check_input_obj()
+function should be brought back.
+
+Kind regards,
+Pavel Fedin
+Expert Engineer
+Samsung Electronics Research center Russia
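+
+For illustration only, the failing exchange on the QMP wire would look roughly
+as follows. The "libvirt-1" id value and the "GenericError" class are assumed
+here (JSON cannot carry comments, so the assumptions are noted in this
+sentence); the desc string is the error reported above:
+
+{ "execute": "qmp_capabilities", "id": "libvirt-1" }
+{ "error": { "class": "GenericError", "desc": "QMP input object member 'id' is unexpected" } }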
+
+On Fri, Jun 05, 2015 at 04:58:46PM +0300, Pavel Fedin wrote:
+> Hello!
+>
+>> I have updated my qemu to the recent version and it seems to have lost
+>> compatibility with
+>> libvirt. The error message is:
+>> --- cut ---
+>> internal error: unable to execute QEMU command 'qmp_capabilities': QMP
+>> input object member
+>> 'id' is unexpected
+>> --- cut ---
+>> What does it mean? Is it intentional or not?
+>
+> I have found the problem. It is caused by commit
+> 65207c59d99f2260c5f1d3b9c491146616a522aa. libvirt does not seem to use the
+> removed asynchronous interface but it still feeds in JSONs with 'id' field
+> set to something. So i think the related fragment in qmp_check_input_obj()
+> function should be brought back
+
+If QMP is rejecting the 'id' parameter that is a regression bug.
+
+[quote]
+The QMP spec says
+
+2.3 Issuing Commands
+--------------------
+
+The format for command execution is:
+
+{ "execute": json-string, "arguments": json-object, "id": json-value }
+
+ Where,
+
+- The "execute" member identifies the command to be executed by the Server
+- The "arguments" member is used to pass any arguments required for the
+  execution of the command, it is optional when no arguments are
+  required. Each command documents what contents will be considered
+  valid when handling the json-argument
+- The "id" member is a transaction identification associated with the
+  command execution, it is optional and will be part of the response if
+  provided. The "id" member can be any json-value, although most
+  clients merely use a json-number incremented for each successive
+  command
+
+
+2.4 Commands Responses
+----------------------
+
+There are two possible responses which the Server will issue as the result
+of a command execution: success or error.
+
+2.4.1 success
+-------------
+
+The format of a success response is:
+
+{ "return": json-value, "id": json-value }
+
+ Where,
+
+- The "return" member contains the data returned by the command, which
+  is defined on a per-command basis (usually a json-object or
+  json-array of json-objects, but sometimes a json-number, json-string,
+  or json-array of json-strings); it is an empty json-object if the
+  command does not return data
+- The "id" member contains the transaction identification associated
+  with the command execution if issued by the Client
+
+[/quote]
+
+And as such, libvirt chose to /always/ send an 'id' parameter in all
+commands it issues.
+
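+As a concrete instance of the grammar quoted above (the "libvirt-1" id is an
+illustrative value, not taken from a real client log), a server following the
+spec echoes the client's id back in the success response instead of rejecting
+the command:
+
+{ "execute": "qmp_capabilities", "id": "libvirt-1" }
+{ "return": {}, "id": "libvirt-1" }
+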
+We don't however validate the id in the reply, though arguably we
+should have done so.
+
+Regards,
+Daniel
+-- 
+|: http://berrange.com       -o-  http://www.flickr.com/photos/dberrange/ :|
+|: http://libvirt.org        -o-  http://virt-manager.org                 :|
+|: http://autobuild.org      -o-  http://search.cpan.org/~danberr/        :|
+|: http://entangle-photo.org -o-  http://live.gnome.org/gtk-vnc           :|
+
+"Daniel P. Berrange" <address@hidden> writes:
+
+> On Fri, Jun 05, 2015 at 04:58:46PM +0300, Pavel Fedin wrote:
+>> Hello!
+>>
+>>> I have updated my qemu to the recent version and it seems to have
+>>> lost compatibility with
+>>> libvirt. The error message is:
+>>> --- cut ---
+>>> internal error: unable to execute QEMU command 'qmp_capabilities':
+>>> QMP input object member
+>>> 'id' is unexpected
+>>> --- cut ---
+>>> What does it mean? Is it intentional or not?
+>>
+>> I have found the problem. It is caused by commit
+>> 65207c59d99f2260c5f1d3b9c491146616a522aa. libvirt does not seem to
+>> use the removed asynchronous interface but it still feeds in JSONs
+>> with 'id' field set to something. So i think the related fragment in
+>> qmp_check_input_obj() function should be brought back
+>
+> If QMP is rejecting the 'id' parameter that is a regression bug.
+
+It is definitely a regression, my fault, and I'll get it fixed a.s.a.p.
+
+[...]
+