Diffstat (limited to '')
-rw-r--r--  results/classifier/108/other/1824     |  16
-rw-r--r--  results/classifier/108/other/1824053  | 102
-rw-r--r--  results/classifier/108/other/1824616  |  29
-rw-r--r--  results/classifier/108/other/1824622  |  34
-rw-r--r--  results/classifier/108/other/1824704  | 102
-rw-r--r--  results/classifier/108/other/1824744  |  31
6 files changed, 314 insertions, 0 deletions
diff --git a/results/classifier/108/other/1824 b/results/classifier/108/other/1824
new file mode 100644
index 000000000..2b72a13ac
--- /dev/null
+++ b/results/classifier/108/other/1824
@@ -0,0 +1,16 @@
+device: 0.684
+network: 0.610
+semantic: 0.406
+performance: 0.381
+PID: 0.369
+files: 0.356
+debug: 0.345
+boot: 0.303
+permissions: 0.300
+graphic: 0.270
+socket: 0.249
+other: 0.186
+vnc: 0.133
+KVM: 0.006
+
+[8.x] qemu-user does not build under CentOS 7 any longer
diff --git a/results/classifier/108/other/1824053 b/results/classifier/108/other/1824053
new file mode 100644
index 000000000..a1f82c66c
--- /dev/null
+++ b/results/classifier/108/other/1824053
@@ -0,0 +1,102 @@
+other: 0.790
+graphic: 0.715
+semantic: 0.704
+permissions: 0.674
+device: 0.670
+debug: 0.657
+performance: 0.641
+PID: 0.603
+files: 0.548
+socket: 0.527
+network: 0.516
+boot: 0.496
+KVM: 0.304
+vnc: 0.201
+
+Qemu-img convert appears to be stuck on aarch64 host with low probability
+
+Hi, I found a problem: qemu-img convert occasionally appears to get stuck on an aarch64 host, with low probability.
+
+The convert command line is "qemu-img convert -f qcow2 -O raw disk.qcow2 disk.raw".
+
+The backtrace is below:
+
+Thread 2 (Thread 0x40000b776e50 (LWP 27215)):
+#0  0x000040000a3f2994 in sigtimedwait () from /lib64/libc.so.6
+#1  0x000040000a39c60c in sigwait () from /lib64/libpthread.so.0
+#2  0x0000aaaaaae82610 in sigwait_compat (opaque=0xaaaac5163b00) at util/compatfd.c:37
+#3  0x0000aaaaaae85038 in qemu_thread_start (args=args@entry=0xaaaac5163b90) at util/qemu-thread-posix.c:496
+#4  0x000040000a3918bc in start_thread () from /lib64/libpthread.so.0
+#5  0x000040000a492b2c in thread_start () from /lib64/libc.so.6
+
+Thread 1 (Thread 0x40000b573370 (LWP 27214)):
+#0  0x000040000a489020 in ppoll () from /lib64/libc.so.6
+#1  0x0000aaaaaadaefc0 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, __fds=<optimized out>) at /usr/include/bits/poll2.h:77
+#2  qemu_poll_ns (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>) at qemu-timer.c:391
+#3  0x0000aaaaaadae014 in os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:272
+#4  0x0000aaaaaadae190 in main_loop_wait (nonblocking=<optimized out>) at main-loop.c:534
+#5  0x0000aaaaaad97be0 in convert_do_copy (s=0xffffdc32eb48) at qemu-img.c:1923
+#6  0x0000aaaaaada2d70 in img_convert (argc=<optimized out>, argv=<optimized out>) at qemu-img.c:2414
+#7  0x0000aaaaaad99ac4 in main (argc=7, argv=<optimized out>) at qemu-img.c:5305
+
+
+The problem seems very similar to the phenomenon described by this patch (https://resources.ovirt.org/pub/ovirt-4.1/src/qemu-kvm-ev/0025-aio_notify-force-main-loop-wakeup-with-SIGIO-aarch64.patch),
+
+which forces a main loop wakeup with SIGIO. But this patch was reverted by the patch (http://ovirt.repo.nfrance.com/src/qemu-kvm-ev/kvm-Revert-aio_notify-force-main-loop-wakeup-with-SIGIO-.patch).
+
+The problem still seems to exist on aarch64 hosts. The QEMU version I used is 2.8.1. The host kernel version is 4.19.28-1.2.108.aarch64.
+Do you have any solutions to fix it? Thanks for your reply!
+
+
+Has anyone else seen a similar problem?
+
+I can't reproduce this problem with qemu.git/master; it seems to have been fixed there.
+
+But I haven't found which patch fixed this problem between QEMU 2.8.1 and qemu.git/master.
+
+Could anybody give me some suggestions? Thanks for your reply.
+
+Hi, unfortunately a lot has changed since 2.8 and it might be hard to identify a single individual fix that is responsible for this; aio_context fixes go in with nearly every release.
+
+It may be quickest (unfortunately) to start git-bisecting the problem, to identify which commit alleviates the behavior and whether you can backport it directly -- but you may find that this particular fix has many prerequisites and is difficult to backport to 2.8.1.
+
+Best of luck,
+--js
+
+
+
+Marking this bug as fixed according to comment 2.
+
+dann frazier hit the same problem as me in (https://bugs.launchpad.net/qemu/+bug/1805256).
+
+He said this bug still persists with the latest upstream (@ afccfc0). His reply to me is below:
+
+No, sorry - this bug still persists w/ latest upstream (@ afccfc0). I found a report of similar symptoms:
+
+  https://patchwork.kernel.org/patch/10047341/
+  https://bugzilla.redhat.com/show_bug.cgi?id=1524770#c13
+
+To be clear, ^ is already fixed upstream, so it is not the *same* issue - but perhaps related.
+
+
+Ok, we can track the bug reported by Dann Frazier in ticket 1805256 instead.
+
+I can reproduce this problem with qemu.git/master; it still exists there. I found that when an I/O request completes in a
+worker thread and the thread wants to call aio_notify to wake up the main loop, it can find that ctx->notify_me has already been cleared to 0 by the main loop in aio_ctx_check, via atomic_and(&ctx->notify_me, ~1). The worker thread then does not write to the eventfd to notify the main loop. When this interleaving happens, the main loop hangs:
+
+   main loop                                worker thread 1                         worker thread 2
+-----------------------------------------------------------------------------------------------------------
+   qemu_poll_ns                             aio_worker
+                                            qemu_bh_schedule(pool->completion_bh)
+   glib_pollfds_poll
+   g_main_context_check
+   aio_ctx_check                                                                    aio_worker
+   atomic_and(&ctx->notify_me, ~1)
+                                                                                    qemu_bh_schedule(pool->completion_bh)
+   /* do something for event */
+   qemu_poll_ns
+   /* hangs !!! */
+
+
+As we know, ctx->notify_me is accessed by both the worker threads and the main loop. I think we should add lock protection for ctx->notify_me to avoid this happening.
+
diff --git a/results/classifier/108/other/1824616 b/results/classifier/108/other/1824616
new file mode 100644
index 000000000..f4f7ade9d
--- /dev/null
+++ b/results/classifier/108/other/1824616
@@ -0,0 +1,29 @@
+device: 0.767
+graphic: 0.644
+semantic: 0.635
+network: 0.625
+performance: 0.578
+socket: 0.542
+PID: 0.481
+files: 0.469
+permissions: 0.466
+other: 0.447
+vnc: 0.431
+boot: 0.364
+debug: 0.355
+KVM: 0.202
+
+Build succeeds despite flex/bison missing
+
+I just built qemu using a fresh install, and "make" would report success despite messages of "flex: command not found" and "bison: command not found".
+
+I didn't notice any errors, but I don't know whether that's because there's a workaround in case the tools aren't there, or because I didn't exercise the code paths that would fail.
+
+s/install/git clone/
+
+I think we fixed this at one point in time during the past two years ... can we close this issue now, or could you still reproduce this with the latest version of QEMU?
+
+The warning was supposedly removed by https://github.com/qemu/qemu/commit/67953a379ea5 / https://lists.gnu.org/archive/html/qemu-devel/2020-05/msg03980.html
+
+Yes, let's mark this as fixed now.
+
diff --git a/results/classifier/108/other/1824622 b/results/classifier/108/other/1824622
new file mode 100644
index 000000000..7535a64ab
--- /dev/null
+++ b/results/classifier/108/other/1824622
@@ -0,0 +1,34 @@
+network: 0.872
+files: 0.745
+device: 0.685
+debug: 0.666
+semantic: 0.566
+vnc: 0.558
+performance: 0.541
+socket: 0.453
+graphic: 0.439
+PID: 0.390
+permissions: 0.344
+boot: 0.343
+other: 0.285
+KVM: 0.117
+
+Qemu 4.0.0-rc3 COLO Primary Crashes with "Assertion `event_unhandled_count > 0' failed."
+
+Hello Everyone,
+Now with Qemu 4.0.0-rc3, COLO is finally working, so I gave it a try, but the primary is always crashing during network use. Typing fast in ssh or running "top" with a 0.1 second delay (changed with 'd') reliably triggers the crash for me. I use the attached scripts to run Qemu; in my case both primary and secondary run on the same host for testing purposes. See the files in the attached .tar.bz2 for more info; they also contain a coredump.
+
+Regards,
+Lukas Straub
+
+Configure CMDline:
+./configure --target-list=x86_64-softmmu,i386-softmmu --enable-debug-info
+
+
+
+https://lists.nongnu.org/archive/html/qemu-discuss/2019-04/msg00026.html
+
+There is a Patch available which fixes this bug: https://lists.nongnu.org/archive/html/qemu-devel/2019-04/msg03497.html
+
+Fix applied to qemu 4.1
+
diff --git a/results/classifier/108/other/1824704 b/results/classifier/108/other/1824704
new file mode 100644
index 000000000..794901175
--- /dev/null
+++ b/results/classifier/108/other/1824704
@@ -0,0 +1,102 @@
+semantic: 0.803
+other: 0.686
+permissions: 0.676
+device: 0.639
+performance: 0.607
+PID: 0.558
+graphic: 0.557
+socket: 0.372
+network: 0.345
+vnc: 0.291
+debug: 0.283
+files: 0.216
+boot: 0.209
+KVM: 0.144
+
+-k tr not working after v20171217! Turkish keyboard doesn't work
+
+Hi QEMU,
+
+"-k tr" does not work after v20171217! The Turkish keyboard doesn't work.
+
+The last version that worked without problems was v20171217.
+
+After this version, the tr keyboard has problems.
+FreeDOS, WinPE and Linux images all fail with the tr Turkish keyboard.
+
+Example: pressing the key "ç" shows ","
+Example 2: pressing the key "." shows "ç"
+
+The tr keyboard always behaves like the "en-us" keyboard.
+:((((((((
+
+Please fix this critical bug.
+
+Sincerely
+
+Can you find out which commit broke the keyboard for you? (By using "git bisect" for example)
+
+On Tuesday, 24 September 2019, Thomas Huth <email address hidden> wrote:
+
+> Can you find out which commit broke the keyboard for you? (By using "git
+> bisect" for example)
+
+
+The Turkish Q keyboard is not working.
+
+Pressing "Ç" results in ","
+Pressing "." results in "ç"
+
+
+I meant which version of QEMU is still working for you? Which version fails?
+
+What does "localectl" print (both host and guest please)?
+
+All versions after v20171217 fail.
+
+
+What does v20171217 refer to? A pre-built binary? Windows? Linux? Mac OS? ... sorry, but you have to be a little bit more specific.
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/108/other/1824744 b/results/classifier/108/other/1824744
new file mode 100644
index 000000000..b5f44d274
--- /dev/null
+++ b/results/classifier/108/other/1824744
@@ -0,0 +1,31 @@
+device: 0.833
+vnc: 0.621
+graphic: 0.568
+network: 0.469
+semantic: 0.351
+files: 0.338
+boot: 0.333
+other: 0.320
+performance: 0.303
+debug: 0.264
+socket: 0.199
+PID: 0.157
+permissions: 0.145
+KVM: 0.004
+
+ivshmem PCI device exposes wrong endianness on ppc64le
+
+On a ppc64le host with a ppc64le guest running on QEMU 3.1.0, when an ivshmem device is used, the ivshmem device appears to expose the wrong endianness for the values in BAR 0.
+
+For example, when the guest is assigned an ivshmem device ID of 1, the IVPosition register (u32, offset 8 in BAR 0) returns 0x1000000 instead of 0x1. I tested on an x86_64 machine and the IVPosition reads 0x1 as expected.
+
+It seems possible that there's a ppc64*==bigendian assumption somewhere that is erroneously affecting ppc64le.
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/168
+
+