Diffstat
-rw-r--r--  results/classifier/108/other/1719        22
-rw-r--r--  results/classifier/108/other/1719282    146
-rw-r--r--  results/classifier/108/other/1719339     53
-rw-r--r--  results/classifier/108/other/1719870    102
-rw-r--r--  results/classifier/108/other/1719984     29
5 files changed, 352 insertions, 0 deletions
diff --git a/results/classifier/108/other/1719 b/results/classifier/108/other/1719
new file mode 100644
index 000000000..aaa91c5fc
--- /dev/null
+++ b/results/classifier/108/other/1719
@@ -0,0 +1,22 @@
+graphic: 0.650
+device: 0.591
+semantic: 0.539
+socket: 0.454
+other: 0.422
+network: 0.408
+PID: 0.397
+vnc: 0.367
+files: 0.247
+boot: 0.217
+performance: 0.167
+permissions: 0.111
+debug: 0.108
+KVM: 0.017
+
+Allow TCG plugins to read memory
+Additional information:
+* `include/qemu/plugin.h`
+* `include/qemu/qemu-plugin.h`
+* `plugin/api.c`
+
+PANDA implemented this already (not sure whether this solution is acceptable for mainline QEMU): https://github.com/qemu/qemu/commit/72c661a7f141ab41fbce5e95eb3593b69f40e246
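For context, a minimal sketch of what such a plugin could look like against the existing TCG plugin API. The registration calls used below (qemu_plugin_register_vcpu_tb_trans_cb, qemu_plugin_register_vcpu_mem_cb, qemu_plugin_tb_n_insns, qemu_plugin_tb_get_insn) exist in include/qemu/qemu-plugin.h; the guest-memory read itself is exactly the missing piece this report asks for, so it appears only as a placeholder comment with a hypothetical name:

```
/*
 * Sketch: a TCG plugin that hooks every guest memory access.
 * Today the callback only receives the virtual address; reading the
 * accessed guest memory is the feature requested in this report.
 */
#include <inttypes.h>
#include <stdio.h>

#include <qemu-plugin.h>

QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;

/* Called for every guest memory access of every instrumented instruction. */
static void vcpu_mem(unsigned int cpu_index, qemu_plugin_meminfo_t info,
                     uint64_t vaddr, void *udata)
{
    /*
     * A hypothetical qemu_plugin_read_guest_memory(vaddr, buf, len)
     * would be called here; with the current API a plugin only learns
     * the address of the access.
     */
    fprintf(stderr, "cpu %u accessed guest vaddr 0x%" PRIx64 "\n",
            cpu_index, vaddr);
}

/* Instrument each instruction of every translated block for memory accesses. */
static void vcpu_tb_trans(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
{
    size_t n = qemu_plugin_tb_n_insns(tb);

    for (size_t i = 0; i < n; i++) {
        struct qemu_plugin_insn *insn = qemu_plugin_tb_get_insn(tb, i);
        qemu_plugin_register_vcpu_mem_cb(insn, vcpu_mem,
                                         QEMU_PLUGIN_CB_NO_REGS,
                                         QEMU_PLUGIN_MEM_RW, NULL);
    }
}

QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
                                           const qemu_info_t *info,
                                           int argc, char **argv)
{
    qemu_plugin_register_vcpu_tb_trans_cb(id, vcpu_tb_trans);
    return 0;
}
```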
diff --git a/results/classifier/108/other/1719282 b/results/classifier/108/other/1719282
new file mode 100644
index 000000000..1ddc2ba8f
--- /dev/null
+++ b/results/classifier/108/other/1719282
@@ -0,0 +1,146 @@
+other: 0.914
+graphic: 0.866
+PID: 0.828
+performance: 0.823
+device: 0.818
+permissions: 0.798
+vnc: 0.781
+semantic: 0.773
+socket: 0.764
+debug: 0.763
+boot: 0.757
+network: 0.729
+KVM: 0.708
+files: 0.697
+
+Unable to boot after drive-mirror
+
+Hi,
+I am using the "drive-mirror" QMP block-job command to transfer a VM disk image to another path (a different physical disk on the host).
+Unfortunately, after shutting down and starting from the new image, the VM is unable to boot and GRUB enters rescue mode, displaying the following error:
+```
+error: file '/grub/i386-pc/normal.mod' not found.
+Entering rescue mode...
+grub rescue>
+```
+
+To investigate the problem, I compared both raw images using the Linux "cmp -l" command and found that they differ in 569028 bytes, in the range from offset 185598977 to 252708864, which lies on the /boot partition.
+
+So I mounted the /boot partition of the mirrored raw image on the host OS; the filesystem appears to be corrupted and the grub directory is not recognized. The /boot partition of the original raw image has no such problem.
+
+Mirrored Image:
+ls -l /mnt/vm-boot/
+ls: cannot access /mnt/vm-boot/grub: Structure needs cleaning
+total 38168
+-rw-r--r-- 1 root root   157721 Oct 19  2016 config-3.16.0-4-amd64
+-rw-r--r-- 1 root root   129281 Sep 20  2015 config-3.2.0-4-amd64
+d????????? ? ?    ?           ?            ? grub
+-rw-r--r-- 1 root root 15739360 Nov  2  2016 initrd.img-3.16.0-4-amd64
+-rw-r--r-- 1 root root 12115412 Oct 10  2015 initrd.img-3.2.0-4-amd64
+drwxr-xr-x 2 root root    12288 Oct  7  2013 lost+found
+-rw-r--r-- 1 root root  2679264 Oct 19  2016 System.map-3.16.0-4-amd64
+-rw-r--r-- 1 root root  2114662 Sep 20  2015 System.map-3.2.0-4-amd64
+-rw-r--r-- 1 root root  3126448 Oct 19  2016 vmlinuz-3.16.0-4-amd64
+-rw-r--r-- 1 root root  2842592 Sep 20  2015 vmlinuz-3.2.0-4-amd64
+
+Original Image:
+ls /mnt/vm-boot/ -l
+total 38173
+-rw-r--r-- 1 root root   157721 Oct 19  2016 config-3.16.0-4-amd64
+-rw-r--r-- 1 root root   129281 Sep 20  2015 config-3.2.0-4-amd64
+drwxr-xr-x 5 root root     5120 Nov  2  2016 grub
+-rw-r--r-- 1 root root 15739360 Nov  2  2016 initrd.img-3.16.0-4-amd64
+-rw-r--r-- 1 root root 12115412 Oct 10  2015 initrd.img-3.2.0-4-amd64
+drwxr-xr-x 2 root root    12288 Oct  7  2013 lost+found
+-rw-r--r-- 1 root root  2679264 Oct 19  2016 System.map-3.16.0-4-amd64
+-rw-r--r-- 1 root root  2114662 Sep 20  2015 System.map-3.2.0-4-amd64
+-rw-r--r-- 1 root root  3126448 Oct 19  2016 vmlinuz-3.16.0-4-amd64
+-rw-r--r-- 1 root root  2842592 Sep 20  2015 vmlinuz-3.2.0-4-amd64
+
+ls /mnt/vm-boot/grub/ -l
+total 2376
+-rw-r--r-- 1 root root      48 Oct  7  2013 device.map
+drwxr-xr-x 2 root root    1024 Oct 10  2015 fonts
+-r--r--r-- 1 root root    9432 Nov  2  2016 grub.cfg
+-rw-r--r-- 1 root root    1024 Oct  7  2013 grubenv
+drwxr-xr-x 2 root root    6144 Aug  6  2016 i386-pc
+drwxr-xr-x 2 root root    1024 Aug  6  2016 locale
+-rw-r--r-- 1 root root 2400500 Aug  6  2016 unicode.pf2
+
+qemu Version: 2.7.0-10
+
+Host OS: Debian 8x64
+Guest OS: Debian 8x64
+
+QMP Commands log:
+socat UNIX-CONNECT:/var/run/qemu-server/48016.qmp STDIO
+{"QMP": {"version": {"qemu": {"micro": 0, "minor": 7, "major": 2}, "package": "pve-qemu-kvm_2.7.0-10"}, "capabilities": []}}
+{ "execute": "qmp_capabilities" }
+{"return": {}}
+{ "execute": "drive-mirror",
+  "arguments": {
+    "device": "drive-ide0",
+    "target": "/diskc/48016/vm-48016-disk-2.raw",
+    "sync": "full",
+    "mode": "absolute-paths",
+    "speed": 0
+  }
+}
+{"return": {}}
+{"timestamp": {"seconds": 1506331591, "microseconds": 623095}, "event": "BLOCK_JOB_READY", "data": {"device": "drive-ide0", "len": 269445758976, "offset": 269445758976, "speed": 0, "type": "mirror"}}
+{"timestamp": {"seconds": 1506332641, "microseconds": 245272}, "event": "SHUTDOWN"}
+{"timestamp": {"seconds": 1506332641, "microseconds": 377751}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-ide0", "len": 271707340800, "offset": 271707340800, "speed": 0, "type": "mirror"}}
+
+Do you have more information on the sequence of commands issued to QEMU? I see the drive-mirror invocation, but then I don't see what causes the shutdown or the BLOCK_JOB_COMPLETED event. Usually this is in response to a user command. I'm wondering if the exact sequence issued is safe.
+
+Do you have a reproducer that I could try on my system to examine the behavior?
+
+Also, 2.7 is a bit old at this point; do you have the ability to try a version currently supported by the upstream project? (2.9 or 2.10?)
+
+
+
+In the last try, the VM shut down before the block job completed.
+So I tried again; these are the exact QMP commands I used:
+
+Sequence of QMP commands:
+
+socat UNIX-CONNECT:/var/run/qemu-server/48016.qmp STDIO
+{"QMP": {"version": {"qemu": {"micro": 0, "minor": 7, "major": 2}, "package": "pve-qemu-kvm_2.7.0-10"}, "capabilities": []}}
+{ "execute": "qmp_capabilities" }
+{"return": {}}
+{ "execute": "drive-mirror",
+  "arguments": {
+    "device": "drive-ide0",
+    "target": "/diskb/48016/vm-48016-disk-1.raw",
+    "sync": "full",
+    "mode": "absolute-paths",
+    "speed": 0
+  }
+}
+{"return": {}}
+{"timestamp": {"seconds": 1506434603, "microseconds": 633439}, "event": "BLOCK_JOB_READY", "data": {"device": "drive-ide0", "len": 268479496192, "offset": 268479496192, "speed": 0, "type": "mirror"}}
+{ "execute": 'block-job-complete', 'arguments': { 'device': 'drive-ide0' } }
+{"return": {}}
+{"timestamp": {"seconds": 1506494590, "microseconds": 735601}, "event": "BLOCK_JOB_COMPLETED", "data": {"device": "drive-ide0", "len": 278522167296, "offset": 278522167296, "speed": 0, "type": "mirror"}}
+
+Then I power off the VM and start it again from the new image, but GRUB starts in rescue mode.
+
+It *looks* to me at a glance like the sequence should be safe, but I don't have any hunches for what could be going wrong, or why.
+
+Can you please post:
+
+(1) The command-line used to launch QEMU on the source machine, and
+(2) The command-line used to launch QEMU on the destination machine from the mirrored image?
+
+There is no source or destination machine. I used drive-mirror to move the VM image to a different physical disk on the same machine ("mode": "absolute-paths"). After block-job-complete and shutting down the VM, I start the VM again with the same command line, only with the drive path changed to point to the mirrored image: "-drive 'file=MIRRORED_IMAGE_PATH..".
+
+* Command line used to launch VM:
+```
+/usr/bin/kvm -id 48016 -chardev 'socket,id=qmp,path=/var/run/qemu-server/48016.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/48016.pid -daemonize -smbios 'type=1,uuid=7a4b5ebc-a230-4e57-8ebc-4979a7b5a378' -name srv34197 -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga cirrus -vnc unix:/var/run/qemu-server/48016.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 8192 -k en-us -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -chardev 'socket,path=/var/run/qemu-server/48016.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:6f368eef312d' -drive 'file=/var/lib/vz/images/48016/vm-48016-disk-1.raw,if=none,id=drive-ide0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -drive 'file=/var/lib/vz/template/iso/sysresccd-v03.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -netdev 'type=tap,id=net0,ifname=tap48016i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'rtl8139,mac=D6:89:56:3F:38:1F,netdev=net0,bus=pci.0,addr=0x12,id=net0' -netdev 'type=tap,id=net1,ifname=tap48016i1,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'rtl8139,mac=66:92:13:4A:6B:7E,netdev=net1,bus=pci.0,addr=0x13,id=net1'
+```
+
+
+OK, so we're only talking about migrating a disk and not a whole VM, I misunderstood. However... are you using qemu *2.7*? That's quite old! Before digging into this further I need to insist that you try on a supported release, either 4.0.1, 4.1.1, or 4.2.0.
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
diff --git a/results/classifier/108/other/1719339 b/results/classifier/108/other/1719339
new file mode 100644
index 000000000..4f5e76b0e
--- /dev/null
+++ b/results/classifier/108/other/1719339
@@ -0,0 +1,53 @@
+KVM: 0.614
+vnc: 0.506
+other: 0.456
+device: 0.429
+network: 0.422
+permissions: 0.420
+files: 0.413
+performance: 0.372
+boot: 0.370
+debug: 0.362
+semantic: 0.350
+socket: 0.341
+graphic: 0.337
+PID: 0.333
+
+serial8250: too much work for irq3
+
+It's a known issue that has been mentioned occasionally since 2007, but it still seems unfixed.
+
+http://lists.gnu.org/archive/html/qemu-devel/2008-02/msg00140.html
+https://bugzilla.redhat.com/show_bug.cgi?id=986761
+http://old-list-archives.xenproject.org/archives/html/xen-devel/2009-02/msg00696.html
+
+I don't think fixes like increasing PASS_LIMIT (drivers/tty/serial/8250.c) or removing this annoying message (https://patchwork.kernel.org/patch/3920801/) are a real fix. A fix was proposed by H. Peter Anvin: https://lkml.org/lkml/2008/2/7/485.
+
+I can reproduce this on a Debian Stretch host (QEMU 1:2.8+dfsg-6+deb9u2) and on Ubuntu 16.04.2 LTS (QEMU 1:2.5+dfsg-5ubuntu10.15); I also tried the master branch (QEMU emulator version 2.10.50 (v2.10.0-766-ga43415ebfd-dirty)). The issue appears when a lot of data is written to the console (dmesg or dd if=/dev/zero of=/dev/ttyS1).
+
+/usr/local/bin/qemu-system-x86_64 -name guest=ultra1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-27-ultra1/master-key.aes -machine pc-i440fx-2.8,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Client,ds=on,acpi=on,ss=on,ht=on,tm=on,pbe=on,dtes64=on,monitor=on,ds_cpl=on,vmx=on,smx=on,est=on,tm2=on,xtpr=on,pdcm=on,osxsave=on,tsc_adjust=on,clflushopt=on,pdpe1gb=on -m 4096 -realtime mlock=off -smp 4,sockets=1,cores=4,threads=1 -uuid 4537ca29-73b2-40c3-9b43-666de182ba5f -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-27-ultra1/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x8.0x7 -drive file=/home/dzagorui/csr/csr_disk.qcow2,format=qcow2,if=none,id=drive-ide0-0-0 -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=26,id=hostnet0 -device e1000,netdev=hostnet0,id=net0,mac=52:54:00:a9:4c:86,bus=pci.0,addr=0x3 -chardev socket,id=charserial0,host=127.0.0.1,port=4000,telnet,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charserial1,host=127.0.0.1,port=4001,telnet,server,nowait -device isa-serial,chardev=charserial1,id=serial1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x2 -msg timestamp=on
+
+Here is a simpler setup for reproducing.
+Only qemu-system-x86_64 was used (without high-level wrappers and virtual machine managers: libvirt, virsh, virt-install, virt-manager, etc.). My setup with two consoles:
+
+/usr/local/bin/qemu-system-x86_64 -cpu host -enable-kvm -m 256 -smp 4 -kernel /home/dzagorui//bzImage -append 'root=/dev/ram0 loglevel=9 rw console=ttyS0' -initrd /home/dzagorui/initrd.cpio -display none -chardev socket,id=charserial0,host=127.0.0.1,port=4002,telnet,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charserial1,host=127.0.0.1,port=4003,telnet,server,nowait -device isa-serial,chardev=charserial1,id=serial1
+
+I noticed one thing: the -smp parameter affects this issue. With -smp 1 I can't reproduce it at all; with -smp 2 I can reproduce it only on the second console (ttyS1); with -smp 4 and higher it reproduces on both consoles (ttyS0/ttyS1).
+My host CPU (i5-6200U) has 2 cores and 4 threads.
+
+These commands were used for reproducing (it does not matter which console, ttyS0 or ttyS1, is used):
+#dmesg > /dev/ttyS*
+#dd if=/dev/zero of=/dev/ttyS*
+
+I'm seeing this on AWS EC2 when there's (apparently) high logging volume to the console, very similarly to https://www.reddit.com/r/sysadmin/comments/6zuqad/mongodb_aws_ec2_serial8250_too_much_work_for_irq4/
+
+On further investigation of my instance, there appeared to be no high logging volume to the console, nor anything using /dev/ttyS0 other than agetty. Switching from the generic kernel to the AWS kernel seems to have stabilised it.
+
+Further update: AWS kernel experienced the same error messages after just over 3 hours of runtime.
+
+The QEMU project is currently considering to move its bug tracking to another system. For this we need to know which bugs are still valid and which could be closed already. Thus we are setting older bugs to "Incomplete" now.
+If you still think this bug report here is valid, then please switch the state back to "New" within the next 60 days, otherwise this report will be marked as "Expired". Or mark it as "Fix Released" if the problem has been solved with a newer version of QEMU already. Thank you and sorry for the inconvenience.
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
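As background on the message itself: the 8250 driver's interrupt handler keeps looping while a port still reports pending work, and gives up with "too much work for irqN" once a pass counter exceeds a fixed limit (PASS_LIMIT in drivers/tty/serial/8250). The standalone snippet below is only a simplified illustration of that guard logic, not the kernel code; an emulated UART that delivers data faster than the guest can drain it keeps the condition true until the limit trips:

```
/*
 * Simplified, standalone illustration of the guard that produces
 * "serial8250: too much work for irqN".  This is NOT the kernel code,
 * just a sketch of the mechanism described above.
 */
#include <stdbool.h>
#include <stdio.h>

#define PASS_LIMIT 256  /* the kernel driver uses a small fixed limit like this */

/* Stand-in for "does the UART still report pending rx/tx work?". */
static bool uart_has_pending_work(void)
{
    return true;  /* model an emulated UART that never goes idle */
}

static void handle_irq(int irq)
{
    int pass_counter = 0;

    while (uart_has_pending_work()) {
        /* ...service the port: drain RBR, refill THR, ack interrupts... */
        if (pass_counter++ > PASS_LIMIT) {
            fprintf(stderr, "serial8250: too much work for irq%d\n", irq);
            break;
        }
    }
}

int main(void)
{
    handle_irq(3);
    return 0;
}
```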
diff --git a/results/classifier/108/other/1719870 b/results/classifier/108/other/1719870
new file mode 100644
index 000000000..974c7b214
--- /dev/null
+++ b/results/classifier/108/other/1719870
@@ -0,0 +1,102 @@
+other: 0.970
+permissions: 0.948
+performance: 0.943
+device: 0.934
+semantic: 0.931
+graphic: 0.917
+socket: 0.911
+debug: 0.900
+boot: 0.898
+files: 0.884
+PID: 0.878
+network: 0.859
+vnc: 0.721
+KVM: 0.687
+
+Converting 100G VHDX fixed image to QCOW2 fails
+
+The virtual size is recognized incorrectly for a fixed VHDX disk, and the conversion fails with the assertion error: Expression: !qiov || bytes == qiov->size
+
+
+
+PS > & 'C:\Program Files\qemu\qemu-img.exe' --version
+qemu-img version 2.10.0 (v2.10.0-11669-g579e69bd5b-dirty)
+Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
+
+Command capture,
+
+PS > & 'C:\Program Files\qemu\qemu-img.exe' --version
+qemu-img version 2.10.0 (v2.10.0-11669-g579e69bd5b-dirty)
+Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
+
+PS > Get-VHD .\VM.vhdx
+ComputerName            : Server1
+Path                    : \VM.vhdx
+VhdFormat               : VHDX
+VhdType                 : Fixed
+FileSize                : 107378376704
+Size                    : 107374182400
+MinimumSize             : 107374182400
+LogicalSectorSize       : 4096
+PhysicalSectorSize      : 4096
+BlockSize               : 0
+ParentPath              :
+DiskIdentifier          : 53fd4aa7-562e-4bed-bc1c-2db71222e07e
+FragmentationPercentage : 0
+Alignment               : 1
+Attached                : False
+DiskNumber              :
+Key                     :
+IsDeleted               : False
+Number                  :
+PS > & 'C:\Program Files\qemu\qemu-img.exe' convert -O qcow2 .\VM.vhdx .\VM.qcow2
+Assertion failed!
+Program: C:\Program Files\qemu\qemu-img.exe
+File: /home/stefan/src/qemu/repo.or.cz/qemu/ar7/block/io.c, Line 1034
+Expression: !qiov || bytes == qiov->size
+PS > & 'C:\Program Files\qemu\qemu-img.exe' info .\VM.qcow2
+image: .\VM.qcow2
+file format: qcow2
+virtual size: 13G (13421772800 bytes)
+disk size: 1.4G
+cluster_size: 65536
+Format specific information:
+    compat: 1.1
+    lazy refcounts: false
+    refcount bits: 16
+    corrupt: false
+
+The bug is reproducible by creating a fixed VHDX with a LogicalSectorSize of 4096 bytes:
+a 10G image is reported by qemu-img as a virtual size of only 1.2G.
+PS F:\> new-vhd -path test.vhdx -BlockSizeBytes 134217728 -SizeBytes 10737418240 -Fixed -LogicalSectorSizeBytes 4096
+ComputerName            : XXXX
+Path                    : F:\test.vhdx
+VhdFormat               : VHDX
+VhdType                 : Fixed
+FileSize                : 10741612544
+Size                    : 10737418240
+MinimumSize             :
+LogicalSectorSize       : 4096
+PhysicalSectorSize      : 4096
+BlockSize               : 0
+ParentPath              :
+DiskIdentifier          : dfa84293-86f2-4ddf-aaff-14c04dae5df9
+FragmentationPercentage : 0
+Alignment               : 1
+Attached                : False
+DiskNumber              :
+Key                     :
+IsDeleted               : False
+Number                  :
+PS F:\> C:\temp\qemu-img\qemu-img.exe info .\test.vhdx
+image: .\test.vhdx
+file format: vhdx
+virtual size: 1.2G (1342177280 bytes)
+disk size: 10G
+cluster_size: 134217728
+
+Looking through old bug tickets... can you still reproduce this issue with the latest version of QEMU? Or could we close this ticket nowadays?
+
+
+[Expired for QEMU because there has been no activity for 60 days.]
+
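A note on the figures in the report above (an observation from the reported numbers, not a confirmed root-cause analysis): both misreported virtual sizes are exactly one eighth of the real size — 13421772800 × 8 = 107374182400 and 1342177280 × 8 = 10737418240 — and 4096 / 512 = 8, which suggests the VHDX sector count is being scaled with a 512-byte sector size even though the image uses a 4096-byte LogicalSectorSize.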
diff --git a/results/classifier/108/other/1719984 b/results/classifier/108/other/1719984
new file mode 100644
index 000000000..52cf7e734
--- /dev/null
+++ b/results/classifier/108/other/1719984
@@ -0,0 +1,29 @@
+graphic: 0.887
+device: 0.786
+network: 0.762
+semantic: 0.650
+boot: 0.624
+performance: 0.608
+socket: 0.597
+vnc: 0.595
+permissions: 0.511
+PID: 0.511
+files: 0.420
+KVM: 0.312
+other: 0.185
+debug: 0.183
+
+wrgsbase misemulated in x86_64-softmmu
+
+qemu revision: cfe4cade054c0e0d00d0185cdc433a9e3ce3e2e4
+command: ./qemu-system-x86_64 -m 2048 -nographic -net none -smp 4,threads=2 -machine q35 -kernel zircon.bin -cpu Haswell,+smap,-check -initrd bootdata.bin -append 'TERM=screen kernel.halt-on-panic=true '
+
+On this revision, the VM reports CPUID.07H.0H.EBX[0] (FSGSBASE) = 1. In this VM, with CR4[16] (CR4.FSGSBASE) set to 1, wrgsbase triggers #UD, which contradicts the behavior described in Intel's instruction reference.
+
+For further data, the faulting instruction is
+f3 48 0f ae df          wrgsbase %rdi
+
+Fix is in staging: https://github.com/ehabkost/qemu/commit/cdcc80d41360e278b45c91de29a29d797264e85d
+
+Fix is in master: https://github.com/qemu/qemu/commit/e0dd5fd41a1a38766009f442967fab700d2d0550
+
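For reference, a minimal user-space probe for this behaviour (a sketch under assumptions not stated in the report: a Linux guest that enables user-space FSGSBASE, i.e. CR4.FSGSBASE set and kernel 5.9 or later, built with gcc -O2 -mfsgsbase; the original report hit the fault from the Zircon kernel, not from user space). Under correct emulation this prints the value written; under the bug described here the wrgsbase would raise #UD and the process would die with SIGILL even though CPUID advertises FSGSBASE:

```
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    unsigned long long v = 0x1234;

    _writegsbase_u64(v);        /* compiles to the wrgsbase instruction */
    printf("gsbase = 0x%llx\n", (unsigned long long)_readgsbase_u64());
    return 0;
}
```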