Diffstat
-rw-r--r--  results/classifier/108/other/1853      |  16
-rw-r--r--  results/classifier/108/other/1853042   | 352
-rw-r--r--  results/classifier/108/other/1853083   | 167
-rw-r--r--  results/classifier/108/other/1853429   |  43
-rw-r--r--  results/classifier/108/other/1853826   | 446
-rw-r--r--  results/classifier/108/other/1853898   |  55
6 files changed, 1079 insertions, 0 deletions
diff --git a/results/classifier/108/other/1853 b/results/classifier/108/other/1853 new file mode 100644 index 000000000..af794fe78 --- /dev/null +++ b/results/classifier/108/other/1853 @@ -0,0 +1,16 @@ +device: 0.846 +network: 0.537 +debug: 0.483 +graphic: 0.407 +boot: 0.379 +performance: 0.294 +other: 0.213 +semantic: 0.206 +socket: 0.194 +permissions: 0.138 +files: 0.138 +vnc: 0.115 +PID: 0.046 +KVM: 0.001 + +Errors when install QEMU from source code diff --git a/results/classifier/108/other/1853042 b/results/classifier/108/other/1853042 new file mode 100644 index 000000000..0d6c41ff4 --- /dev/null +++ b/results/classifier/108/other/1853042 @@ -0,0 +1,352 @@ +debug: 0.883 +other: 0.873 +graphic: 0.868 +semantic: 0.866 +performance: 0.860 +device: 0.839 +permissions: 0.816 +vnc: 0.816 +socket: 0.807 +PID: 0.797 +files: 0.780 +network: 0.778 +KVM: 0.777 +boot: 0.752 + +Ubuntu 18.04 - vm disk i/o performance issue when using file system passthrough + +== Comment: #0 - I-HSIN CHUNG <email address hidden> - 2019-11-15 12:35:05 == +---Problem Description--- +Ubuntu 18.04 - vm disk i/o performance issue when using file system passthrough + +Contact Information = <email address hidden> + +---uname output--- +Linux css-host-22 4.15.0-1039-ibm-gt #41-Ubuntu SMP Wed Oct 2 10:52:25 UTC 2019 ppc64le ppc64le ppc64le GNU/Linux (host) Linux ubuntu 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 17:08:54 UTC 2019 ppc64le ppc64le ppc64le GNU/Linux (vm) + +Machine Type = p9/ac922 + +---Debugger--- +A debugger is not configured + +---Steps to Reproduce--- + 1. Env: Ubuntu 18.04.3 LTS?Genesis kernel linux-ibm-gt - 4.15.0-1039.41?qemu 1:2.11+dfsg-1ubuntu7.18 ibmcloud0.3 or 1:2.11+dfsg-1ubuntu7.19 ibm-cloud1?fio-3.15-4-g029b + +2. execute run.sh to run fio benchmark: + +2.1) run.sh: +#!/bin/bash + +for bs in 4k 16m +do + +for rwmixread in 0 25 50 75 100 +do + +for numjobs in 1 4 16 64 +do +echo ./fio j1.txt --bs=$bs --rwmixread=$rwmixread --numjobs=$numjobs +./fio j1.txt --bs=$bs --rwmixread=$rwmixread --numjobs=$numjobs + +done +done +done + +2.2) j1.txt: + +[global] +direct=1 +rw=randrw +refill_buffers +norandommap +randrepeat=0 +ioengine=libaio +iodepth=64 +runtime=60 + +allow_mounted_write=1 + +[job2] +new_group +filename=/dev/vdb +filesize=1000g +cpus_allowed=0-63 +numa_cpu_nodes=0 +numa_mem_policy=bind:0 + +3. performance profile: +device passthrough performance for the nvme: +http://css-host-22.watson.ibm.com/rundir/nvme_vm_perf_vm/20191011-112156/html/#/measurement/vm/ubuntu (I/O bandwidth achieved inside VM in GB/s range) + +file system passthrough +http://css-host-22.watson.ibm.com/rundir/nvme_vm_perf_vm/20191106-123613/html/#/measurement/vm/ubuntu (I/o bandwidth achieved inside the VM is very low) + +desired performance when using file system passthrough should be similar to the device passthrough + +Userspace tool common name: fio + +The userspace tool has the following bit modes: should be 64 bit + +Userspace rpm: ? + +Userspace tool obtained from project website: na + +*Additional Instructions for <email address hidden>: +-Post a private note with access information to the machine that the bug is occuring on. +-Attach ltrace and strace of userspace application. + +Hi, +Let me provide my expectations (which are different): +You said "desired performance when using file system passthrough should be similar to the device passthrough" +IMHO that isn’t right - it might be "desired" but unrealistic to "be expected" + +Usually you have a hierarchy: +1. device passthrough +2. using block devices +3. 
using images on Host Filesystem +4. using images on semi-remote cluster filesystems +(and a few special cases in between) + +Those usually from 1->4 are decreasing performance but increasing flexibility and manageability. + +So I wanted to give a heads up based on the initial report that eventually this might end up as "please adjust expectations" + +--- + +That said, let’s focus on what your setup actually looks like and if there are obvious improvements or hidden bugs. +Unfortunately "file system passthrough" isn't a clearly defined thing. +Could you: +1) outline which disk storage you attached to the host +2) which filesystem is on that storage +3) how you are passing files and/or images to the guest + +Please explain the questions above and attach libvirts guest xml of both of your test cases. + +P.S. nvme passthrough will soon become even faster on ppc64el due to the fix for bug LP 1847948 + +------- Comment From <email address hidden> 2019-11-19 10:40 EDT------- +1) nvme installed inside p9/ac992 +2) File system pass through +# cd /nvme0 +# qemu-img create -f qcow2 nvme1.img 3GFormatting ?nvme1.img?, fmt=qcow2 size=3221225472 cluster_size=65536 lazy_refcounts=off refcount_bits=16 +# virsh attach-disk test_vm /nvme0/nvme1.img vdb --driver qemu --subdriver qcow2 --targetbus virtio --persistentDisk attached successfully + +# virsh detach-disk test_vm /nvme0/nvme1.imgDisk detached successfully + +Thanks for confirming the setup that I already have assumed. + +This will naturally be slower for having: +a) overhead by the host filesystem +b) overhead by qcow metadata handling +c) exits to the host for the I/O (biggest single element of these) +d) less concurrency as the defaults for queue count and depths +e) with the PT case you do real direct I/O write, while the other probably caches in the host which most of the time is bad. + +I can help you to optimize the tunables for the image that you attach if you want that. +That will make it slightly better than it is, but clearly it will never reach the performance of the nvme-passthrough. + +Let me know if you want some general tuning guidance on those or not. + +Hi, Christian. + +It would be great if you could share your knowledge on this. + +Thank you! + +Murilo + +Hi, +a little personal disclaimer. +This might not be perfect as I don't know your case in detail enough. +And also in general there is always "one more tuning" that you can tune :-) + +An example of an alternative might be to partition the nvme and pass the partitions as block devices => less flexible for live cycle management, but usually much faster - it is up to your needs entirely. +All these things will come at a tradeoff like overhead / features / affecting different workload patterns differently. + +For now my suggestion here tries to stick to the qcow2 image on FS option that you have chosen and just tune that one. + +Lets summarize the benefits of NVME passthrough to images on host FS +- lower latency +- more and deeper queues +- no host caching adding overhead +- features +- less overhead +- ... + +Lets take care about a few of those ... +A disk by default will start like: + + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2'/> + <source file='/var/lib/uvtool/libvirt/images/yourdisk.qcow'/> + <target dev='vdb' bus='virtio'/> + </disk> + +Your's will most likely look very similar. +Note I prefer an XML compared to the command line options as there are more features and it is more easy to document and audit later on (even if generated). 
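A minimal sketch of applying such an XML snippet, assuming the guest is still called test_vm as in the earlier comment and the disk definition is saved as disk.xml (both names are placeholders):

# edit the persistent domain definition in place
virsh edit test_vm

# or attach a device described by a standalone XML file without editing the whole domain
virsh attach-device test_vm disk.xml --persistent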
+ + +#1 Queues +By default a virtio block device has only one queue which is enough in many cases. +But on huge systems with massively parallel work the might not be. +Add something like - queues='8' - to your driver section. + +example: +<driver name='qemu' type='qcow2' queues='8'/> + +#2 Caching +Not always, but often for high speed I/O host caching is a drawback. +You can disable that via - cache='none' - in your driver section. + +example: +<driver name='qemu' type='qcow2' queues='8' cache='none'/> + +#3 Features +NVME disks after all are flash and you'd want to make sure that it learns about freed space (discard) in the long run. virtio-blk won't transfer those, but virtio-scsi will. + <controller type='scsi' index='0' model='virtio-scsi'> + <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/> + </controller> + +Combining that with the above needs to adapt slightly as with virtio-scsi the controller has the queues, so queues='x' moves here. +E.g. read https://mpolednik.github.io/2017/01/23/virtio-blk-vs-virtio-scsi/ and links from there. + + +#4 IOThreads +By default your VM might have not enough I/O threads. +If you have multiple high performing elements you might want to assign them an individual one. +You'll need to allocate more and then assign more to your disk(controller). +Again this depends on your setup, and here I only outline how to unblock multiple disks from each others iothread. +1. in the domain section the threads + <iothreads>4</iothreads> +2. if you use virtio-scsi as above you assign iothreads at the controller level and might therefore want one controller per disk (in other cases you might assign the disk) + +(assign separate threads to each controller and have each disk have its own controller) + + + +--- + +Overall modified from the initial example: + + <domain> + ... + <iothreads>4</iothreads> + <devices> + ... + <controller type='scsi' index='0' model='virtio-scsi'> + <driver queues='8' iothread='3'/> + </controller> + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2' cache='none'/> + <source file='/var/lib/uvtool/libvirt/images/yourdisk.qcow'/> + <target dev='sda' bus='scsi'/> + <address type='drive' controller='0' bus='0' target='0' unit='0'/> + </disk> + +--- + +Some choices of the setup e.g. using qcow files - will limit some of the benefits above. +At least in the past emulation limited the concurrency of actions here as well as passthrough of discard might be less effective. +Raw files and even more so partitions will be much faster - but as I said all is up to your specific needs. + +You might need to evaluate all the alternatives between qcow files and NVME-passthrough (which are like polar opposite) and rethink what you need in terms of features/performance and then pick YOUR tradeoff. + + +You'll find generic documentation about what I used above at: +https://libvirt.org/formatdomain.html#elementsDisks +https://libvirt.org/formatdomain.html#elementsIOThreadsAllocation + +As you see this is only the tip of the iceberg :-) + +------- Comment From <email address hidden> 2019-11-20 10:28 EDT------- +Hello, + +Thanks very much for the advice. I will try those settings as suggested. + +Our high-level goal is to provide a solution to enable NVMe as the local storage in the cloud node shared by multiple VMs. This would require 1) allocated storage space for different VMs with (security) isolation 2) I/O bandwidth performance with QoS. 
+ +For high overhead of file system passthrough, is it possible to profile the performance overhead? + +Thanks again. + +I-Hsin + +As I said, you have to make your own tradeoff choices. +I still don't know enough of your needs, but from the bit I heard you can run e.g. an LVM on the NVME in the host and provision "disks" from the volume group on it (on many) which will be less flexible (but hey, still even has snapshots) but faster than images-on-FS while at the same time being more flexible than "just" partitions. + +Disk PT <-> partitions <-> LVM <-> images on FS +Speed <-> Features +(many suboptions in between) + +Sorry, but from here those are decisions you have to make :-/ + +About profiling, that is possible, but not helpful and in the worst case will add additional latency. Your initial decision which block architecture to use will make 95% of the perf/feature tradeoff. Later tuning profiling can usually only gain the remaining 5%, so I'd recommend to not focus on that part. + +P.S. This bug turns into consulting which I sometimes like (performance :-) ) but looking at my todo list I have other important things to do - sorry. I hope my guidance helped you to make better choices! + +Closing based on the assumption that Christian's response in comment #7 answered the relevant questions. + +------- Comment From <email address hidden> 2019-12-16 15:02 EDT------- +some summary we have so far + +css-host-22 is p9/ac922 and css-host-130 is x86/hgx1 + +1. Device pass through + +P9/x86 similar performance +Bare metal / VM similar performance +0.5GB/s to 1 GB/s for writes +Up to 4-5 GB/s for reads +Millions of int/csw per seconds for 4k page size and much smaller for 16 MB page size + +profiles + +Bare Metal +http://css-host-22.watson.ibm.com/rundir/nvme_bm_perf_on_bm/20191004-103224/html/#/measurement/nvme_bm_perf_bm/css-host-22 +http://css-host-130.watson.ibm.com/rundir/nvme_bm_perf_on_bm/20191210-132738/html/#/measurement/nvme_bm_perf_bm/css-host-130 +Virtual Machine +http://css-host-22.watson.ibm.com/rundir/nvme_vm_perf_vm/20191011-112156/html/#/measurement/vm/ubuntu +http://css-host-130.watson.ibm.com/rundir/nvme_vm_perf_vm/20191210-223913/html/#/measurement/vm/ubuntu + +2. file system pass through + +P9/x86 similar performance +I/O bw achieved with 16MB page size, but very low I/O bw achieved with 4k page size +CPU utilization is much lower on P9/AC922 +On x86, CPU is waiting for some process to get to it (50% for qcow2 and 35% for raw) +Int/csw +Higher on x86 +Higher with 4k page size +Higher with raw + +profiles + +qcow2 +http://css-host-130.watson.ibm.com/rundir/nvme_vm_perf_vm_qcow2/20191211-232233/html/#/measurement/vm/ubuntu +http://css-host-22.watson.ibm.com/rundir/nvme_vm_perf_vm_qcow2/20191211-233729/html/#/measurement/vm/ubuntu +raw +http://css-host-130.watson.ibm.com/rundir/nvme_vm_perf_vm_raw/20191212-082717/html/#/measurement/vm/ubuntu +http://css-host-22.watson.ibm.com/rundir/nvme_vm_perf_vm_raw/20191212-082810/html/#/measurement/vm/ubuntu + +3. questions + +a. what is the diff between qcow2 and raw? how is that contributing to the cpu utilization? + +b. I have trouble configure scsci controller when I try to following the instructions: + +<controller type='scsi' index='0' model='virtio-scsi'> +<address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/> +</controller> + +it gave me the argument is not available error + +c. for file system pass through, How to get better 4k performance? 
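To illustrate the LVM middle ground suggested above, a minimal sketch; the NVMe device name, volume group name and sizes are placeholders and not taken from this report:

# carve the NVMe namespace into host-managed logical volumes
pvcreate /dev/nvme0n1
vgcreate vg_nvme /dev/nvme0n1
lvcreate -L 200G -n vm1_disk vg_nvme

# hand one LV to the guest as a raw virtio block device
virsh attach-disk test_vm /dev/vg_nvme/vm1_disk vdb --targetbus virtio --persistent

Each guest gets its own LV for isolation, while snapshots and resizing remain available on the host side.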
+ +@IBM + +this bug report is closed as invalid in Launchpad, as it is a discussion / perf tuning, rather than a bug report. And we have provided extension consultation about it. + +Can you please turn off bugproxy comment syncing, if you wish to continue commenting on this LTC ticket internally? + +Please note in Launchpad this bug report, and comments are open and public. + diff --git a/results/classifier/108/other/1853083 b/results/classifier/108/other/1853083 new file mode 100644 index 000000000..2e3dc27e2 --- /dev/null +++ b/results/classifier/108/other/1853083 @@ -0,0 +1,167 @@ +permissions: 0.833 +semantic: 0.818 +other: 0.815 +graphic: 0.810 +debug: 0.781 +performance: 0.777 +device: 0.777 +boot: 0.748 +network: 0.737 +PID: 0.723 +vnc: 0.700 +socket: 0.670 +files: 0.610 +KVM: 0.426 + +qemu ppc64 4.0 boot AIX5.1 hung + +When boot AIX5.1 from cdrom device, qemu hung there, no further info is displayed and cpu consumption is high. + +Did this ever worked? + +No, this happened when I tried to install the AIX5.1 on qemu ppc64. + +No, I don't think that these old AIX versions ever worked in QEMU. You might be more or less lucky with later versions, though, see e.g.: + + https://bugs.launchpad.net/qemu/+bug/1829682 + +Anyway, when reporting bugs, please always provide the command line that you used to start QEMU, otherwise bugs are hardly reproducible. + +What I don't understand is ppc64 for IBM machine emulation, but qemu ppc64 can't support AIX most of the time, but can support Linux on power very well. + +I'm running this to start the AIX5.1 installation on qemu: +#!/bin/bash +qemu-system-ppc64 -cpu POWER8 -machine pseries -m 2048 -serial mon:stdio -hda aix-hdd.qcow2 -cdrom /Download/AIX5.1/VOLUME1.iso -prom-env boot-command='boot cdrom: -s verbose' + + +and it got: +[root@192 emu]# ./aix51 +VNC server running on ::1:5900 +qemu-system-ppc64: warning: TCG doesn't support requested feature, cap-cfpc=workaround +qemu-system-ppc64: warning: TCG doesn't support requested feature, cap-sbbc=workaround +qemu-system-ppc64: warning: TCG doesn't support requested feature, cap-ibs=workaround + + +SLOF ********************************************************************** +QEMU Starting + Build Date = Jul 3 2019 12:26:14 + FW Version = git-ba1ab360eebe6338 + Press "s" to enter Open Firmware. + +Populating /vdevice methods +Populating /vdevice/vty@71000000 +Populating /vdevice/nvram@71000001 +Populating /vdevice/l-lan@71000002 +Populating /vdevice/v-scsi@71000003 + SCSI: Looking for devices + 8000000000000000 DISK : "QEMU QEMU HARDDISK 2.5+" + 8200000000000000 CD-ROM : "QEMU QEMU CD-ROM 2.5+" +Populating /pci@800000020000000 + 00 0000 (D) : 1234 1111 qemu vga + 00 0800 (D) : 1033 0194 serial bus [ usb-xhci ] +Installing QEMU fb + + + +Scanning USB + XHCI: Initializing + USB Keyboard + USB mouse +No console specified using screen & keyboard + + Welcome to Open Firmware + + Copyright (c) 2004, 2017 IBM Corporation All rights reserved. + This program and the accompanying materials are made available + under the terms of the BSD License available at + http://www.opensource.org/licenses/bsd-license.php + + +Trying to load: -s verbose from: /vdevice/v-scsi@71000003/disk@8200000000000000: ... + +and just hung there, took lots of CPU time, never proceed further. + + +AIX 5.1 is quite a bit older than POWER8, so I don't think that it will run with this processor anymore. 
You could try "power5" or "970fx" as CPU (maybe even the "40p" machine instead of "pseries"), but I guess it won't make a big difference - the QEMU pseries machine has been written for later operating systems in mind, there was never a big effort to get older operating systems running with it. + +Tried POWER5, but got +[root@192 emu]# ./aix51 +qemu-system-ppc64: unable to find CPU model 'POWER5' + + +[root@192 emu]# qemu-system-ppc64 --version +QEMU emulator version 4.1.0 +Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers +[root@192 emu]# + + +With +qemu-system-ppc64 -cpu power5+ -machine pseries -m 2048 -serial mon:stdio -hda aix-hdd.qcow2 -cdrom /Download/AIX5.1/VOLUME1.iso -prom-env boot-command='boot cdrom: -s verbose' + +got: +VNC server running on ::1:5900 + + +SLOF ********************************************************************** +QEMU Starting + Build Date = Jul 3 2019 12:26:14 + FW Version = git-ba1ab360eebe6338 + Press "s" to enter Open Firmware. + +Populating /vdevice methods +Populating /vdevice/vty@71000000 +Populating /vdevice/nvram@71000001 +Populating /vdevice/l-lan@71000002 +Populating /vdevice/v-scsi@71000003 + SCSI: Looking for devices + 8000000000000000 DISK : "QEMU QEMU HARDDISK 2.5+" + 8200000000000000 CD-ROM : "QEMU QEMU CD-ROM 2.5+" +Populating /pci@800000020000000 + 00 0000 (D) : 1234 1111 qemu vga + 00 0800 (D) : 1033 0194 serial bus [ usb-xhci ] +Installing QEMU fb + + + +Scanning USB + XHCI: Initializing + + +( 700 ) Program Exception [ fff ] + + + R0 .. R7 R8 .. R15 R16 .. R23 R24 .. R31 +000000007dbf36f4 000000007e4594a8 0000000000000000 000000007dc06400 +000000007e669dc0 0000000000000100 0000000000000000 000000007dc0ae70 +000000007dc10700 000000007e45c000 000000007e466010 000000007e459488 +000000007e45c000 000000007dc50700 000000007dc0b040 0000200081021000 +0000000000000000 000000007e436000 0000000000008000 0000200081020040 +0000000000000fff 0000000000000000 000000000000f003 0000200081020070 +000000007e466008 0000000000000000 0000000000000006 0000000000000002 +0000000001180000 0000000000000000 000000007e66a050 000000007e459488 + + CR / XER LR / CTR SRR0 / SRR1 DAR / DSISR + 80000408 000000007dbf3650 000000007dbf366c 0000000000000000 +0000000000000000 0000000000000000 8000000000080000 00000000 + + +1 > + +Answering comment #4: +> What I don't understand is ppc64 for IBM machine emulation, but qemu ppc64 +> can't support AIX most of the time, but can support Linux on power very well. + +QEMU doesn't implement the full PAPR specification. Historically we've only +added the bits that are essential for a Linux guest to be happy. + +AIX 5.1 is fairly old and neither IBM, nor the QEMU community invested time +and effort in getting it to work under QEMU. AIX being a closed source OS +certainly didn't help things to go forward. + +Things have changed recently though. IBM added virtio drivers and some +workarounds to AIX, as well some fixes to QEMU. Latest AIX 7.2 releases +should now be able to run under QEMU with a POWER8 or newer CPU model. + + +[Expired for QEMU because there has been no activity for 60 days.] 
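For anyone retrying with a newer guest as the last comment suggests, a sketch that simply mirrors the command line from this report with an AIX 7.2 medium; the image and ISO names are placeholders and the exact device options may still need adjusting:

qemu-system-ppc64 -cpu POWER8 -machine pseries -m 4096 -serial mon:stdio \
    -hda aix72-hdd.qcow2 -cdrom aix72_install.iso \
    -prom-env boot-command='boot cdrom:'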
+ diff --git a/results/classifier/108/other/1853429 b/results/classifier/108/other/1853429 new file mode 100644 index 000000000..f72bb512f --- /dev/null +++ b/results/classifier/108/other/1853429 @@ -0,0 +1,43 @@ +boot: 0.772 +device: 0.746 +performance: 0.693 +other: 0.670 +graphic: 0.650 +permissions: 0.601 +network: 0.545 +vnc: 0.529 +semantic: 0.528 +KVM: 0.520 +socket: 0.483 +PID: 0.480 +files: 0.448 +debug: 0.385 + +qemu-kvm on aarch64 attach volume failed when vm is booting + +I use libvirt and qemu-kvm on aarch64 platform to attach volume to a booting vm,when the system of vm has not boot up, attach volume will be failed, after vm system booting up, attach volume can be successful. + +libvirt: 4.5.0 +qemu : 2.12.0 + +console log and qemu command of vm is in attachment. + + + + + +The QEMU project is currently considering to move its bug tracking to +another system. For this we need to know which bugs are still valid +and which could be closed already. Thus we are setting older bugs to +"Incomplete" now. + +If you still think this bug report here is valid, then please switch +the state back to "New" within the next 60 days, otherwise this report +will be marked as "Expired". Or please mark it as "Fix Released" if +the problem has been solved with a newer version of QEMU already. + +Thank you and sorry for the inconvenience. + + +[Expired for QEMU because there has been no activity for 60 days.] + diff --git a/results/classifier/108/other/1853826 b/results/classifier/108/other/1853826 new file mode 100644 index 000000000..28be04abd --- /dev/null +++ b/results/classifier/108/other/1853826 @@ -0,0 +1,446 @@ +other: 0.815 +permissions: 0.763 +device: 0.718 +semantic: 0.717 +performance: 0.690 +debug: 0.647 +KVM: 0.618 +files: 0.583 +graphic: 0.577 +socket: 0.570 +network: 0.561 +boot: 0.515 +PID: 0.498 +vnc: 0.484 + +ELF loader fails to load shared object on ThunderX2 running RHEL7 + +Simple test: +hello.c + +include <stdio.h> + +int main(int argc, char* argv[]) +{ + { + printf("Hello World... \n"); + } + return 0; +} + +when compiled with : +*Compiler +https://developer.arm.com/tools-and-software/server-and-hpc/arm-architecture-tools/arm-allinea-studio/download +Arm-Compiler-for-HPC_19.3_RHEL_7_aarch64.tar + +*Running: +1) with -armpl + armclang -armpl hello.c + ./qemu/build/aarch64-linux-user/qemu-aarch64 a.out +2) without flag + armclang hello.c + ./qemu/build/aarch64-linux-user/qemu-aarch64 a.out + +•With Docker image: + CentOS Linux release 7.7.1908 (AltArch) + +*Two different machines: + AArch64, Taishan. tsv110, Kunpeng 920, ARMv8.2-A + AArch64, Taishan 2280, Cortex-A72, ARMv8-A + +*QEMU 4.0 + qemu-aarch64 version 4.1.91 (v4.2.0-rc1) + + +Results: + + + ****Taishan 2280 Cortex-A72 + Running +1)with -armpl flag with and without the docker + WORKS-> Hello World... 
+ -> ldd a.out +ldd a.out +linux-vdso.so.1 => (0x0000ffffbc6a2000) +libamath_generic.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libamath_generic.so (0x0000ffffbc544000) +libm.so.6 => /lib64/libm.so.6 (0x0000ffffbc493000) +libastring_generic.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libastring_generic.so (0x0000ffffbc472000) libarmflang.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/libarmflang.so (0x0000ffffbbfd3000) +libomp.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/libomp.so (0x0000ffffbbef5000) +librt.so.1 => /lib64/librt.so.1 (0x0000ffffbbed4000) +libpthread.so.0 => /lib64/libpthread.so.0 (0x0000ffffbbe9f000) +libarmpl_lp64_generic.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libarmpl_lp64_generic.so (0x0000ffffb3306000) +libc.so.6 => /lib64/libc.so.6 (0x0000ffffb3180000) +libstdc++.so.6 => /scratch/gcc-9.2.0_Generic-AArch64_RHEL-8_aarch64-linux/lib64/libstdc++.so.6 (0x0000ffffb2f30000) +libgcc_s.so.1 => /scratch/gcc-9.2.0_Generic-AArch64_RHEL-8_aarch64-linux/lib64/libgcc_s.so.1 (0x0000ffffb2eff000) +libdl.so.2 => /lib64/libdl.so.2 (0x0000ffffb2ede000) +/lib/ld-linux-aarch64.so.1 (0x0000ffffbc674000) + + +Running +2) without -armpl flag with and without the docker + WORKS -> Hello World... + -> ldd a.out +ldd a.out + linux-vdso.so.1 => (0x0000ffffa6895000) +libastring_generic.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libastring_generic.so (0x0000ffffa6846000) +libc.so.6 => /lib64/libc.so.6 (0x0000ffffa66c0000) +/lib/ld-linux-aarch64.so.1 (0x0000ffffa6867000) + + +****Taishan - tsv110 Kunpeng 920 + For Running + +1)with -armpl flag with and without the docker + DOES NOT WORK -> with and without Docker + -> It shows : qemu:handle_cpu_signal received signal outside vCPU + context @ pc=0xffffaaa8844a + -> ldd a.out +ldd a.out +linux-vdso.so.1 => (0x0000ffffad4b0000) +libamath_generic.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libamath_generic.so (0x0000ffffad370000) +libm.so.6 => /lib64/libm.so.6 (0x0000ffffad2a0000) +libastring_generic.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libastring_generic.so (0x0000ffffad270000) libarmflang.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/libarmflang.so (0x0000ffffacdd0000) +libomp.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/libomp.so (0x0000ffffaccf0000) +librt.so.1 => /lib64/librt.so.1 (0x0000ffffaccc0000) +libpthread.so.0 => /lib64/libpthread.so.0 (0x0000ffffacc80000) +libarmpl_lp64_generic.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libarmpl_lp64_generic.so (0x0000ffffa40e0000) +libc.so.6 => /lib64/libc.so.6 (0x0000ffffa3f50000) +libstdc++.so.6 => /scratch/gcc-9.2.0_Generic-AArch64_RHEL-8_aarch64-linux/lib64/libstdc++.so.6 (0x0000ffffa3d00000) +libgcc_s.so.1 => /scratch/gcc-9.2.0_Generic-AArch64_RHEL-8_aarch64-linux/lib64/libgcc_s.so.1 (0x0000ffffa3cc0000) +libdl.so.2 => /lib64/libdl.so.2 (0x0000ffffa3c90000) +/lib/ld-linux-aarch64.so.1 (0x0000ffffad4c0000) + + +Running +2) without -armpl flag with and without the docker + WORKS -> Hello World.. 
+ -> ldd a.out +ldd a.out +linux-vdso.so.1 => (0x0000ffff880c0000) +libastring_generic.so => /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libastring_generic.so (0x0000ffff88080000) +libc.so.6 => /lib64/libc.so.6 (0x0000ffff87ee0000) +/lib/ld-linux-aarch64.so.1 (0x0000ffff880d0000) + +Could you invoke one of the failing and passing cases with -d page and post the results please. + + + ****Taishan 2280 Cortex-A72 + Running +1)with -armpl flag with and without the docker + armclang -armpl hello.c + ./qemu/build/aarch64-linux-user/qemu-aarch64 -d page a.out + +host mmap_min_addr=0x8000 +Reserved 0x21000 bytes of guest address space +Relocating guest address space from 0x0000000000400000 to 0x288a000 +guest_base 0x248a000 +start end size prot +0000000000400000-0000000000401000 0000000000001000 r-x +000000000041f000-0000000000421000 0000000000002000 rw- +0000004000000000-0000004000001000 0000000000001000 --- +0000004000001000-0000004000801000 0000000000800000 rw- +0000004000801000-000000400081f000 000000000001e000 r-x +000000400081f000-0000004000830000 0000000000011000 --- +0000004000830000-0000004000833000 0000000000003000 rw- +start_brk 0x0000000000000000 +end_code 0x00000000004009f4 +start_code 0x0000000000400000 +start_data 0x000000000041fd68 +end_data 0x0000000000420024 +start_stack 0x0000004000800510 +brk 0x0000000000420028 +entry 0x00000040008020e0 +argv_start 0x0000004000800518 +env_start 0x0000004000800528 +auxv_start 0x0000004000800588 +Hello World... + +****Taishan - tsv110 Kunpeng 920 + For Running + +1)with -armpl flag with and without the docker + armclang -armpl hello.c + ./qemu/build/aarch64-linux-user/qemu-aarch64 -d page a.out + +host mmap_min_addr=0x1000 +Reserved 0x30000 bytes of guest address space +Relocating guest address space from 0x0000000000400000 to 0x2890000 +guest_base 0x2490000 +start end size prot +0000000000400000-0000000000410000 0000000000010000 r-x +0000000000410000-0000000000430000 0000000000020000 rw- +0000004000000000-0000004000010000 0000000000010000 --- +0000004000010000-0000004000810000 0000000000800000 rw- +0000004000810000-0000004000830000 0000000000020000 r-x +0000004000830000-0000004000850000 0000000000020000 rw- +start_brk 0x0000000000000000 +end_code 0x00000000004009f4 +start_code 0x0000000000400000 +start_data 0x000000000041fd68 +end_data 0x0000000000420024 +start_stack 0x000000400080f560 +brk 0x0000000000420028 +entry 0x00000040008110e0 +argv_start 0x000000400080f568 +env_start 0x000000400080f578 +auxv_start 0x000000400080f5d8 +qemu:handle_cpu_signal received signal outside vCPU context @ pc=0xffffb1938536 + +As it's taking longer to get the compiler up and running on my system could you attach the failing binary along with the extra .so libs from /scratch/arm-linux-compiler/* + +For info, a similar type of failure has been seen when loading libarmflang.so on DynamoRIO: +https://github.com/DynamoRIO/dynamorio/issues/3385 +It's to do with the .dynstr section being mapped incorrectly causing a SIGBUS. + +I've attempted to replicate but it works for me: + +16:55:37 [alex@idun:~/l/t/hello-armpl] $ ~/lsrc/qemu.git/builds/all/aarch64-linux-user/qemu-aarch64 ./hello-armpl +Hello World... 
+16:55:52 [alex@idun:~/l/t/hello-armpl] $ ldd ./hello-armpl + linux-vdso.so.1 (0x0000ffffb9e78000) + libamath_generic.so => /home/alex/lsrc/tests/hello-armpl/libamath_generic.so (0x0000ffffb9d1a000) + libm.so.6 => /lib64/libm.so.6 (0x0000ffffb9c50000) + libastring_generic.so => /home/alex/lsrc/tests/hello-armpl/libastring_generic.so (0x0000ffffb9c2f000) + libarmflang.so => /home/alex/lsrc/tests/hello-armpl/libarmflang.so (0x0000ffffb97b2000) + libomp.so => /home/alex/lsrc/tests/hello-armpl/libomp.so (0x0000ffffb96d4000) + librt.so.1 => /lib64/librt.so.1 (0x0000ffffb96bc000) + libpthread.so.0 => /lib64/libpthread.so.0 (0x0000ffffb968a000) + libarmpl_lp64_generic.so => /home/alex/lsrc/tests/hello-armpl/libarmpl_lp64_generic.so (0x0000ffffb0e12000) + libc.so.6 => /lib64/libc.so.6 (0x0000ffffb0c95000) + /lib/ld-linux-aarch64.so.1 => /lib64/ld-linux-aarch64.so.1 (0x0000ffffb9e4a000) + libstdc++.so.6 => /home/alex/lsrc/tests/hello-armpl/libstdc++.so.6 (0x0000ffffb0a9c000) + libgcc_s.so.1 => /home/alex/lsrc/tests/hello-armpl/libgcc_s.so.1 (0x0000ffffb0a6b000) + libdl.so.2 => /lib64/libdl.so.2 (0x0000ffffb0a57000) + + +Hi Alex, + +So, it works in some machines and others not. Mainly in machines with RHEL OS that we found the problem. +What is the OS you are using? + +This was on Aarch64 Ubuntu 18.04 - I don't have any RHEL machines around but if you send the ld.so along with the other libraries that won't matter in replicating the fault on my x86 host. + +IIRC RHEL uses 64k pages but Ubuntu does not -- maybe that is relevant ? Is the guest binary built for 4K or 64K pages? + + +Alex, + +Do you have the licence to run the compiler library? + + +I do have a ARM HPC compiler license which I assume includes the armpl blobs that came with it. You can email me directly at my Linaro email (<email address hidden>) if you don't want to upload the test case here. + +FWIW -p 65536 doesn't trigger anything although I wouldn't trust -p too much: + +env LD_LIBRARY_PATH=/opt/arm/armpl-19.3.0_ThunderX2CN99_Ubuntu-16.04_arm-hpc-compiler_19.3_aarch64-linux/lib/:/opt/arm/arm-hpc-compiler-19.3_Generic-AArch64_Ubuntu-16.04_aarch64-linux/lib/ ~/lsrc/qemu.git/aarch64-linux-user/qemu-aarch64 -p 65536 -d page ./hello-armpl +host mmap_min_addr=0x8000 +Reserved 0x20000 bytes of guest address space +Relocating guest address space from 0x0000000000400000 to 0x400000 +guest_base 0x0 +start end size prot +0000000000400000-0000000000403000 0000000000003000 r-x +0000000000410000-0000000000413000 0000000000003000 rw- +0000004000000000-0000004000001000 0000000000001000 --- +0000004000001000-0000004000801000 0000000000800000 rw- +0000004000810000-000000400082f000 000000000001f000 r-x +000000400082f000-0000004000830000 0000000000001000 --- +0000004000830000-000000400083f000 000000000000f000 rw- +start_brk 0x0000000000000000 +end_code 0x00000000004009ec +start_code 0x0000000000400000 +start_data 0x0000000000410d68 +end_data 0x0000000000411030 +start_stack 0x00000040007ff7b0 +brk 0x0000000000411038 +entry 0x00000040008111c0 +argv_start 0x00000040007ff7b8 +env_start 0x00000040007ff7c8 +auxv_start 0x00000040007ff9a0 +Hello World... + + +If you objdump the binary and the offending library what do they seem to have been built for ? + +Certainly this: + +0000004000000000-0000004000001000 0000000000001000 --- + +looks like a 4K page when we're trying to load things, so either we got the loading wrong or the binary is 4K. + + +Do binaries have to be page size aware? I thought it was a runtime thing. 
+However if the aarch64-linux-user is hardwired to 4k it might explain it's +confusion on a 64k machine. + +On Thu, 28 Nov 2019, 16:33 Peter Maydell, <email address hidden> wrote: + +> If you objdump the binary and the offending library what do they seem to +> have been built for ? +> +> Certainly this: +> +> 0000004000000000-0000004000001000 0000000000001000 --- +> +> looks like a 4K page when we're trying to load things, so either we got +> the loading wrong or the binary is 4K. +> +> -- +> You received this bug notification because you are a member of qemu- +> devel-ml, which is subscribed to QEMU. +> https://bugs.launchpad.net/bugs/1853826 +> +> Title: +> ELF loader fails to load shared object on ThunderX2 running RHEL7 +> +> Status in QEMU: +> Incomplete +> +> Bug description: +> Simple test: +> hello.c +> +> include <stdio.h> +> +> int main(int argc, char* argv[]) +> { +> { +> printf("Hello World... \n"); +> } +> return 0; +> } +> +> when compiled with : +> *Compiler +> +> https://developer.arm.com/tools-and-software/server-and-hpc/arm-architecture-tools/arm-allinea-studio/download +> Arm-Compiler-for-HPC_19.3_RHEL_7_aarch64.tar +> +> *Running: +> 1) with -armpl +> armclang -armpl hello.c +> ./qemu/build/aarch64-linux-user/qemu-aarch64 a.out +> 2) without flag +> armclang hello.c +> ./qemu/build/aarch64-linux-user/qemu-aarch64 a.out +> +> •With Docker image: +> CentOS Linux release 7.7.1908 (AltArch) +> +> *Two different machines: +> AArch64, Taishan. tsv110, Kunpeng 920, ARMv8.2-A +> AArch64, Taishan 2280, Cortex-A72, ARMv8-A +> +> *QEMU 4.0 +> qemu-aarch64 version 4.1.91 (v4.2.0-rc1) +> +> +> Results: +> +> +> ****Taishan 2280 Cortex-A72 +> Running +> 1)with -armpl flag with and without the docker +> WORKS-> Hello World... +> -> ldd a.out +> ldd a.out +> linux-vdso.so.1 => (0x0000ffffbc6a2000) +> libamath_generic.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libamath_generic.so +> (0x0000ffffbc544000) +> libm.so.6 => /lib64/libm.so.6 (0x0000ffffbc493000) +> libastring_generic.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libastring_generic.so +> (0x0000ffffbc472000) libarmflang.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/libarmflang.so +> (0x0000ffffbbfd3000) +> libomp.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/libomp.so +> (0x0000ffffbbef5000) +> librt.so.1 => /lib64/librt.so.1 (0x0000ffffbbed4000) +> libpthread.so.0 => /lib64/libpthread.so.0 (0x0000ffffbbe9f000) +> libarmpl_lp64_generic.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libarmpl_lp64_generic.so +> (0x0000ffffb3306000) +> libc.so.6 => /lib64/libc.so.6 (0x0000ffffb3180000) +> libstdc++.so.6 => +> /scratch/gcc-9.2.0_Generic-AArch64_RHEL-8_aarch64-linux/lib64/libstdc++.so.6 +> (0x0000ffffb2f30000) +> libgcc_s.so.1 => +> /scratch/gcc-9.2.0_Generic-AArch64_RHEL-8_aarch64-linux/lib64/libgcc_s.so.1 +> (0x0000ffffb2eff000) +> libdl.so.2 => /lib64/libdl.so.2 (0x0000ffffb2ede000) +> /lib/ld-linux-aarch64.so.1 (0x0000ffffbc674000) +> +> +> Running +> 2) without -armpl flag with and without the docker +> WORKS -> Hello World... 
+> -> ldd a.out +> ldd a.out +> linux-vdso.so.1 => (0x0000ffffa6895000) +> libastring_generic.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libastring_generic.so +> (0x0000ffffa6846000) +> libc.so.6 => /lib64/libc.so.6 (0x0000ffffa66c0000) +> /lib/ld-linux-aarch64.so.1 (0x0000ffffa6867000) +> +> +> ****Taishan - tsv110 Kunpeng 920 +> For Running +> +> 1)with -armpl flag with and without the docker +> DOES NOT WORK -> with and without Docker +> -> It shows : qemu:handle_cpu_signal received +> signal outside vCPU +> context @ pc=0xffffaaa8844a +> -> ldd a.out +> ldd a.out +> linux-vdso.so.1 => (0x0000ffffad4b0000) +> libamath_generic.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libamath_generic.so +> (0x0000ffffad370000) +> libm.so.6 => /lib64/libm.so.6 (0x0000ffffad2a0000) +> libastring_generic.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libastring_generic.so +> (0x0000ffffad270000) libarmflang.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/libarmflang.so +> (0x0000ffffacdd0000) +> libomp.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/libomp.so +> (0x0000ffffaccf0000) +> librt.so.1 => /lib64/librt.so.1 (0x0000ffffaccc0000) +> libpthread.so.0 => /lib64/libpthread.so.0 (0x0000ffffacc80000) +> libarmpl_lp64_generic.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libarmpl_lp64_generic.so +> (0x0000ffffa40e0000) +> libc.so.6 => /lib64/libc.so.6 (0x0000ffffa3f50000) +> libstdc++.so.6 => +> /scratch/gcc-9.2.0_Generic-AArch64_RHEL-8_aarch64-linux/lib64/libstdc++.so.6 +> (0x0000ffffa3d00000) +> libgcc_s.so.1 => +> /scratch/gcc-9.2.0_Generic-AArch64_RHEL-8_aarch64-linux/lib64/libgcc_s.so.1 +> (0x0000ffffa3cc0000) +> libdl.so.2 => /lib64/libdl.so.2 (0x0000ffffa3c90000) +> /lib/ld-linux-aarch64.so.1 (0x0000ffffad4c0000) +> +> +> Running +> 2) without -armpl flag with and without the docker +> WORKS -> Hello World.. +> -> ldd a.out +> ldd a.out +> linux-vdso.so.1 => (0x0000ffff880c0000) +> libastring_generic.so => +> /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/clang/9.0.1/armpl_links/lib/libastring_generic.so +> (0x0000ffff88080000) +> libc.so.6 => /lib64/libc.so.6 (0x0000ffff87ee0000) +> /lib/ld-linux-aarch64.so.1 (0x0000ffff880d0000) +> +> To manage notifications about this bug go to: +> https://bugs.launchpad.net/qemu/+bug/1853826/+subscriptions +> +> + + +[Expired for QEMU because there has been no activity for 60 days.] + diff --git a/results/classifier/108/other/1853898 b/results/classifier/108/other/1853898 new file mode 100644 index 000000000..992be9af8 --- /dev/null +++ b/results/classifier/108/other/1853898 @@ -0,0 +1,55 @@ +device: 0.846 +network: 0.754 +other: 0.726 +socket: 0.656 +graphic: 0.652 +KVM: 0.627 +files: 0.597 +performance: 0.567 +permissions: 0.552 +PID: 0.546 +debug: 0.523 +semantic: 0.506 +boot: 0.456 +vnc: 0.455 + +qemu/hw/scsi/lsi53c895a.c:417: lsi_soft_reset: Assertion `QTAILQ_EMPTY(&s->queue)' failed. + +While experimenting with blkdebug I can consistently hit this assertion in lsi53c895a.c. 
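Following up on the objdump/page-size question above: the Align value of the ELF LOAD segments shows what page size a binary or library was linked for (0x10000 tolerates 64K pages, 0x1000 assumes 4K). A quick way to check, using the paths from this report:

readelf -lW a.out | grep LOAD
readelf -lW /scratch/arm-linux-compiler-19.3_Generic-AArch64_RHEL-8_aarch64-linux/lib/libarmflang.so | grep LOAD

The last field of each LOAD line is the alignment.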
+ +Using locally built from master, commit: 2061735ff09f9d5e67c501a96227b470e7de69b1 + +Steps to reproduce + +$ qemu-img create -f raw empty.raw 8G +$ sudo losetup -f --show empty.raw +$ sudo mkfs.ext3 /dev/loop0 +$ sudo losetup -D + +$ cat blkdebug.conf +[inject-error] +event = "read_aio" +errno = "5" + +$ qemu-system-x86_64 -enable-kvm -m 2048 -cpu host -smp 4 -nic user,ipv6=off,model=e1000,hostfwd=tcp::2222-:22,net=172.16.0.0/255.255.255.0 -device lsi53c895a,id=scsi -device scsi-hd,drive=hd,wwn=12345678 -drive if=scsi,id=hd,file=blkdebug:blkdebug.conf:empty.raw,cache=none,format=raw -cdrom Fedora-Server-dvd-x86_64-31-1.9.iso -display gtk + +Initiate install from cdrom ISO image results in: + +qemu-system-x86_64: /builddir/build/BUILD/qemu-3.1.1/hw/scsi/lsi53c895a.c:381: lsi_soft_reset: Assertion `QTAILQ_EMPTY(&s->queue)' failed. +Aborted (core dumped) + +The QEMU project is currently considering to move its bug tracking to +another system. For this we need to know which bugs are still valid +and which could be closed already. Thus we are setting older bugs to +"Incomplete" now. + +If you still think this bug report here is valid, then please switch +the state back to "New" within the next 60 days, otherwise this report +will be marked as "Expired". Or please mark it as "Fix Released" if +the problem has been solved with a newer version of QEMU already. + +Thank you and sorry for the inconvenience. + + +[Expired for QEMU because there has been no activity for 60 days.] + |
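When the assertion aborts with a core dump as above, a backtrace usually narrows down the failing request path; a minimal sketch, assuming the core file lands in the working directory (on systemd hosts coredumpctl may be needed to retrieve it first):

$ gdb qemu-system-x86_64 core
(gdb) bt full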