Diffstat
-rw-r--r--  results/classifier/108/other/1682     18
-rw-r--r--  results/classifier/108/other/1682093  94
-rw-r--r--  results/classifier/108/other/1682681  92
3 files changed, 204 insertions, 0 deletions
diff --git a/results/classifier/108/other/1682 b/results/classifier/108/other/1682
new file mode 100644
index 00000000..8eed5b39
--- /dev/null
+++ b/results/classifier/108/other/1682
@@ -0,0 +1,18 @@
+device: 0.910
+performance: 0.446
+network: 0.405
+boot: 0.363
+other: 0.323
+permissions: 0.297
+graphic: 0.280
+semantic: 0.268
+files: 0.223
+socket: 0.133
+debug: 0.118
+vnc: 0.088
+PID: 0.050
+KVM: 0.009
+
+QEMU-USER macOS support
+Additional information:
+
diff --git a/results/classifier/108/other/1682093 b/results/classifier/108/other/1682093
new file mode 100644
index 00000000..3ea4d1c2
--- /dev/null
+++ b/results/classifier/108/other/1682093
@@ -0,0 +1,94 @@
+other: 0.916
+performance: 0.911
+semantic: 0.910
+device: 0.898
+permissions: 0.898
+PID: 0.889
+graphic: 0.889
+debug: 0.880
+vnc: 0.850
+boot: 0.843
+KVM: 0.842
+socket: 0.841
+files: 0.840
+network: 0.837
+
+aarch64-softmmu "bad ram pointer" crash
+
+I am developing a piece of software called SimBench, a benchmarking system for full-system simulators. I am currently porting it to aarch64, using QEMU as a test platform.
+
+I have encountered a 'bad ram pointer' crash. I've attempted to build a minimal test case, but I haven't managed to replicate the behaviour in isolation, so I've created a branch of my project which exhibits the crash: https://bitbucket.org/Awesomeclaw/simbench/get/qemu-bug.tar.gz
+
+The package can be compiled using:
+
+make
+
+and then run using:
+
+qemu-system-aarch64  -M virt -m 512 -cpu cortex-a57 -kernel out/armv8/virt/simbench -nographic
+
+I have replicated the issue in both qemu 2.8.1 and in 2.9.0-rc3, on Fedora 23. Please let me know if you need any more information or any logs/core dumps/etc.
+
+I've done some investigation and it appears that this bug is caused by the following:
+
+1. The flash memory of the virt platform is initialised as a cfi.pflash01. It has a memory region with romd_mode = true and rom_device = true
+
+2. Some code stored in the flash memory is executed. This causes the memory to be loaded into the TLB.
+
+3. The code is overwritten. This causes the romd_mode of the flash memory to be reset. It also causes the code to be evicted from the TLB.
+
+4. An attempt is made to execute the code again (cpu_exec(), cpu-exec.c:677)
+4a. Eventually, QEMU attempts to refill the TLB (softmmu_template.h:127)
+4b. We try to fill in the tlb entry (tlb_set_page_with_attrs, cputlb.c:602)
+4c. The flash memory no longer appears to be RAM or a ROM device (cputlb.c:632)
+4d. QEMU decides that the flash memory is an IO device (cputlb.c:634)
+4e. QEMU aborts while trying to fill in the rest of the TLB entry (qemu_ram_addr_from_host_nofail)
+
+I have built an MWE (which I have attached) which produces this behaviour in git head. I'm not exactly sure what a fix for this should look like: AFAIK it's not technically valid to write into flash, but I'm not sure that QEMU crashing should be considered correct behaviour.
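+
+For reference, here is a minimal bare-metal C sketch of the kind of sequence described in steps 1-4. It is not the attached MWE; the flash base address (the virt board maps its first CFI flash bank at physical address 0), the 0x70 command byte, and the assumption that the code itself runs out of that flash bank are illustrative choices only:
+
+    /* hypothetical_trigger.c - illustrative sketch, not the attached MWE */
+    #include <stdint.h>
+
+    #define FLASH_BASE 0x00000000UL  /* assumption: virt CFI flash bank 0 at address 0 */
+
+    static volatile uint8_t *const flash = (volatile uint8_t *)FLASH_BASE;
+
+    /* Assumed to have been copied into, and to be running from, the flash bank. */
+    void overwrite_and_reexecute(void)
+    {
+        /* Step 3: any write while the device is idle (here 0x70, "read status
+         * register") makes pflash_cfi01 clear romd_mode and flush the TLB
+         * entries that mapped the flash as ROM. */
+        flash[0] = 0x70;
+
+        /* Step 4: the next instruction fetch from the flash region now takes
+         * the IO path, and the TLB refill aborts in
+         * qemu_ram_addr_from_host_nofail with "bad ram pointer". */
+    }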
+
+On 12 April 2017 at 16:02, Harry Wagstaff <email address hidden> wrote:
+> I've done some investigation and it appears that this bug is caused by
+> the following:
+>
+> 1. The flash memory of the virt platform is initialised as a
+> cfi.pflash01. It has a memory region with romd_mode = true and
+> rom_device = true
+>
+> 2. Some code stored in the flash memory is executed. This causes the
+> memory to be loaded into the TLB.
+>
+> 3. The code is overwritten. This causes the romd_mode of the flash
+> memory to be reset. It also causes the code to be evicted from the TLB.
+>
+> 4. An attempt is made to execute the code again (cpu_exec(), cpu-exec.c:677)
+> 4a. Eventually, QEMU attempts to refill the TLB (softmmu_template.h:127)
+> 4b. We try to fill in the tlb entry (tlb_set_page_with_attrs, cputlb.c:602)
+> 4c. The flash memory no longer appears to be RAM or a ROM device (cputlb.c:632)
+> 4d. QEMU decides that the flash memory is an IO device (cputlb.c:634)
+> 4e. QEMU aborts while trying to fill in the rest of the TLB entry (qemu_ram_addr_from_host_nofail)
+
+Yeah, this is a known bug -- but fixing it would just mean that
+we would print the slightly more helpful message about the
+guest attempting to execute from something that isn't RAM
+or ROM before exiting.
+
+See for instance this thread from January.
+https://lists.nongnu.org/archive/html/qemu-devel/2017-01/msg00674.html
+
+> I have built an MWE (which I have attached) which produces this behaviour
+> in git head. I'm not exactly sure what a fix for this should look like:
+> AFAIK it's not technically valid to write into flash, but I'm not sure
+> that QEMU crashing should be considered correct behaviour.
+
+You should fix your guest so that it doesn't try to execute
+from flash without putting the flash back into the mode you
+can execute from...
+
+Writing to the flash device is permitted -- it's how
+you program it (you write command bytes to it, and
+read back responses and status and so on).
+
+thanks
+-- PMM
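+
+For illustration, a sketch of the sequence described in the reply above: program the flash by writing command bytes and polling the status, then put the device back into read-array mode before executing from it again. The base address, the command values (0x40 program, 0xFF read array) and the byte-wide accesses are assumptions based on the Intel-style CFI command set that pflash_cfi01 models, not code taken from this report:
+
+    #include <stdint.h>
+
+    #define FLASH_BASE 0x00000000UL      /* assumption: virt CFI flash bank 0 */
+
+    static volatile uint8_t *const flash = (volatile uint8_t *)FLASH_BASE;
+
+    static void flash_program_byte(uintptr_t off, uint8_t val)
+    {
+        flash[off] = 0x40;               /* "word program" command byte */
+        flash[off] = val;                /* data to program */
+        while (!(flash[off] & 0x80)) {   /* reads now return the status register; */
+            ;                            /* bit 7 set means the operation finished */
+        }
+    }
+
+    static void flash_back_to_read_array(void)
+    {
+        /* "Read array" command: pflash_cfi01 sets romd_mode again, so it is
+         * safe to jump back into the flash and execute from it afterwards. */
+        flash[0] = 0xFF;
+    }
+
+Only after the 0xFF write does an instruction fetch from the flash region go through the ROMD path again instead of the IO path.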
+
+
diff --git a/results/classifier/108/other/1682681 b/results/classifier/108/other/1682681
new file mode 100644
index 00000000..56706060
--- /dev/null
+++ b/results/classifier/108/other/1682681
@@ -0,0 +1,92 @@
+permissions: 0.801
+performance: 0.782
+semantic: 0.770
+graphic: 0.769
+network: 0.747
+PID: 0.732
+device: 0.716
+other: 0.708
+debug: 0.681
+files: 0.651
+socket: 0.642
+boot: 0.638
+vnc: 0.577
+KVM: 0.491
+
+qemu 2.5 network model rtl8139 collisions Ubuntu 16.04.2 LTS
+
+When I use the rtl8139 NIC model, I see a lot of collisions and very low transfer rates.
+I tested this with both brctl and Open vSwitch, because I initially thought it was a vSwitch issue.
+When I change the NIC model to virtio, everything works as expected.
+
+Host: Ubuntu 16.04.2 LTS
+Guest: Ubuntu 14.04.5 LTS - affected
+Guest: Ubuntu 16.04.2 LTS - not affected
+
+QEMU emulator version 2.5.0 (Debian 1:2.5+dfsg-5ubuntu10.10)
+
+Thanks Thomas for reassigning, and hi Bartłomeij,
+Btw - I'd very much recommend using virtio over the rtl driver anyway [1], but that is not the point here.
+
+Thanks for retesting with brctl and moving OVS out of the equation already.
+The difference is certainly in the guest drivers for that network card between the 14.04 and 16.04 guests.
+
+I checked the changes between the respective kernels and there were not that many for the drivers themselves, at least; mostly bug fixes. While it could be anything else in the kernels, this is certainly worth a quick test. There is one change in particular which could be interesting: it enabled TSO offloading by default.
+You can check what the offloads currently are in your guests with
+ $ ethtool -k <device>
+Please check whether more settings differ than just TSO (the list usually grows the newer things get).
+Then, on the 16.04 guest, modify the settings one by one to match what you have seen on the 14.04 guest.
+If you happen to find a single offload feature that switches between good and bad behavior, please get back here.
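+
+For completeness, a small C sketch of the query that "ethtool -k" performs for the TSO bit, using the legacy ETHTOOL_GTSO ioctl. The interface name eth0 is only an example and the program is illustrative, not part of the requested test:
+
+    #include <stdio.h>
+    #include <string.h>
+    #include <sys/ioctl.h>
+    #include <sys/socket.h>
+    #include <net/if.h>
+    #include <linux/ethtool.h>
+    #include <linux/sockios.h>
+
+    int main(void)
+    {
+        struct ifreq ifr;
+        struct ethtool_value eval;
+        int fd = socket(AF_INET, SOCK_DGRAM, 0);
+
+        memset(&ifr, 0, sizeof(ifr));
+        strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* example interface name */
+        eval.cmd = ETHTOOL_GTSO;                      /* query TCP segmentation offload */
+        ifr.ifr_data = (void *)&eval;
+
+        if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
+            perror("SIOCETHTOOL");
+            return 1;
+        }
+        printf("tcp-segmentation-offload: %s\n", eval.data ? "on" : "off");
+        return 0;
+    }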
+
+Furthermore, we can exclude other packages here by using the HWE kernels [2]. Could you confirm that the 14.04 guest booted with the HWE-x kernel shows the same bad behavior?
+That would exclude everything in 16.04 other than the kernel as the cause of the difference.
+
+If so, it would be great to further shrink the range we are looking at by trying the intermediate HWE kernels.
+To do so, take the 14.04 guest and install the packages
+linux-image-virtual-lts-utopic, linux-image-virtual-lts-vivid, linux-image-virtual-lts-wily and linux-image-virtual-lts-xenial. Then modify the boot loader (or select interactively at the prompt) to boot one after the other, and check your results as well as the possibly related offload settings mentioned above.
+
+Also, to better reproduce this, could you outline what kind/direction of workload you are testing:
+- Is it guest-to-guest, or traffic from the outside?
+- What network traffic are you using: can it be generated with net tools from the archive, or only with a custom workload in your setup?
+
+Summary:
+- please verify that the same happens on 14.04 with the HWE-x kernel (go on with that setup if it shows the issue)
+- please check the different HWE levels to find which is the first to show the issue (go on with the oldest HWE kernel that shows the issue)
+- please compare and test different offload settings as outlined above
+- please describe your workload in more detail so that we can try to reproduce it
+
+[1]: http://www.linux-kvm.org/page/Tuning_KVM
+[2]: https://wiki.ubuntu.com/Kernel/LTSEnablementStack
+
+A quick check on a Trusty guest, modified from the uvt default of virtio to use rtl8139 and then moved to the kernel that is the likely reason, shows me the following for a trivial single-connection duplex iperf streaming load to the host:
+
+   Release  Nettype   Kernel       Result
+1. 14.04    virtio    3.13.0-116   ~11 + 8 Gbit/s
+2. 14.04    rtl8139   3.13.0-116   124 + 824 Mbit/s
+3. 14.04    rtl8139   4.4.0-72     758 + 703 Mbit/s
+4. 14.04    rtl8139   4.4.0-72     115 + 795 Mbit/s
+
+Notes:
+On #2: I already see 13k receive drops here.
+On #3: I can confirm that the TSO, GSO, SG and IP checksum offloads are on as expected; they help speed up my load despite now seeing 26k receive drops.
+On #4: slow again, back to ~14k drops.
+
+Note: the offloads can be disabled via:
+$ sudo ethtool -K eth0 tx-tcp-segmentation off
+$ sudo ethtool -K eth0 tx-checksum-ipv4 off
+$ sudo ethtool -K eth0 tx-scatter-gather off
+
+There is quite a chance that, while enabling those offloads is generally much better behavior, it is a drawback in your specific case. In that case, please test with the offloads disabled and help to clarify the details I asked for.
+
+Yet overall, IMHO - as I stated in my first comment - I'd strongly vote to use the virtio driver and be much faster than any rtl8139-based network would be.
+
+The affected Ubuntu 14.04.5 LTS guest has the HWE kernel 4.4.0-72-generic.
+
+Usually I use virtio, but that was the first time I used vSwitch, and I preferred to start with rtl8139.
+
+Because that is my production environment, I need to prepare new VMs. It will take me a few days.
+
+
+[Expired for qemu (Ubuntu) because there has been no activity for 60 days.]
+