path: root/results/classifier/108/none/185
Diffstat (limited to '')
-rw-r--r--  results/classifier/108/none/185        16
-rw-r--r--  results/classifier/108/none/1851552   334
-rw-r--r--  results/classifier/108/none/1854204    67
-rw-r--r--  results/classifier/108/none/1855002    49
-rw-r--r--  results/classifier/108/none/1855617    76
-rw-r--r--  results/classifier/108/none/1856399    61
6 files changed, 603 insertions, 0 deletions
diff --git a/results/classifier/108/none/185 b/results/classifier/108/none/185
new file mode 100644
index 00000000..bab3cafc
--- /dev/null
+++ b/results/classifier/108/none/185
@@ -0,0 +1,16 @@
+semantic: 0.381
+boot: 0.300
+debug: 0.284
+performance: 0.273
+graphic: 0.244
+KVM: 0.154
+vnc: 0.142
+network: 0.115
+device: 0.062
+PID: 0.021
+permissions: 0.008
+other: 0.002
+socket: 0.001
+files: 0.000
+
+Coroutines: Audit use of "coroutine_fn" specifier
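+
+(For context: coroutine_fn marks functions that may only run in coroutine context. A minimal illustration of the property such an audit checks, based on QEMU's public coroutine API in qemu/coroutine.h; the function below is a made-up example, not code from the audit itself:)
+
+    #include "qemu/coroutine.h"
+
+    /* coroutine_fn: this function may yield, so it must only be entered
+     * from inside a coroutine. Reaching it from ordinary (non-coroutine)
+     * context, directly or through a chain of non-coroutine_fn callers,
+     * is exactly the kind of misuse the audit looks for. */
+    static void coroutine_fn co_step(void *opaque)
+    {
+        qemu_coroutine_yield();
+    }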
diff --git a/results/classifier/108/none/1851552 b/results/classifier/108/none/1851552
new file mode 100644
index 00000000..4b0309a2
--- /dev/null
+++ b/results/classifier/108/none/1851552
@@ -0,0 +1,334 @@
+graphic: 0.306
+network: 0.297
+PID: 0.295
+other: 0.291
+performance: 0.290
+permissions: 0.288
+boot: 0.284
+semantic: 0.268
+debug: 0.256
+device: 0.255
+files: 0.225
+socket: 0.194
+vnc: 0.188
+KVM: 0.170
+
+Since the Ubuntu 18 Bionic release and later, the Ubuntu 18 cloud image is unable to boot up as an OpenStack instance
+
+OpenStack Queens release running on Ubuntu 18 LTS controller and compute nodes.
+Tried to boot up the instance via the Horizon dashboard without success.
+The Nova flow works perfectly.
+When accessing the console I discovered that the boot process got stuck in the middle.
+[ TIME ] Timed out waiting for device dev-vdb.device.
+[DEPEND] Dependency failed for /mnt.
+[DEPEND] Dependency failed for File System Check on /dev/vdb.
+It receives an IP but it looks as if it does not get configured in time.
+Since Ubuntu 18 there is the netplan feature managing the network interfaces.
+Please advise.
+
+More details follow:
+https://bugs.launchpad.net/networking-calico/+bug/1851548
+
+Hello,
+
+Could you please elaborate a bit more on how you came to the conclusion that the problem is caused specifically by cloud-init? Without some more context information it's difficult for us to tell if this is actually a bug and to begin working on it.
+
+If you think this is actually a problem with cloud-init, could you please run `cloud-init collect-logs` and attach the generated tarball to this bug report? The collected logs will help us understand what's going on.
+
+I'm marking this report as Incomplete for the moment, please change its status back to New after providing additional information. Thanks!
+
+Instance creation was working until Calico was configured on the controller and compute nodes. Ubuntu 16 and CentOS releases boot up successfully. As is known, Ubuntu has used netplan since 18 and all later releases. I'm not sure whether the issue is with cloud-init or with the order or timing of the boot process as it relates to getting an IP in time and properly.
+
+
+The problem is in:
+
+[ OK ] Started Wait for Network to be Configured.
+
+
+I used the latest official Ubuntu Bionic image:
+http://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
+
+And the regular OpenStack command:
+https://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-provider.html
+
+openstack server create --flavor ubuntu-flavor --image ubuntu-bionic-latest \
+ --nic net-id=c9d82a5d-e075-4d66-8ecd-1092fa218ad7 --security-group allow_all \
+ --key-name cloud-keypair.private ubuntu-bionic-instance
+
+More details follow:
+https://bugs.launchpad.net/networking-calico/+bug/1851548
+
+See the full log at https://etherpad.openstack.org/p/ubuntu-xenial-log for the successful creation of an instance with the latest ubuntu-xenial cloud image http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
+
+See the full log at https://etherpad.openstack.org/p/ubuntu-bionic-log for the NOT SUCCESSFUL creation of an instance with the latest ubuntu-bionic cloud image http://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
+
+Hi.
+Please attach the output of 'cloud-init collect-logs'. Ideally from the 18.04 instance, but the 16.04 instance would be fine if you're not able to get it from 18.04.
+
+Then, set the status of this bug back to New.
+
+thanks.
+
+
+See the attached collected cloud-init logs from Ubuntu Xenial.
+
+
+Hi Vasili, unfortunately there isn't enough info in the 16.04 logs to help us work out what's going on with 18.04. Do you have any way of accessing an 18.04 instance (serial console, perhaps?) that would allow you to gather more data?
+
+Moving this back to Incomplete for now, apologies for the round trips!
+
+> [DEPEND] Dependency failed for File System Check on /dev/vdb.
+
+Looking at the bionic log you posted, it never gets a /dev/vdb device. Can you confirm that the VM on the compute node was correctly configured with an ephemeral block device?
+
+
+Here we can see that not all of the expected block devices are present:
+
+[ OK ] Started udev Coldplug all Devices.
+[ *  ] (1 of 3) A start job is running for …label-UEFI.device (19s / 1min 30s)
+
+
+Also, looking at the 16.04 boot, it looks like this is nested virtualization; I can see in the journal that the xenial kernel is hitting this one:
+
+Nov 24 16:18:43.138019 ubuntu kernel: ------------[ cut here ]------------
+Nov 24 16:18:43.138390 ubuntu kernel: WARNING: CPU: 0 PID: 0 at /build/linux-mU1Buo/linux-4.4.0/arch/x86/kernel/fpu/xstate.c:517 fpu__init_system_xstate+0x37e/0x764()
+Nov 24 16:18:43.138624 ubuntu kernel: XSAVE consistency problem, dumping leaves
+Nov 24 16:18:43.147521 ubuntu kernel: Modules linked in:
+Nov 24 16:18:43.147832 ubuntu kernel:
+Nov 24 16:18:43.148048 ubuntu kernel: CPU: 0 PID: 0 Comm: swapper Not tainted 4.4.0-169-generic #198-Ubuntu
+Nov 24 16:18:43.148268 ubuntu kernel: 0000000000000086 a2c4204db3cb6ecb ffffffff81e03d80 ffffffff8140c8e1
+Nov 24 16:18:43.148582 ubuntu kernel: ffffffff81e03dc8 ffffffff81cb3c68 ffffffff81e03db8 ffffffff81086492
+Nov 24 16:18:43.148788 ubuntu kernel: 0000000000000008 0000000000000440 0000000000000040 ffffffff81e03e4c
+Nov 24 16:18:43.149274 ubuntu kernel: Call Trace:
+Nov 24 16:18:43.149480 ubuntu kernel: [<ffffffff8140c8e1>] dump_stack+0x63/0x82
+Nov 24 16:18:43.149687 ubuntu kernel: [<ffffffff81086492>] warn_slowpath_common+0x82/0xc0
+Nov 24 16:18:43.149892 ubuntu kernel: [<ffffffff8108652c>] warn_slowpath_fmt+0x5c/0x80
+Nov 24 16:18:43.150100 ubuntu kernel: [<ffffffff81081f86>] ? xfeature_size+0x59/0x77
+Nov 24 16:18:43.150343 ubuntu kernel: [<ffffffff81f783f1>] fpu__init_system_xstate+0x37e/0x764
+Nov 24 16:18:43.150549 ubuntu kernel: [<ffffffff81f68120>] ? early_idt_handler_array+0x120/0x120
+Nov 24 16:18:43.150756 ubuntu kernel: [<ffffffff81f77dfa>] fpu__init_system+0x1e7/0x28e
+Nov 24 16:18:43.159459 ubuntu kernel: [<ffffffff81f790a6>] early_cpu_init+0x2b6/0x2bb
+Nov 24 16:18:43.159715 ubuntu kernel: [<ffffffff81f790a6>] ? early_cpu_init+0x2b6/0x2bb
+Nov 24 16:18:43.160030 ubuntu kernel: [<ffffffff81f7439d>] setup_arch+0xc0/0xd1f
+Nov 24 16:18:43.160240 ubuntu kernel: [<ffffffff81f68120>] ? early_idt_handler_array+0x120/0x120
+Nov 24 16:18:43.160528 ubuntu kernel: [<ffffffff81f68c16>] start_kernel+0xe2/0x4a4
+Nov 24 16:18:43.160737 ubuntu kernel: [<ffffffff81f68120>] ? early_idt_handler_array+0x120/0x120
+Nov 24 16:18:43.160926 ubuntu kernel: [<ffffffff81f682da>] x86_64_start_reservations+0x2a/0x2c
+Nov 24 16:18:43.161133 ubuntu kernel: [<ffffffff81f68426>] x86_64_start_kernel+0x14a/0x16d
+Nov 24 16:18:43.161624 ubuntu kernel: ---[ end trace 4d5ff9f2f68c4233 ]---
+
+
+https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1829555
+
+
+
+
+Hi,
+Thanks for the shared investigation details.
+Given the setup I use, could you help me understand where and what I'm missing here that could lead to the issues you mentioned?
+
+### Working setup details ###
+
+[1] The Openstack service installed on Controller and Compute with Ubuntu 18.04.2
+With Minimal deployment for Queens:
+
+Keystone
+Glance
+Nova
+Neutron
+Horizon
+
+Reference:
+https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-queens
+
+All the configuration was done according to the guide (the reference I mentioned).
+
+[2] The add-ons features included to Openstack are:
+
+Calico driver plugin for Neutron
+Calico Bird for BGP Peering
+Calico Felix for Security
+Calico DHCP Agent instead of Neutron DHCP
+
+Reference:
+https://docs.projectcalico.org/v3.10/getting-started/openstack/
+
+
+
+Thanks,
+Vasili
+
+Status changed to 'Confirmed' because the bug affects multiple users.
+
+For your OpenStack deployment, are you running on bare metal?
+Are you deploying something like devstack or triple-o, which enable nested virtualization?
+
+https://docs.openstack.org/devstack/latest/guides/devstack-with-nested-kvm.html
+https://docs.openstack.org/tripleo-quickstart/latest/unprivileged.html
+https://tripleo-docs.readthedocs.io/en/latest/environments/virtual.html
+
+Nope, no devstack or tripleo; everything is straightforward, as I mentioned previously. I installed all those services manually on the controller and compute nodes and connected them to an L3 switch with BGP via bird.
+
+mariadb, rabbitmq, memcached, etcd, keystone, glance, neutron, nova, calico-driver, calico-felix, calico-dhcp, nova-api-metadata, bird
+
+Any clue on the issue?
+
+Thanks,
+Vasili
+
+Unfortunately no; the kernel messages are very much related to nested virtualization, but I don't know where in your software stack it gets configured/enabled.
+
+I'm marking the cloud-init task invalid as at this time the logs point to a nested virtualization/openstack issue with devices not being present; not related to cloud-init. If further investigation points to an issue with cloud-init you can move the cloud-init task back to New.
+
+There is no nested virtualization; all of OpenStack is on bare metal with a regular installation and regular services. The only extra thing running is Calico, which replaces Neutron ML2, metadata and DHCP; it runs with the Calico plugin, calico-dhcp and Calico Felix. In addition, nova-api-metadata is available on each compute node.
+
+How can the devices be made present? Could you advise on further steps of investigation?
+
+
+Best,
+Vasili
+
+
+> On 9 Jan 2020, at 2:31, Ryan Harper <email address hidden> wrote:
+>
+> I'm marking the cloud-init task invalid as at this time the logs point
+> to a nested virtualization/openstack issue with devices not being
+> present; not related to cloud-init. If further investigation points to
+> an issue with cloud-init you can move the cloud-init task back to New.
+
+
+Hi Vasili,
+
+From a cloud-init perspective, there isn't anything we can do so I'm going to move the upstream task to Invalid too. I'm afraid I don't really have any advice on how to proceed, as this appears to be a hypervisor or cloud issue.
+
+
+Dan
+
+In the Rocky release I'm not experiencing these kinds of issues. And make sure you use KVM and not plain QEMU emulation, because QEMU alone is limited in its performance and KVM was just born to work with the latest hardware :)
+
+Best,
+Vasili
+
+> On 14 Jan 2020, at 20:35, Dan Watkins <email address hidden> wrote:
+>
+> From a cloud-init perspective, there isn't anything we can do so I'm
+> going to move the upstream task to Invalid too. I'm afraid I don't
+> really have any advice on how to proceed, as this appears to be a
+> hypervisor or cloud issue.
+
+
+I honestly don't see any evidence of broken behaviour in Nova, particularly if other instances with other guest images using cloud-init can boot correctly.
+
+Please provide us some logs or, better, a trace of a potential Nova problem so that we can classify the potential root cause and a possible solution; in the meantime I'll have to close this bug from the Nova point of view. You can reopen it by changing its status to New.
+
+I don't believe this is to do with networking-calico, so will mark as Invalid for networking-calico.
+
+Tracked in Github Issues as https://github.com/canonical/cloud-init/issues/3491
+
diff --git a/results/classifier/108/none/1854204 b/results/classifier/108/none/1854204
new file mode 100644
index 00000000..adf31ade
--- /dev/null
+++ b/results/classifier/108/none/1854204
@@ -0,0 +1,67 @@
+device: 0.465
+graphic: 0.373
+other: 0.347
+semantic: 0.314
+socket: 0.250
+performance: 0.216
+debug: 0.197
+permissions: 0.184
+boot: 0.171
+vnc: 0.151
+files: 0.149
+PID: 0.148
+network: 0.144
+KVM: 0.066
+
+Menu is not clickable on OSX Catalina
+
+1) Run `qemu-system-x86_64`
+2) Try to click on the main menu
+
+The menu is not clickable until another window is activated and the QEMU window is activated again.
+
+Does this reproduce on earlier pre-Catalina OSX versions, or is it Catalina-specific?
+
+This is probably not going to be addressed unless somebody wants to investigate and write a patch for it -- OSX support in QEMU is only very barely maintained.
+
+
+Catalina-specific. It also affects several UI toolkits. This workaround https://stackoverflow.com/a/7602677 seems to help.
+
+Thanks for the research. That looks like an OSX bug to me -- the simple two-line set of operations to do this that previously worked now needs some horribly complicated multi-stage sequence, and the first suggested workaround is doing something strange involving the Finder and a 0.1 second delay which I am definitely suspicious of. Apple should fix their OS to stop breaking applications :-)
+
+The second suggested approach is essentially to add:
+
+ SetSystemUIMode(kUIModeNormal, 0);
+ [NSApp activateIgnoringOtherApps:YES];
+
+which for us probably would want to go before the [NSApp run] (or at least after we've created the sharedApplication).
+
+It's a bit odd also that the stackoverflow question says this only happens "if you don't call [TransformProcessType] early enough", because QEMU calls that about as early as it is feasibly possible to do, very near the top of main() in cocoa.m.
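+
+A hedged sketch of where the suggested two calls could sit in cocoa.m's main(); this only reflects the ordering implied above (create the shared application, apply the activation workaround, then enter the run loop), not a committed fix:
+
+    /* cocoa.m, early in main() -- sketch only */
+    [NSApplication sharedApplication];
+
+    /* Catalina workaround from the stackoverflow answer above */
+    SetSystemUIMode(kUIModeNormal, 0);
+    [NSApp activateIgnoringOtherApps:YES];
+
+    /* ... window/menu setup ... */
+    [NSApp run];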
+
+
+The workaround on SO was posted way back in 2011. It initially solved a different issue, I think.
+
+The QEMU project is currently considering to move its bug tracking to
+another system. For this we need to know which bugs are still valid
+and which could be closed already. Thus we are setting older bugs to
+"Incomplete" now.
+
+If you still think this bug report here is valid, then please switch
+the state back to "New" within the next 60 days, otherwise this report
+will be marked as "Expired". Or please mark it as "Fix Released" if
+the problem has been solved with a newer version of QEMU already.
+
+Thank you and sorry for the inconvenience.
+
+
+Still a bug.
+
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/225
+
+
diff --git a/results/classifier/108/none/1855002 b/results/classifier/108/none/1855002
new file mode 100644
index 00000000..d6220537
--- /dev/null
+++ b/results/classifier/108/none/1855002
@@ -0,0 +1,49 @@
+device: 0.578
+graphic: 0.453
+socket: 0.367
+performance: 0.365
+other: 0.304
+vnc: 0.239
+boot: 0.239
+PID: 0.231
+permissions: 0.227
+network: 0.212
+debug: 0.212
+files: 0.151
+semantic: 0.121
+KVM: 0.028
+
+Cannot boot arm kernel images on s390x
+
+While running the acceptance tests on s390x, the arm tests under qemu/tests/acceptance/boot_linux_console.py time out, except for the test using u-boot. All the arm tests run without problems on x86 and ppc.
+
+These tests boot the kernel and wait for a kernel panic to make sure QEMU can boot that kind of kernel on the host running the test. The URLs for the kernels are available inside the Python test code, but I'm listing them here:
+
+Fail: https://archives.fedoraproject.org/pub/archive/fedora/linux/releases/29/Everything/armhfp/os/images/pxeboot/vmlinuz
+Fail: http://archive.raspberrypi.org/debian/pool/main/r/raspberrypi-firmware/raspberrypi-kernel_1.20190215-1_armhf.deb
+Fail: https://snapshot.debian.org/archive/debian/20190928T224601Z/pool/main/l/linux/linux-image-4.19.0-6-armmp_4.19.67-2+deb10u1_armhf.deb
+Pass: https://raw.githubusercontent.com/Subbaraya-Sundeep/qemu-test-binaries/fa030bd77a014a0b8e360d3b7011df89283a2f0b/spi.bin
+
+I tried to manually investigate the problem with the first kernel in the list. The command I used to try to boot it was:
+
+/home/linux1/src/v4.2.0-rc3/bin/qemu-system-arm -serial stdio -machine virt \
+  -kernel /home/linux1/venv/python3/data/cache/by_location/1d5fdf8018e79b806aa982600c0866b199946efc/vmlinuz \
+  -append "printk.time=0 console=ttyAMA0"
+
+On an x86 machine, I can see it boots and ends with a kernel panic as expected. On s390x, it just hangs.
+
+I also tried to debug with gdb, redirecting the monitor and the serial console to other terminal sessions, without success.
+
+The QEMU version is the latest as of today, tag v4.2.0-rc4, commit 1bdc319ab5d289ce6b822e06fb2b13666fd9278e.
+
+The s390x system is Red Hat Enterprise Linux Server 7.7 running as a z/VM 6.4.0 guest at the IBM LinuxONE Community Cloud.
+
+The x86 system is Fedora 31 running on an Intel i7-8650U.
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/187
+
+
diff --git a/results/classifier/108/none/1855617 b/results/classifier/108/none/1855617
new file mode 100644
index 00000000..8269d751
--- /dev/null
+++ b/results/classifier/108/none/1855617
@@ -0,0 +1,76 @@
+other: 0.529
+graphic: 0.526
+PID: 0.500
+device: 0.495
+permissions: 0.484
+network: 0.481
+performance: 0.442
+semantic: 0.442
+vnc: 0.442
+KVM: 0.435
+files: 0.418
+socket: 0.380
+boot: 0.366
+debug: 0.318
+
+savevm with hax saves wrong register state
+
+I use qemu-i386 with Intel HAXM on a Windows 10 x64 host with a Windows 7 x86 guest. I run the guest until the OS loads and create a snapshot with savevm, then close QEMU, run it again and try to load the snapshot with loadvm. The guest crashes or freezes. I dumped the registers on snapshot creation and loading (in HAXM) and found that they are different.
+When returning from HAXM in hax_vcpu_hax_exec, there is no regular register read. I found the hax_arch_get_registers function, which reads registers from HAXM and is called from a synchronization procedure. I placed a breakpoint on it, ran QEMU and found that it is hit only once, during guest OS boot. Exactly those stale registers were saved in the snapshot.
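+
+A hedged sketch of the shape of fix this suggests (not the actual patch): savevm serializes QEMU's own copy of the CPU state, so the accelerator must be asked for the live registers before that copy is read. Assuming the hax_arch_get_registers function named above and QEMU's CPUState fields:
+
+    /* Sketch: make QEMU's CPUState mirror the registers HAXM holds
+     * before anything (savevm, gdbstub, ...) reads them. */
+    static void hax_cpu_synchronize_state(CPUState *cpu)
+    {
+        if (!cpu->vcpu_dirty) {
+            hax_arch_get_registers(cpu->env_ptr); /* pull regs from HAXM */
+            cpu->vcpu_dirty = true;  /* QEMU's copy is now authoritative */
+        }
+    }
+
+A hook like this would need to run before the snapshot code walks the CPU state; whether through the existing synchronization procedure or a new call site is exactly what the report leaves open.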
+
+cc'ing Colin and Yu for Hax info:
+
+* Alex (<email address hidden>) wrote:
+> [full bug description quoted as above]
+--
+Dr. David Alan Gilbert / <email address hidden> / Manchester, UK
+
+
+
+The QEMU project is currently considering to move its bug tracking to
+another system. For this we need to know which bugs are still valid
+and which could be closed already. Thus we are setting older bugs to
+"Incomplete" now.
+
+If you still think this bug report here is valid, then please switch
+the state back to "New" within the next 60 days, otherwise this report
+will be marked as "Expired". Or please mark it as "Fix Released" if
+the problem has been solved with a newer version of QEMU already.
+
+Thank you and sorry for the inconvenience.
+
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/188
+
+
diff --git a/results/classifier/108/none/1856399 b/results/classifier/108/none/1856399
new file mode 100644
index 00000000..c5393266
--- /dev/null
+++ b/results/classifier/108/none/1856399
@@ -0,0 +1,61 @@
+other: 0.454
+device: 0.411
+semantic: 0.295
+graphic: 0.288
+KVM: 0.275
+performance: 0.243
+network: 0.237
+PID: 0.198
+permissions: 0.185
+socket: 0.185
+boot: 0.158
+files: 0.158
+vnc: 0.140
+debug: 0.124
+
+Intel GVT-g works in X11, segfaults in wayland
+
+Hello,
+
+I am using an up-to-date Arch Linux 64-bit with QEMU version 4.2.0, but the problem was also present in older versions. The problem occurs with Linux 5.4 and 4.19.
+The problem also occurs with Debian as the guest. I am running sway.
+If I provide -vga std, then QEMU works fine until I use the QEMU window to switch to the vfio-pci device. There are no problems under X11/xwayland at all.
+
+
+Commandline:
+qemu-system-x86_64 \
+    -enable-kvm \
+    -cpu host \
+    -smp 2 \
+    -m 8192 \
+    -display gtk,gl=on \
+    -device vfio-pci,sysfsdev=/sys/devices/pci0000:00/0000:00:02.0/[ID]/,x-igd-opregion=on,display=on \
+    -cdrom archlinux-2019.11.01-x86_64.iso \
+    -vga none
+
+I forgot to mention: the crash is a segfault.
+If there is more information needed, I am happy to provide it.
+
+The QEMU project is currently considering to move its bug tracking to
+another system. For this we need to know which bugs are still valid
+and which could be closed already. Thus we are setting older bugs to
+"Incomplete" now.
+
+If you still think this bug report here is valid, then please switch
+the state back to "New" within the next 60 days, otherwise this report
+will be marked as "Expired". Or please mark it as "Fix Released" if
+the problem has been solved with a newer version of QEMU already.
+
+Thank you and sorry for the inconvenience.
+
+
+Still present on qemu 5.2.0 and Linux 5.10
+
+
+This is an automated cleanup. This bug report has been moved to QEMU's
+new bug tracker on gitlab.com and thus gets marked as 'expired' now.
+Please continue with the discussion here:
+
+ https://gitlab.com/qemu-project/qemu/-/issues/189
+
+