Diffstat (limited to 'results/classifier/zero-shot/012/x86')
-rw-r--r--  results/classifier/zero-shot/012/x86/22219210                   61
-rw-r--r--  results/classifier/zero-shot/012/x86/43643137                  556
-rw-r--r--  results/classifier/zero-shot/012/x86/gitlab_semantic_addsubps   46
-rw-r--r--  results/classifier/zero-shot/012/x86/gitlab_semantic_adox       59
-rw-r--r--  results/classifier/zero-shot/012/x86/gitlab_semantic_bextr      48
-rw-r--r--  results/classifier/zero-shot/012/x86/gitlab_semantic_blsi       43
-rw-r--r--  results/classifier/zero-shot/012/x86/gitlab_semantic_blsmsk     50
-rw-r--r--  results/classifier/zero-shot/012/x86/gitlab_semantic_bzhi       61
8 files changed, 924 insertions, 0 deletions
diff --git a/results/classifier/zero-shot/012/x86/22219210 b/results/classifier/zero-shot/012/x86/22219210
new file mode 100644
index 000000000..2034aacbd
--- /dev/null
+++ b/results/classifier/zero-shot/012/x86/22219210
@@ -0,0 +1,61 @@
+x86: 0.860
+architecture: 0.818
+graphic: 0.701
+performance: 0.498
+device: 0.489
+mistranslation: 0.472
+semantic: 0.387
+other: 0.345
+network: 0.323
+TCG: 0.285
+kernel virtual machine: 0.259
+socket: 0.244
+debug: 0.225
+PID: 0.214
+vnc: 0.204
+files: 0.202
+register: 0.150
+permissions: 0.141
+risc-v: 0.123
+arm: 0.093
+assembly: 0.078
+boot: 0.070
+
+[BUG][CPU hot-plug] CPU hot-plug causes the QEMU process to core dump
+
+Hello. Recently, while developing CPU hot-plug support for the LoongArch
+architecture, I found a problem with QEMU CPU hot-plug on the x86
+architecture: repeatedly plugging and unplugging a CPU causes the QEMU
+process to core dump when TCG acceleration is used.
+
+
+The exact steps to reproduce are as follows:
+
+1. Start the virtual machine with the following command:
+
+qemu-system-x86_64 \
+-machine q35  \
+-cpu Broadwell-IBRS \
+-smp 1,maxcpus=4,sockets=4,cores=1,threads=1 \
+-m 4G \
+-drive file=~/anolis-8.8.qcow2  \
+-serial stdio   \
+-monitor telnet:localhost:4498,server,nowait
+
+
+2. Connect to the QEMU monitor via telnet and repeatedly plug and unplug the CPU:
+
+telnet 127.0.0.1 4498
+(qemu) device_add
+Broadwell-IBRS-x86_64-cpu,socket-id=1,core-id=0,thread-id=0,id=cpu1
+(qemu) device_del cpu1
+(qemu) device_add
+Broadwell-IBRS-x86_64-cpu,socket-id=1,core-id=0,thread-id=0,id=cpu1
+3. Observe that the QEMU process core dumps:
+
+# malloc(): unsorted double linked list corrupted
+Aborted (core dumped)
+
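To make the race easier to hit, the plug/unplug cycle can also be driven programmatically over QMP instead of typing into the human monitor. The following is an illustrative sketch only: it assumes the VM above is additionally started with -qmp tcp:localhost:4499,server,nowait, and the port number, one-second delays, and loop count are arbitrary choices, not part of the original report.

/* qmp_hotplug_loop.c -- repeatedly device_add/device_del a vCPU over QMP.
 * Build: gcc -O2 -o qmp_hotplug_loop qmp_hotplug_loop.c
 * Assumes QEMU was also started with: -qmp tcp:localhost:4499,server,nowait
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send one JSON command (or nothing) and print whatever QEMU replies. */
static void xfer(int fd, const char *json)
{
    char buf[4096];
    ssize_t n;

    if (json && write(fd, json, strlen(json)) < 0)
        return;
    n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
    }
}

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4499) };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    xfer(fd, NULL);                                   /* consume the QMP greeting */
    xfer(fd, "{\"execute\":\"qmp_capabilities\"}\n"); /* leave capabilities mode */

    for (int i = 0; i < 100; i++) {                   /* repeated plug/unplug */
        xfer(fd, "{\"execute\":\"device_add\",\"arguments\":"
                 "{\"driver\":\"Broadwell-IBRS-x86_64-cpu\",\"socket-id\":1,"
                 "\"core-id\":0,\"thread-id\":0,\"id\":\"cpu1\"}}\n");
        sleep(1);
        xfer(fd, "{\"execute\":\"device_del\",\"arguments\":{\"id\":\"cpu1\"}}\n");
        sleep(1);                                     /* let the unplug complete */
    }
    close(fd);
    return 0;
}

Run it while watching the QEMU process for the malloc corruption abort shown above.
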
diff --git a/results/classifier/zero-shot/012/x86/43643137 b/results/classifier/zero-shot/012/x86/43643137
new file mode 100644
index 000000000..34aa82585
--- /dev/null
+++ b/results/classifier/zero-shot/012/x86/43643137
@@ -0,0 +1,556 @@
+x86: 0.791
+performance: 0.784
+other: 0.781
+debug: 0.775
+register: 0.767
+risc-v: 0.765
+semantic: 0.764
+device: 0.760
+permissions: 0.755
+arm: 0.747
+PID: 0.742
+vnc: 0.742
+TCG: 0.737
+assembly: 0.727
+graphic: 0.721
+network: 0.709
+architecture: 0.699
+kernel virtual machine: 0.681
+socket: 0.674
+mistranslation: 0.665
+boot: 0.652
+files: 0.612
+
+[Qemu-devel] [BUG/RFC] INIT IPI lost when VM starts
+
+Hi,
+We encountered a problem: when a domain starts, SeaBIOS sometimes fails to
+online a vCPU.
+
+After investigation, we found that the cause is in kvm-kmod: the KVM_APIC_INIT
+bit in vcpu->arch.apic->pending_events is overwritten by QEMU, and thus an
+INIT IPI sent to the AP is lost. QEMU does this because libvirtd sends a
+‘query-cpus’ QMP command to QEMU on VM start.
+
+In QEMU, qmp_query_cpus-> cpu_synchronize_state-> kvm_cpu_synchronize_state->
+do_kvm_cpu_synchronize_state gets registers/vcpu_events from kvm-kmod and
+sets cpu->kvm_vcpu_dirty to true; the vCPU thread in QEMU will then call
+kvm_arch_put_registers because cpu->kvm_vcpu_dirty is true, and so
+pending_events is overwritten by QEMU.
+
+I think there is no need for QEMU to set cpu->kvm_vcpu_dirty to true after
+‘query-cpus’, and kvm-kmod should not clear KVM_APIC_INIT unconditionally.
+I am also not sure whether it is OK for QEMU to set cpu->kvm_vcpu_dirty in
+do_kvm_cpu_synchronize_state for every caller.
+
+What’s your opinion?
+
+Let me clarify the sequence. QEMU handles the ‘query-cpus’ QMP command, and
+vcpu 1 (and vcpu 0) get their registers from kvm-kmod (qmp_query_cpus->
+cpu_synchronize_state-> kvm_cpu_synchronize_state->
+do_kvm_cpu_synchronize_state-> kvm_arch_get_registers). Then vcpu 0 (the BSP)
+sends INIT-SIPI to vcpu 1 (the AP), so in kvm-kmod the KVM_APIC_INIT bit in
+vcpu 1’s pending_events is set.
+Then vcpu 1 continues running; the vcpu 1 thread in QEMU calls
+kvm_arch_put_registers-> kvm_put_vcpu_events, so the KVM_APIC_INIT bit in
+vcpu 1’s pending_events gets cleared, i.e., the INIT is lost.
+
+In kvm-kmod, besides pending_events, sipi_vector may also be overwritten, so
+I am not sure whether other fields/registers are in danger, i.e., fields that
+may be modified asynchronously with respect to the vCPU thread itself.
+
+BTW, a sleep like the following can reliably reproduce this problem, if the VM
+has more than 2 vCPUs and is started through libvirtd.
+
+diff --git a/target/i386/kvm.c b/target/i386/kvm.c
+index 55865db..5099290 100644
+--- a/target/i386/kvm.c
++++ b/target/i386/kvm.c
+@@ -2534,6 +2534,11 @@ static int kvm_put_vcpu_events(X86CPU *cpu, int level)
+             KVM_VCPUEVENT_VALID_NMI_PENDING | KVM_VCPUEVENT_VALID_SIPI_VECTOR;
+     }
+
++    if (CPU(cpu)->cpu_index == 1) {
++        fprintf(stderr, "vcpu 1 sleep!!!!\n");
++        sleep(10);
++    }
++
+     return kvm_vcpu_ioctl(CPU(cpu), KVM_SET_VCPU_EVENTS, &events);
+ }
+
+
+On 2017/3/20 22:21, Herongguang (Stephen) wrote:
+> Hi,
+> We encountered a problem that when a domain starts, seabios failed to
+> online a vCPU.
+>
+> After investigation, we found that the reason is in kvm-kmod, KVM_APIC_INIT
+> bit in vcpu->arch.apic->pending_events was overwritten by qemu, and thus an
+> INIT IPI sent to AP was lost. Qemu does this since libvirtd sends a
+> ‘query-cpus’ qmp command to qemu on VM start.
+>
+> In qemu, qmp_query_cpus-> cpu_synchronize_state-> kvm_cpu_synchronize_state->
+> do_kvm_cpu_synchronize_state, qemu gets registers/vcpu_events from kvm-kmod
+> and sets cpu->kvm_vcpu_dirty to true, and vcpu thread in qemu will call
+> kvm_arch_put_registers if cpu->kvm_vcpu_dirty is true, thus pending_events is
+> overwritten by qemu.
+>
+> I think there is no need for qemu to set cpu->kvm_vcpu_dirty to true after
+> ‘query-cpus’, and kvm-kmod should not clear KVM_APIC_INIT unconditionally.
+> And I am not sure whether it is OK for qemu to set cpu->kvm_vcpu_dirty in
+> do_kvm_cpu_synchronize_state in each caller.
+>
+> What’s your opinion?
+
+On 20/03/2017 15:21, Herongguang (Stephen) wrote:
+> We encountered a problem that when a domain starts, seabios failed to
+> online a vCPU.
+>
+> After investigation, we found that the reason is in kvm-kmod, KVM_APIC_INIT
+> bit in vcpu->arch.apic->pending_events was overwritten by qemu, and thus an
+> INIT IPI sent to AP was lost. Qemu does this since libvirtd sends a
+> ‘query-cpus’ qmp command to qemu on VM start.
+>
+> In qemu, qmp_query_cpus-> cpu_synchronize_state-> kvm_cpu_synchronize_state->
+> do_kvm_cpu_synchronize_state, qemu gets registers/vcpu_events from kvm-kmod
+> and sets cpu->kvm_vcpu_dirty to true, and vcpu thread in qemu will call
+> kvm_arch_put_registers if cpu->kvm_vcpu_dirty is true, thus pending_events is
+> overwritten by qemu.
+>
+> I think there is no need for qemu to set cpu->kvm_vcpu_dirty to true after
+> ‘query-cpus’, and kvm-kmod should not clear KVM_APIC_INIT unconditionally.
+> And I am not sure whether it is OK for qemu to set cpu->kvm_vcpu_dirty in
+> do_kvm_cpu_synchronize_state in each caller.
+>
+> What’s your opinion?
+
+Hi Rongguang,
+
+sorry for the late response.
+
+Where exactly is KVM_APIC_INIT dropped?  kvm_get_mp_state does clear the
+bit, but the result of the INIT is stored in mp_state.
+
+kvm_get_vcpu_events is called after kvm_get_mp_state; it retrieves
+KVM_APIC_INIT in events.smi.latched_init and kvm_set_vcpu_events passes
+it back.  Maybe it should ignore events.smi.latched_init if not in SMM,
+but I would like to understand the exact sequence of events.
+
+Thanks,
+
+paolo
+
+On 2017/4/6 0:16, Paolo Bonzini wrote:
+> On 20/03/2017 15:21, Herongguang (Stephen) wrote:
+>> We encountered a problem that when a domain starts, seabios failed to
+>> online a vCPU.
+>>
+>> After investigation, we found that the reason is in kvm-kmod, KVM_APIC_INIT
+>> bit in vcpu->arch.apic->pending_events was overwritten by qemu, and thus an
+>> INIT IPI sent to AP was lost. Qemu does this since libvirtd sends a
+>> ‘query-cpus’ qmp command to qemu on VM start.
+>>
+>> In qemu, qmp_query_cpus-> cpu_synchronize_state-> kvm_cpu_synchronize_state->
+>> do_kvm_cpu_synchronize_state, qemu gets registers/vcpu_events from kvm-kmod
+>> and sets cpu->kvm_vcpu_dirty to true, and vcpu thread in qemu will call
+>> kvm_arch_put_registers if cpu->kvm_vcpu_dirty is true, thus pending_events
+>> is overwritten by qemu.
+>>
+>> I think there is no need for qemu to set cpu->kvm_vcpu_dirty to true after
+>> ‘query-cpus’, and kvm-kmod should not clear KVM_APIC_INIT unconditionally.
+>> And I am not sure whether it is OK for qemu to set cpu->kvm_vcpu_dirty in
+>> do_kvm_cpu_synchronize_state in each caller.
+>>
+>> What’s your opinion?
+>
+> Hi Rongguang,
+>
+> sorry for the late response.
+>
+> Where exactly is KVM_APIC_INIT dropped?  kvm_get_mp_state does clear the
+> bit, but the result of the INIT is stored in mp_state.
+
+It's dropped in KVM_SET_VCPU_EVENTS, see below.
+
+> kvm_get_vcpu_events is called after kvm_get_mp_state; it retrieves
+> KVM_APIC_INIT in events.smi.latched_init and kvm_set_vcpu_events passes
+> it back.  Maybe it should ignore events.smi.latched_init if not in SMM,
+> but I would like to understand the exact sequence of events.
+
+time0:
+vcpu1:
+qmp_query_cpus-> cpu_synchronize_state-> kvm_cpu_synchronize_state->
+do_kvm_cpu_synchronize_state (which sets vcpu 1's cpu->kvm_vcpu_dirty to true)->
+kvm_arch_get_registers (the KVM_APIC_INIT bit in vcpu->arch.apic->pending_events
+is not yet set)
+
+time1:
+vcpu0:
+sends INIT-SIPI to all APs-> (in vcpu 0's context) __apic_accept_irq (the
+KVM_APIC_INIT bit in vcpu 1's arch.apic->pending_events is set)
+
+time2:
+vcpu1:
+kvm_cpu_exec-> (since cpu->kvm_vcpu_dirty is true) kvm_arch_put_registers->
+kvm_put_vcpu_events (the KVM_APIC_INIT bit in vcpu->arch.apic->pending_events
+is overwritten!)
+
+So it is a race between vcpu 1 getting/putting registers and KVM/other vCPUs
+changing vcpu 1's state in the meantime. I am worried that other fields may be
+overwritten in the same way; sipi_vector is one of them.
+
+also see:
+https://www.mail-archive.com/address@hidden/msg438675.html
+
+> Thanks,
+>
+> paolo
+
+Hi Paolo,
+
+What's your opinion about this patch? We found it just before finishing patches 
+for the past two days.
+
+
+Thanks,
+-Gonglei
+
+
+> -----Original Message-----
+> From: address@hidden [mailto:address@hidden] On Behalf Of Herongguang (Stephen)
+> Sent: Thursday, April 06, 2017 9:47 AM
+> To: Paolo Bonzini; address@hidden; address@hidden; address@hidden;
+> address@hidden; address@hidden; wangxin (U); Huangweidong (C)
+> Subject: Re: [BUG/RFC] INIT IPI lost when VM starts
+>
+> On 2017/4/6 0:16, Paolo Bonzini wrote:
+> > On 20/03/2017 15:21, Herongguang (Stephen) wrote:
+> >> We encountered a problem that when a domain starts, seabios failed to
+> >> online a vCPU.
+> >>
+> >> After investigation, we found that the reason is in kvm-kmod,
+> >> KVM_APIC_INIT bit in vcpu->arch.apic->pending_events was overwritten by
+> >> qemu, and thus an INIT IPI sent to AP was lost. Qemu does this since
+> >> libvirtd sends a ‘query-cpus’ qmp command to qemu on VM start.
+> >>
+> >> In qemu, qmp_query_cpus-> cpu_synchronize_state-> kvm_cpu_synchronize_state->
+> >> do_kvm_cpu_synchronize_state, qemu gets registers/vcpu_events from kvm-kmod
+> >> and sets cpu->kvm_vcpu_dirty to true, and vcpu thread in qemu will call
+> >> kvm_arch_put_registers if cpu->kvm_vcpu_dirty is true, thus pending_events
+> >> is overwritten by qemu.
+> >>
+> >> I think there is no need for qemu to set cpu->kvm_vcpu_dirty to true after
+> >> ‘query-cpus’, and kvm-kmod should not clear KVM_APIC_INIT unconditionally.
+> >> And I am not sure whether it is OK for qemu to set cpu->kvm_vcpu_dirty in
+> >> do_kvm_cpu_synchronize_state in each caller.
+> >>
+> >> What’s your opinion?
+> >
+> > Hi Rongguang,
+> >
+> > sorry for the late response.
+> >
+> > Where exactly is KVM_APIC_INIT dropped?  kvm_get_mp_state does clear the
+> > bit, but the result of the INIT is stored in mp_state.
+>
+> It's dropped in KVM_SET_VCPU_EVENTS, see below.
+>
+> > kvm_get_vcpu_events is called after kvm_get_mp_state; it retrieves
+> > KVM_APIC_INIT in events.smi.latched_init and kvm_set_vcpu_events passes
+> > it back.  Maybe it should ignore events.smi.latched_init if not in SMM,
+> > but I would like to understand the exact sequence of events.
+>
+> time0:
+> vcpu1:
+> qmp_query_cpus-> cpu_synchronize_state-> kvm_cpu_synchronize_state->
+> do_kvm_cpu_synchronize_state(and set vcpu1's cpu->kvm_vcpu_dirty to
+> true)-> kvm_arch_get_registers(KVM_APIC_INIT bit in
+> vcpu->arch.apic->pending_events was not set)
+>
+> time1:
+> vcpu0:
+> send INIT-SIPI to all AP->(in vcpu 0's context)__apic_accept_irq
+> (KVM_APIC_INIT bit in vcpu1's arch.apic->pending_events is set)
+>
+> time2:
+> vcpu1:
+> kvm_cpu_exec->(if cpu->kvm_vcpu_dirty is true)
+> kvm_arch_put_registers->kvm_put_vcpu_events(overwritten
+> KVM_APIC_INIT bit in vcpu->arch.apic->pending_events!)
+>
+> So it's a race between vcpu1 get/put registers with kvm/other vcpus changing
+> vcpu1's status/structure fields in the mean time, I am in worry of if there
+> are other fields may be overwritten, sipi_vector is one.
+>
+> also see:
+> https://www.mail-archive.com/address@hidden/msg438675.html
+>
+> > Thanks,
+> >
+> > paolo
+
+2017-11-20 06:57+0000, Gonglei (Arei):
+> Hi Paolo,
+>
+> What's your opinion about this patch? We found it just before finishing
+> patches for the past two days.
+
+I think your case was fixed by f4ef19108608 ("KVM: X86: Fix loss of
+pending INIT due to race"), but that patch didn't fix it perfectly, so
+maybe you're hitting a similar case that happens in SMM ...
+
+> > -----Original Message-----
+> > From: address@hidden [mailto:address@hidden] On Behalf Of Herongguang (Stephen)
+> >
+> > On 2017/4/6 0:16, Paolo Bonzini wrote:
+> >
+> > > Hi Rongguang,
+> > >
+> > > sorry for the late response.
+> > >
+> > > Where exactly is KVM_APIC_INIT dropped?  kvm_get_mp_state does clear the
+> > > bit, but the result of the INIT is stored in mp_state.
+> >
+> > It's dropped in KVM_SET_VCPU_EVENTS, see below.
+> >
+> > > kvm_get_vcpu_events is called after kvm_get_mp_state; it retrieves
+> > > KVM_APIC_INIT in events.smi.latched_init and kvm_set_vcpu_events passes
+> > > it back.  Maybe it should ignore events.smi.latched_init if not in SMM,
+> > > but I would like to understand the exact sequence of events.
+> >
+> > time0:
+> > vcpu1:
+> > qmp_query_cpus-> cpu_synchronize_state-> kvm_cpu_synchronize_state->
+> > do_kvm_cpu_synchronize_state(and set vcpu1's cpu->kvm_vcpu_dirty to
+> > true)-> kvm_arch_get_registers(KVM_APIC_INIT bit in
+> > vcpu->arch.apic->pending_events was not set)
+> >
+> > time1:
+> > vcpu0:
+> > send INIT-SIPI to all AP->(in vcpu 0's context)__apic_accept_irq
+> > (KVM_APIC_INIT bit in vcpu1's arch.apic->pending_events is set)
+> >
+> > time2:
+> > vcpu1:
+> > kvm_cpu_exec->(if cpu->kvm_vcpu_dirty is true)
+> > kvm_arch_put_registers->kvm_put_vcpu_events(overwritten
+> > KVM_APIC_INIT bit in vcpu->arch.apic->pending_events!)
+> >
+> > So it's a race between vcpu1 get/put registers with kvm/other vcpus changing
+> > vcpu1's status/structure fields in the mean time, I am in worry of if there
+> > are other fields may be overwritten, sipi_vector is one.
+
+Fields that can be asynchronously written by other VCPUs (like SIPI,
+NMI) must not be SET if other VCPUs were not paused since the last GET.
+(Looking at the interface, we can currently lose pending SMI.)
+
+INIT is one of the restricted fields, but the API unconditionally
+couples SMM with latched INIT, which means that we can lose an INIT if
+the VCPU is in SMM mode -- do you see SMM in kvm_vcpu_events?
+
+Thanks.
+
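The time0/time1/time2 sequence above is a classic lost-update race: one thread snapshots shared state, another thread asynchronously sets a bit in that state, and the first thread then writes its stale snapshot back. The sketch below is not KVM code; it is a plain-pthreads illustration of the same pattern with invented names, where the sleep plays the role of the sleep(10) injected into kvm_put_vcpu_events earlier in the thread.

/* lost_update.c -- illustration of the GET/modify/PUT race described above.
 * Build: gcc -O2 -pthread lost_update.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define APIC_INIT_BIT 0x1u

/* Stands in for vcpu->arch.apic->pending_events. */
static volatile unsigned int pending_events;

/* "vcpu 1": GET the state, stall, then PUT the (now stale) copy back. */
static void *vcpu1_sync_state(void *arg)
{
    (void)arg;
    unsigned int snapshot = pending_events;  /* kvm_arch_get_registers()        */
    sleep(2);                                /* window opened by query-cpus     */
    pending_events = snapshot;               /* kvm_arch_put_registers()        */
    return NULL;                             /* asynchronously set INIT is gone */
}

/* "vcpu 0": deliver INIT to vcpu 1 while vcpu 1 still holds the stale snapshot. */
static void *vcpu0_send_init(void *arg)
{
    (void)arg;
    sleep(1);
    __sync_fetch_and_or(&pending_events, APIC_INIT_BIT);  /* __apic_accept_irq() */
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;

    pthread_create(&t1, NULL, vcpu1_sync_state, NULL);
    pthread_create(&t0, NULL, vcpu0_send_init, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);

    printf("pending_events = %#x (INIT %s)\n", pending_events,
           (pending_events & APIC_INIT_BIT) ? "still pending" : "LOST");
    return 0;
}

The direction discussed in the thread is the same: either do not write back fields that other vCPUs may have set in the meantime, or only honor them under the conditions (such as SMM for latched_init) in which userspace legitimately owns them.
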
diff --git a/results/classifier/zero-shot/012/x86/gitlab_semantic_addsubps b/results/classifier/zero-shot/012/x86/gitlab_semantic_addsubps
new file mode 100644
index 000000000..2bd8fd460
--- /dev/null
+++ b/results/classifier/zero-shot/012/x86/gitlab_semantic_addsubps
@@ -0,0 +1,46 @@
+x86: 0.995
+semantic: 0.974
+architecture: 0.868
+device: 0.758
+other: 0.732
+graphic: 0.700
+debug: 0.650
+performance: 0.552
+vnc: 0.544
+assembly: 0.531
+boot: 0.465
+permissions: 0.443
+socket: 0.426
+network: 0.393
+PID: 0.358
+register: 0.341
+mistranslation: 0.299
+risc-v: 0.293
+files: 0.280
+TCG: 0.255
+arm: 0.252
+kernel virtual machine: 0.225
+
+x86 SSE/SSE2/SSE3 instruction semantic bugs with NaN
+
+Description of problem
+The result of SSE/SSE2/SSE3 instructions with NaN operands differs from the real CPU. The Intel manual, Volume 1 Appendix D.4.2.2, defines the behavior of these instructions for NaN operands, but QEMU does not appear to implement those semantics exactly, because the resulting byte pattern is different.
+
+Steps to reproduce
+
+Compile this code
+
+void main() {
+    asm("mov rax, 0x000000007fffffff; push rax; mov rax, 0x00000000ffffffff; push rax; movdqu XMM1, [rsp];");
+    asm("mov rax, 0x2e711de7aa46af1a; push rax; mov rax, 0x7fffffff7fffffff; push rax; movdqu XMM2, [rsp];");
+    asm("addsubps xmm1, xmm2");
+}
+
+Execute and compare the result with the CPU. This problem happens with other SSE/SSE2/SSE3 instructions specified in the manual, Volume 1 Appendix D.4.2.2.
+
+CPU xmm1[3] = 0xffffffff
+
+QEMU xmm1[3] = 0x7fffffff
+
+Additional information
+This bug is discovered by research conducted by KAIST SoftSec.
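For a self-contained comparison, the same bit patterns can be fed to ADDSUBPS through the SSE3 intrinsic _mm_addsub_ps and the raw lanes printed, so the output of a native run and of a QEMU TCG run can be diffed directly. This is a sketch (build with gcc -msse3); lanes are numbered low-to-high here, which may not match the element indexing used in the report.

/* addsubps_nan.c -- print the raw ADDSUBPS result lanes for the NaN inputs above.
 * Build: gcc -msse3 -O0 addsubps_nan.c
 */
#include <pmmintrin.h>   /* _mm_addsub_ps (SSE3) */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Same 128-bit values as the reproducer; lane 0 is the lowest dword. */
    const uint32_t a_bits[4] = { 0xffffffff, 0x00000000, 0x7fffffff, 0x00000000 };
    const uint32_t b_bits[4] = { 0x7fffffff, 0x7fffffff, 0xaa46af1a, 0x2e711de7 };
    uint32_t out[4];

    __m128 a = _mm_castsi128_ps(_mm_loadu_si128((const __m128i *)a_bits));
    __m128 b = _mm_castsi128_ps(_mm_loadu_si128((const __m128i *)b_bits));
    __m128 r = _mm_addsub_ps(a, b);  /* lanes 0,2: a - b; lanes 1,3: a + b */

    _mm_storeu_si128((__m128i *)out, _mm_castps_si128(r));
    for (int i = 0; i < 4; i++)
        printf("lane %d: 0x%08x\n", i, out[i]);
    return 0;
}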
diff --git a/results/classifier/zero-shot/012/x86/gitlab_semantic_adox b/results/classifier/zero-shot/012/x86/gitlab_semantic_adox
new file mode 100644
index 000000000..02e68ea7d
--- /dev/null
+++ b/results/classifier/zero-shot/012/x86/gitlab_semantic_adox
@@ -0,0 +1,59 @@
+x86: 0.996
+semantic: 0.990
+assembly: 0.913
+architecture: 0.909
+graphic: 0.782
+device: 0.776
+debug: 0.706
+vnc: 0.663
+boot: 0.599
+socket: 0.556
+arm: 0.503
+risc-v: 0.501
+permissions: 0.500
+register: 0.494
+performance: 0.460
+mistranslation: 0.452
+network: 0.426
+files: 0.374
+PID: 0.343
+other: 0.286
+kernel virtual machine: 0.253
+TCG: 0.244
+
+x86 ADOX and ADCX semantic bug
+Description of problem
+The results of the ADOX and ADCX instructions differ from the real CPU: the value of one of the EFLAGS bits is different.
+
+Steps to reproduce
+
+Compile this code
+
+
+void main() {
+    asm("push 512; popfq;");
+    asm("mov rax, 0xffffffff84fdbf24");
+    asm("mov rbx, 0xb197d26043bec15d");
+    asm("adox eax, ebx");
+}
+
+
+
+Execute and compare the result with the CPU. This problem happens with ADCX, too (with CF).
+
+CPU
+
+OF = 0
+
+
+QEMU
+
+OF = 1
+
+Additional information
+This bug is discovered by research conducted by KAIST SoftSec.
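A harness along these lines pins OF to a known value, executes ADOX, and reads OF back with SETO, so the flag can be compared between hardware and QEMU. It is an illustration only: it clears OF with TEST rather than the POPFQ used in the report, and it assumes an ADX-capable CPU model (for example -cpu max under TCG).

/* adox_of.c -- run ADOX on the report's inputs and print the resulting OF.
 * Build: gcc -O0 adox_of.c
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t a = 0x84fdbf24, b = 0x43bec15d;
    uint8_t of;

    asm volatile("test %[a], %[a]\n\t"  /* TEST clears OF (and CF)        */
                 "adox %[b], %[a]\n\t"  /* a = a + b + OF; OF = carry out */
                 "seto %[of]"
                 : [a] "+r"(a), [of] "=r"(of)
                 : [b] "r"(b)
                 : "cc");

    /* The report expects OF = 0 here on real hardware. */
    printf("eax = 0x%08x, OF = %u\n", a, of);
    return 0;
}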
diff --git a/results/classifier/zero-shot/012/x86/gitlab_semantic_bextr b/results/classifier/zero-shot/012/x86/gitlab_semantic_bextr
new file mode 100644
index 000000000..d7d64b9ec
--- /dev/null
+++ b/results/classifier/zero-shot/012/x86/gitlab_semantic_bextr
@@ -0,0 +1,48 @@
+x86: 0.997
+semantic: 0.993
+architecture: 0.859
+register: 0.846
+assembly: 0.800
+graphic: 0.790
+device: 0.717
+debug: 0.603
+boot: 0.516
+vnc: 0.471
+socket: 0.397
+risc-v: 0.396
+mistranslation: 0.337
+arm: 0.336
+PID: 0.234
+performance: 0.233
+network: 0.219
+permissions: 0.188
+kernel virtual machine: 0.127
+TCG: 0.109
+other: 0.099
+files: 0.099
+
+x86 BEXTR semantic bug
+Description of problem
+The result of the BEXTR instruction differs from the real CPU: the value of the destination register is different. I think QEMU does not respect the operand-size limit.
+
+Steps to reproduce
+
+Compile this code
+
+void main() {
+    asm("mov rax, 0x17b3693f77fb6e9");
+    asm("mov rbx, 0x8f635a775ad3b9b4");
+    asm("mov rcx, 0xb717b75da9983018");
+    asm("bextr eax, ebx, ecx");
+}
+
+Execute and compare the result with the CPU.
+
+CPU
+RAX = 0x5a
+
+QEMU
+RAX = 0x635a775a
+
+Additional information
+This bug is discovered by research conducted by KAIST SoftSec.
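As a cross-check, the 32-bit BEXTR semantics (start and length taken from the low bytes of the control operand, extraction performed on the 32-bit source only) can be written as a plain-C reference model and compared with what QEMU returns for the reproducer. A sketch:

/* bextr_ref.c -- reference model for the 32-bit BEXTR in the reproducer above. */
#include <stdint.h>
#include <stdio.h>

/* BEXTR r32, r/m32, r32: start = ctrl[7:0], len = ctrl[15:8];
 * bits are extracted from the 32-bit source operand only. */
static uint32_t bextr32(uint32_t src, uint32_t ctrl)
{
    uint32_t start = ctrl & 0xff;
    uint32_t len   = (ctrl >> 8) & 0xff;

    if (start >= 32)
        return 0;
    uint64_t field = (uint64_t)src >> start;  /* bits above 31 are already gone */
    if (len < 32)
        field &= (1u << len) - 1;             /* keep only len bits */
    return (uint32_t)field;
}

int main(void)
{
    uint32_t src  = 0x5ad3b9b4;  /* low 32 bits of RBX in the reproducer */
    uint32_t ctrl = 0xa9983018;  /* low 32 bits of RCX: start 0x18, len 0x30 */

    printf("expected eax = 0x%x\n", bextr32(src, ctrl));  /* prints 0x5a */
    return 0;
}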
diff --git a/results/classifier/zero-shot/012/x86/gitlab_semantic_blsi b/results/classifier/zero-shot/012/x86/gitlab_semantic_blsi
new file mode 100644
index 000000000..9ccd61616
--- /dev/null
+++ b/results/classifier/zero-shot/012/x86/gitlab_semantic_blsi
@@ -0,0 +1,43 @@
+x86: 0.997
+semantic: 0.983
+architecture: 0.887
+graphic: 0.873
+assembly: 0.852
+device: 0.790
+socket: 0.764
+vnc: 0.756
+arm: 0.685
+boot: 0.678
+network: 0.672
+performance: 0.656
+risc-v: 0.651
+files: 0.633
+permissions: 0.619
+other: 0.609
+mistranslation: 0.606
+TCG: 0.538
+debug: 0.525
+PID: 0.488
+register: 0.488
+kernel virtual machine: 0.388
+
+x86 BLSI and BLSR semantic bug
+Description of problem
+The results of the BLSI and BLSR instructions differ from the real CPU: the value of CF is different.
+
+Steps to reproduce
+
+Compile this code
+
+
+void main() {
+    asm("blsi rax, rbx");
+}
+
+
+
+Execute and compare the result with the CPU. The value of CF is exactly the opposite. This problem happens with BLSR, too.
+
+
+Additional information
+This bug is discovered by research conducted by KAIST SoftSec.
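Because the reproducer above leaves RBX uninitialized, a deterministic variant is easier to diff between a native run and a QEMU run: pin a source value, run BLSI and BLSR, and read CF back with SETC. A sketch, assuming a BMI1-capable CPU model (for example -cpu max under TCG); the non-zero source value is arbitrary.

/* blsi_cf.c -- print the BLSI/BLSR results and the CF they produce.
 * Build: gcc -O0 blsi_cf.c
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static void run(uint64_t src)
{
    uint64_t blsi_res, blsr_res;
    uint8_t blsi_cf, blsr_cf;

    asm volatile("blsi %[s], %[d]\n\t"
                 "setc %[c]"
                 : [d] "=r"(blsi_res), [c] "=r"(blsi_cf)
                 : [s] "r"(src) : "cc");
    asm volatile("blsr %[s], %[d]\n\t"
                 "setc %[c]"
                 : [d] "=r"(blsr_res), [c] "=r"(blsr_cf)
                 : [s] "r"(src) : "cc");

    printf("src=%#018" PRIx64 "  blsi=%#018" PRIx64 " CF=%u"
           "  blsr=%#018" PRIx64 " CF=%u\n",
           src, blsi_res, blsi_cf, blsr_res, blsr_cf);
}

int main(void)
{
    run(0);                       /* zero source: the interesting CF edge case */
    run(0xb197d26043bec15dULL);   /* arbitrary non-zero source */
    return 0;
}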
diff --git a/results/classifier/zero-shot/012/x86/gitlab_semantic_blsmsk b/results/classifier/zero-shot/012/x86/gitlab_semantic_blsmsk
new file mode 100644
index 000000000..19ed1e10e
--- /dev/null
+++ b/results/classifier/zero-shot/012/x86/gitlab_semantic_blsmsk
@@ -0,0 +1,50 @@
+x86: 0.997
+semantic: 0.987
+architecture: 0.935
+assembly: 0.879
+device: 0.743
+graphic: 0.735
+vnc: 0.612
+socket: 0.607
+mistranslation: 0.603
+boot: 0.585
+register: 0.568
+arm: 0.512
+risc-v: 0.501
+debug: 0.428
+permissions: 0.370
+network: 0.366
+files: 0.340
+PID: 0.289
+other: 0.269
+performance: 0.268
+TCG: 0.254
+kernel virtual machine: 0.130
+
+x86 BLSMSK semantic bug
+Description of problem
+The result of the BLSMSK instruction differs from the real CPU: the value of CF is different.
+
+Steps to reproduce
+
+Compile this code
+
+void main() {
+    asm("mov rax, 0x65b2e276ad27c67");
+    asm("mov rbx, 0x62f34955226b2b5d");
+    asm("blsmsk eax, ebx");
+}
+
+Execute and compare the result with the CPU.
+
+CPU
+
+CF = 0
+
+
+QEMU
+
+CF = 1
+
+Additional information
+This bug is discovered by research conducted by KAIST SoftSec.
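A harness for the same inputs that reads CF back with SETC lets the flag be diffed directly between a native run and a QEMU run. A sketch, assuming a BMI1-capable CPU model (for example -cpu max):

/* blsmsk_cf.c -- run BLSMSK on the reproducer's source and print CF.
 * Build: gcc -O0 blsmsk_cf.c
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t src = 0x226b2b5d;  /* low 32 bits of RBX in the reproducer */
    uint32_t dst;
    uint8_t cf;

    asm volatile("blsmsk %[s], %[d]\n\t"
                 "setc %[c]"
                 : [d] "=r"(dst), [c] "=r"(cf)
                 : [s] "r"(src) : "cc");

    /* The report expects CF = 0 here on real hardware. */
    printf("blsmsk(0x%08x) = 0x%08x, CF = %u\n", src, dst, cf);
    return 0;
}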
diff --git a/results/classifier/zero-shot/012/x86/gitlab_semantic_bzhi b/results/classifier/zero-shot/012/x86/gitlab_semantic_bzhi
new file mode 100644
index 000000000..1b8d0dc4e
--- /dev/null
+++ b/results/classifier/zero-shot/012/x86/gitlab_semantic_bzhi
@@ -0,0 +1,61 @@
+x86: 0.983
+semantic: 0.920
+graphic: 0.652
+device: 0.589
+architecture: 0.518
+register: 0.406
+assembly: 0.406
+arm: 0.318
+vnc: 0.287
+debug: 0.256
+boot: 0.220
+risc-v: 0.211
+network: 0.203
+socket: 0.198
+mistranslation: 0.171
+performance: 0.165
+PID: 0.130
+permissions: 0.069
+other: 0.064
+kernel virtual machine: 0.064
+TCG: 0.053
+files: 0.051
+
+x86 BZHI semantic bug
+Description of problem
+The result of the BZHI instruction differs from the real CPU: the values of the destination register and of SF in EFLAGS are different.
+
+Steps to reproduce
+
+Compile this code
+
+
+void main() {
+    asm("mov rax, 0xb1aa9da2fe33fe3");
+    asm("mov rbx, 0x80000000ffffffff");
+    asm("mov rcx, 0xf3fce8829b99a5c6");
+    asm("bzhi rax, rbx, rcx");
+}
+
+
+
+Execute and compare the result with the CPU.
+
+CPU
+
+RAX = 0x80000000ffffffff
+SF = 1
+
+
+QEMU
+
+RAX = 0xffffffff
+SF = 0
+
+Additional information
+This bug is discovered by research conducted by KAIST SoftSec.
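A harness for the same inputs that also reads SF back with SETS makes both differences visible in one run. The index byte taken from CL is 0xc6, which is not below the 64-bit operand size, so the hardware result in the report leaves the source value unchanged. A sketch, assuming a BMI2-capable CPU model (for example -cpu max):

/* bzhi_check.c -- run BZHI on the reproducer's inputs and print RAX and SF.
 * Build: gcc -O0 bzhi_check.c
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t src = 0x80000000ffffffffULL;  /* RBX in the reproducer */
    uint64_t n   = 0xf3fce8829b99a5c6ULL;  /* RCX; only bits [7:0] = 0xc6 are used */
    uint64_t dst;
    uint8_t sf;

    asm volatile("bzhi %[n], %[s], %[d]\n\t"
                 "sets %[f]"
                 : [d] "=r"(dst), [f] "=r"(sf)
                 : [s] "r"(src), [n] "r"(n) : "cc");

    /* The report expects RAX = 0x80000000ffffffff and SF = 1 on real hardware. */
    printf("bzhi = %#" PRIx64 ", SF = %u\n", dst, sf);
    return 0;
}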