2025-07-14  i386/cpu: Fix overflow of cache topology fields in CPUID.04H  (Qian Wen; 1 file, -5/+11)
According to the SDM, CPUID.0x4:EAX[31:26] indicates the maximum number of addressable IDs for processor cores in the physical package. If we launch a VM with more than 64 cores, this 6-bit field overflows and the wrong core_id number is reported. Since the HW reports 0x3f when an Intel processor has over 64 cores, limit the max value written to EAX[31:26] to 63, so the maximum num_cores is 64.

For EAX[25:14], although Q35 currently supports up to 4096 CPUs, a specific topology can extend the width of the APIC ID beyond 12 bits. For example, using `-smp threads=33,cores=9,modules=9` results in a die-level offset of 6 + 4 + 4 = 14 bits, which can also cause overflow. Check and honor the maximum value for EAX[25:14] as well.

In addition, apply the same checks and fixes to the host-cache-info case.

Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Qian Wen <qian.wen@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250714080859.1960104-7-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
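For illustration, a minimal standalone sketch of the clamping described above (not QEMU's actual code; the helper names are invented):

    /* CPUID.04H EAX encodes "maximum addressable IDs minus 1" style fields:
     * EAX[31:26] is 6 bits wide (cores per package), EAX[25:14] is 12 bits
     * wide (logical processors sharing the cache). Clamp before encoding so
     * large topologies saturate instead of overflowing into other fields. */
    #include <stdint.h>

    static uint32_t clamp_addressable_ids(uint32_t count, unsigned field_width)
    {
        uint32_t max_ids = (1u << field_width) - 1;   /* e.g. 63 for 6 bits */
        uint32_t ids = count ? count - 1 : 0;         /* field holds count - 1 */
        return ids > max_ids ? max_ids : ids;
    }

    static uint32_t encode_cpuid4_eax_topo(uint32_t eax, uint32_t num_cores,
                                           uint32_t threads_sharing_cache)
    {
        eax |= clamp_addressable_ids(num_cores, 6) << 26;
        eax |= clamp_addressable_ids(threads_sharing_cache, 12) << 14;
        return eax;
    }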
2025-07-14  i386/cpu: Fix cpu number overflow in CPUID.01H.EBX[23:16]  (Qian Wen; 1 file, -2/+7)
The legacy topology enumerated by CPUID.1.EBX[23:16] is defined in SDM Vol2:

    Bits 23-16: Maximum number of addressable IDs for logical processors in this physical package.

When threads_per_socket > 255, the value 1) overwrites bits [31:24], which hold the APIC ID, and 2) gets truncated in bits [23:16]. Specifically, if launching the VM with -smp 256, the value written to EBX[23:16] is 0 because of the data overflow.

If the guest only supports the legacy topology, without V2 Extended Topology enumerated by CPUID.0x1f or Extended Topology enumerated by CPUID.0x0b to support over 255 CPUs, cpu_smt_allowed() in the kernel returns false and the APs (application processors) fail to be brought up. Then only CPU 0 is online, and the others are offline. For example, launch the VM via:

    qemu-system-x86_64 -M q35,accel=kvm,kernel-irqchip=split \
        -cpu qemu64,cpuid-0xb=off -smp 256 -m 32G \
        -drive file=guest.img,if=none,id=virtio-disk0,format=raw \
        -device virtio-blk-pci,drive=virtio-disk0,bootindex=1 --nographic

The guest shows:

    CPU(s):                256
    On-line CPU(s) list:   0
    Off-line CPU(s) list:  1-255

To avoid this issue caused by the overflow, limit the max value written to EBX[23:16] to 255 as the HW does.

Cc: qemu-stable@nongnu.org
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Qian Wen <qian.wen@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250714080859.1960104-6-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
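A hedged sketch of the cap described above (standalone; the function name is illustrative):

    /* CPUID.01H EBX[23:16] holds the logical processor count for the package.
     * Cap it at 255 as hardware does, and mask only bits 23:16 so the value
     * can never spill into the APIC ID byte in EBX[31:24]. */
    #include <stdint.h>

    static uint32_t encode_cpuid1_ebx_lcpu_count(uint32_t ebx,
                                                 uint32_t threads_per_socket)
    {
        uint32_t lcpu_count = threads_per_socket > 255 ? 255 : threads_per_socket;

        ebx &= ~0x00ff0000u;        /* clear bits 23:16 only */
        ebx |= lcpu_count << 16;
        return ebx;
    }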
2025-07-14  i386/cpu: Fix number of addressable IDs field for CPUID.01H.EBX[23:16]  (Chuang Xu; 1 file, -1/+11)
When QEMU is started with:

    -cpu host,migratable=on,host-cache-info=on,l3-cache=off \
    -smp 180,sockets=2,dies=1,cores=45,threads=2

On an Intel platform, CPUID.01H.EBX[23:16] is defined as the "max number of addressable IDs for logical processors in the physical package". When executing "cpuid -1 -l 1 -r" in the guest, we obtain a value of 90 for CPUID.01H.EBX[23:16], whereas the expected value is 128. Additionally, executing "cpuid -1 -l 4 -r" in the guest yields a value of 63 for CPUID.04H.EAX[31:26], which matches the expected result.

As (1 + CPUID.04H.EAX[31:26]) is rounded up to the nearest power-of-2 integer, it's necessary to round up CPUID.01H.EBX[23:16] to the nearest power-of-2 integer too. Otherwise there would be unexpected results in guests with older kernels. For example, when QEMU is started with the CLI above and xtopology is disabled, guest kernel 5.15.120 uses CPUID.01H.EBX[23:16] / (1 + CPUID.04H.EAX[31:26]) to calculate threads-per-core in detect_ht(). The guest then gets "90 / (1 + 63) = 1" as the result, even though threads-per-core should actually be 2.

On an AMD platform, CPUID.01H.EBX[23:16] is defined as the "Logical processor count", and the current result meets our expectation.

So round up CPUID.01H.EBX[23:16] to the nearest power-of-2 integer only for the Intel platform to solve the unexpected result. Use the "x-vendor-cpuid-only-v2" compat option to fix this issue.

Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Guixiong Wei <weiguixiong@bytedance.com>
Signed-off-by: Yipeng Yin <yinyipeng@bytedance.com>
Signed-off-by: Chuang Xu <xuchuangxclwt@bytedance.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250714080859.1960104-5-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
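A small sketch of the power-of-2 rounding this commit relies on (illustrative, not the QEMU helper):

    /* Round v up to the next power of two (v must be >= 1), so that
     * EBX[23:16] / (1 + CPUID.04H EAX[31:26]) gives the right threads-per-core
     * in guests using the legacy detect_ht() path: 90 -> 128, 128 / 64 = 2. */
    #include <stdint.h>

    static uint32_t pow2ceil_u32(uint32_t v)
    {
        v -= 1;
        v |= v >> 1;
        v |= v >> 2;
        v |= v >> 4;
        v |= v >> 8;
        v |= v >> 16;
        return v + 1;
    }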
2025-07-14  i386/cpu: Reorder CPUID leaves in cpu_x86_cpuid()  (Zhao Liu; 1 file, -30/+30)
Sort the CPUID leaves strictly by index to facilitate checking and changing. Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Tao Su <tao1.su@linux.intel.com> Link: https://lore.kernel.org/r/20250627035129.2755537-5-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-14  tests/vm: bump FreeBSD image to 14.3  (Paolo Bonzini; 1 file, -2/+2)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-14  tests/functional: test_x86_cpu_model_versions: remove dead tests  (Paolo Bonzini; 1 file, -98/+12)
Tests that require machines older than 4.2 are now unconditionally skipped. Remove them if they test legacy behavior, or use the latest machine if they test current behavior. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-14  i386/cpu: Mark CPUID 0x80000008 ECX bits[0:7] & [12:15] as reserved for Intel/Zhaoxin  (Zhao Liu; 1 file, -0/+11)
Per SDM,

    80000008H  EAX  Linear/Physical Address size.
                    Bits 07-00: #Physical Address Bits*.
                    Bits 15-08: #Linear Address Bits.
                    Bits 31-16: Reserved = 0.
               EBX  Bits 08-00: Reserved = 0.
                    Bit 09: WBNOINVD is available if 1.
                    Bits 31-10: Reserved = 0.
               ECX  Reserved = 0.
               EDX  Reserved = 0.

ECX/EDX in the CPUID 0x80000008 leaf are reserved. Currently, in QEMU, only ECX bits[0:7] and ECX bits[12:15] are encoded, and both are emulated. Considering that Intel and Zhaoxin are already using the 0x1f leaf to describe CPU topology, which includes similar information, Intel and Zhaoxin will not implement ECX bits[0:7] and bits[12:15] of 0x80000008. Therefore, mark these two fields as reserved and clear them for Intel and Zhaoxin guests.

Reviewed-by: Tao Su <tao1.su@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250714080859.1960104-3-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
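A minimal sketch of the vendor-conditional clearing described above (illustrative names, not the actual QEMU code):

    /* For Intel/Zhaoxin guests, leave CPUID.80000008H ECX reserved-as-zero by
     * clearing the AMD-defined fields: bits 7:0 (core count - 1) and
     * bits 15:12 (APIC ID size). */
    #include <stdint.h>
    #include <stdbool.h>

    static uint32_t fixup_cpuid_80000008_ecx(uint32_t ecx, bool vendor_is_amd)
    {
        if (!vendor_is_amd) {
            ecx &= ~0x0000f0ffu;    /* bits 15:12 and 7:0 */
        }
        return ecx;
    }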
2025-07-14  i386/cpu: Mark CPUID 0x80000007[EBX] as reserved for Intel  (Zhao Liu; 1 file, -1/+5)
Per SDM,

    80000007H  EAX  Reserved = 0.
               EBX  Reserved = 0.
               ECX  Reserved = 0.
               EDX  Bits 07-00: Reserved = 0.
                    Bit 08: Invariant TSC available if 1.
                    Bits 31-09: Reserved = 0.

EAX/EBX/ECX in the CPUID 0x80000007 leaf are reserved for Intel. At present, EAX is reserved for AMD, too, and AMD hasn't used ECX in QEMU, so these 2 registers are both left as 0. Therefore, only fix EBX and encode it as 0 for Intel.

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Tao Su <tao1.su@linux.intel.com>
Link: https://lore.kernel.org/r/20250627035129.2755537-3-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-14  i386/cpu: Mark EBX/ECX/EDX in CPUID 0x80000000 leaf as reserved for Intel  (Zhao Liu; 1 file, -3/+9)
Per SDM,

    80000000H  EAX  Maximum Input Value for Extended Function CPUID Information.
               EBX  Reserved.
               ECX  Reserved.
               EDX  Reserved.

EBX/ECX/EDX in the CPUID 0x80000000 leaf are reserved. Intel uses the 0x0 leaf to encode the vendor.

Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Tao Su <tao1.su@linux.intel.com>
Link: https://lore.kernel.org/r/20250627035129.2755537-2-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Enable 0x1f leaf for YongFeng by default  (Zhao Liu; 1 file, -1/+5)
The host YongFeng CPU has the 0x1f leaf by default, so enable it for the Guest CPU by default as well.

Suggested-by: Ewan Hai <ewanhai-oc@zhaoxin.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-10-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Enable 0x1f leaf for SapphireRapids by default  (Zhao Liu; 1 file, -1/+5)
The host SapphireRapids CPU has the 0x1f leaf by default, so enable it for the Guest CPU by default as well.

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-9-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Enable 0x1f leaf for GraniteRapids by default  (Zhao Liu; 1 file, -1/+5)
The host GraniteRapids CPU has the 0x1f leaf by default, so enable it for the Guest CPU by default as well.

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-8-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Enable 0x1f leaf for SierraForest by default  (Zhao Liu; 1 file, -0/+1)
The host SierraForest CPU has the 0x1f leaf by default, so enable it for the Guest CPU by default as well.

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-7-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Enable 0x1f leaf for SierraForest by default  (Zhao Liu; 1 file, -1/+4)
The host SierraForest CPU has the 0x1f leaf by default, so enable it for the Guest CPU by default as well.

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-7-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Add a "x-force-cpuid-0x1f" property  (Manish Mishra; 1 file, -0/+1)
Add a "x-force-cpuid-0x1f" property so that CPU models can enable it and have the 0x1f CPUID leaf naturally, like the Host CPU. The advantage is that when the CPU model's cache model is already consistent with the Host CPU (for example, SRF defaults to l2 per module & l3 per package), 0x1f can better help users identify the topology in the VM.

Adding 0x1f for specific CPU models should not cause any trouble in principle. This property is only enabled for CPU models that already have the 0x1f leaf on the Host, so software that originally runs normally on the Host won't encounter issues in the Guest with the corresponding CPU model. Conversely, some software that relies on checking 0x1f might have problems in the Guest due to the lack of 0x1f [*]. In summary, adding 0x1f is also intended to further emulate the Host CPU environment.

[*]: https://lore.kernel.org/qemu-devel/PH0PR02MB738410511BF51B12DB09BE6CF6AC2@PH0PR02MB7384.namprd02.prod.outlook.com/

Signed-off-by: Manish Mishra <manish.mishra@nutanix.com>
Co-authored-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
[Integrated and rebased 2 previous patches (ordered by post time)]
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-6-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Introduce cache model for YongFeng  (Ewan Hai; 1 file, -0/+104)
Add the cache model to YongFeng (v3) to better emulate its environment. Note, although YongFeng v2 was added after v10.0, it was also back ported to v10.0.2. Therefore, the new version (v3) is needed to avoid conflict. The cache model is as follows: --- cache 0 --- cache type = data cache (1) cache level = 0x1 (1) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x0 (0) maximum IDs for cores in pkg = 0x0 (0) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x8 (8) number of sets = 0x40 (64) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets (s) = 64 (size synth) = 32768 (32 KB) --- cache 1 --- cache type = instruction cache (2) cache level = 0x1 (1) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x0 (0) maximum IDs for cores in pkg = 0x0 (0) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x10 (16) number of sets = 0x40 (64) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets (s) = 64 (size synth) = 65536 (64 KB) --- cache 2 --- cache type = unified cache (3) cache level = 0x2 (2) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x0 (0) maximum IDs for cores in pkg = 0x0 (0) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x8 (8) number of sets = 0x200 (512) WBINVD/INVD acts on lower caches = false inclusive to lower caches = true complex cache indexing = false number of sets (s) = 512 (size synth) = 262144 (256 KB) --- cache 3 --- cache type = unified cache (3) cache level = 0x3 (3) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x0 (0) maximum IDs for cores in pkg = 0x0 (0) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x10 (16) number of sets = 0x2000 (8192) WBINVD/INVD acts on lower caches = true inclusive to lower caches = true complex cache indexing = false number of sets (s) = 8192 (size synth) = 8388608 (8 MB) --- cache 4 --- cache type = no more caches (0) Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Ewan Hai <ewanhai-oc@zhaoxin.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711104603.1634832-5-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Introduce cache model for SapphireRapids  (Zhao Liu; 1 file, -0/+96)
Add the cache model to SapphireRapids (v4) to better emulate its environment. The cache model is based on SapphireRapids-SP (Scalable Performance): --- cache 0 --- cache type = data cache (1) cache level = 0x1 (1) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x1 (1) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0xc (12) number of sets = 0x40 (64) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets (s) = 64 (size synth) = 49152 (48 KB) --- cache 1 --- cache type = instruction cache (2) cache level = 0x1 (1) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x1 (1) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x8 (8) number of sets = 0x40 (64) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets (s) = 64 (size synth) = 32768 (32 KB) --- cache 2 --- cache type = unified cache (3) cache level = 0x2 (2) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x1 (1) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x10 (16) number of sets = 0x800 (2048) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets (s) = 2048 (size synth) = 2097152 (2 MB) --- cache 3 --- cache type = unified cache (3) cache level = 0x3 (3) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x7f (127) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0xf (15) number of sets = 0x10000 (65536) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = true number of sets (s) = 65536 (size synth) = 62914560 (60 MB) --- cache 4 --- cache type = no more caches (0) Suggested-by: Tejus GK <tejus.gk@nutanix.com> Suggested-by: Jason Zeng <jason.zeng@intel.com> Suggested-by: "Daniel P . Berrangé" <berrange@redhat.com> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Reviewed-by: Tao Su <tao1.su@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711104603.1634832-4-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Introduce cache model for GraniteRapids  (Zhao Liu; 1 file, -0/+96)
Add the cache model to GraniteRapids (v3) to better emulate its environment. The cache model is based on GraniteRapids-SP (Scalable Performance): --- cache 0 --- cache type = data cache (1) cache level = 0x1 (1) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x1 (1) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0xc (12) number of sets = 0x40 (64) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets (s) = 64 (size synth) = 49152 (48 KB) --- cache 1 --- cache type = instruction cache (2) cache level = 0x1 (1) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x1 (1) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x10 (16) number of sets = 0x40 (64) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets (s) = 64 (size synth) = 65536 (64 KB) --- cache 2 --- cache type = unified cache (3) cache level = 0x2 (2) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x1 (1) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x10 (16) number of sets = 0x800 (2048) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets (s) = 2048 (size synth) = 2097152 (2 MB) --- cache 3 --- cache type = unified cache (3) cache level = 0x3 (3) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0xff (255) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x10 (16) number of sets = 0x48000 (294912) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = true number of sets (s) = 294912 (size synth) = 301989888 (288 MB) --- cache 4 --- cache type = no more caches (0) Suggested-by: Tejus GK <tejus.gk@nutanix.com> Suggested-by: Jason Zeng <jason.zeng@intel.com> Suggested-by: "Daniel P . Berrangé" <berrange@redhat.com> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Reviewed-by: Tao Su <tao1.su@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711104603.1634832-3-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Introduce cache model for SierraForest  (Zhao Liu; 1 file, -0/+96)
Add the cache model to SierraForest (v3) to better emulate its environment. The cache model is based on SierraForest-SP (Scalable Performance): --- cache 0 --- cache type = data cache (1) cache level = 0x1 (1) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x0 (0) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x8 (8) number of sets = 0x40 (64) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets (s) = 64 (size synth) = 32768 (32 KB) --- cache 1 --- cache type = instruction cache (2) cache level = 0x1 (1) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x0 (0) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x8 (8) number of sets = 0x80 (128) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets (s) = 128 (size synth) = 65536 (64 KB) --- cache 2 --- cache type = unified cache (3) cache level = 0x2 (2) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x7 (7) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0x10 (16) number of sets = 0x1000 (4096) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = false number of sets (s) = 4096 (size synth) = 4194304 (4 MB) --- cache 3 --- cache type = unified cache (3) cache level = 0x3 (3) self-initializing cache level = true fully associative cache = false maximum IDs for CPUs sharing cache = 0x1ff (511) maximum IDs for cores in pkg = 0x3f (63) system coherency line size = 0x40 (64) physical line partitions = 0x1 (1) ways of associativity = 0xc (12) number of sets = 0x24000 (147456) WBINVD/INVD acts on lower caches = false inclusive to lower caches = false complex cache indexing = true number of sets (s) = 147456 (size synth) = 113246208 (108 MB) --- cache 4 --- cache type = no more caches (0) Suggested-by: Tejus GK <tejus.gk@nutanix.com> Suggested-by: Jason Zeng <jason.zeng@intel.com> Suggested-by: "Daniel P . Berrangé" <berrange@redhat.com> Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Reviewed-by: Tao Su <tao1.su@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711104603.1634832-2-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
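As a rough illustration of how such a dump maps onto a declarative cache model, here is a sketch using an invented struct (not QEMU's CPUCacheInfo) filled with the SierraForest L2 values listed above:

    #include <stdint.h>

    struct cache_model {
        int level;               /* 1, 2 or 3 */
        int line_size;           /* system coherency line size, bytes */
        int associativity;       /* ways */
        int partitions;          /* physical line partitions */
        int sets;
        int size;                /* line_size * associativity * partitions * sets */
        int sharing_threads;     /* "max IDs for CPUs sharing cache" + 1 */
    };

    /* SierraForest-SP L2: 64 B lines * 16 ways * 1 partition * 4096 sets = 4 MB,
     * shared by 8 logical processors (0x7 + 1), i.e. per module. */
    static const struct cache_model srf_l2 = {
        .level = 2, .line_size = 64, .associativity = 16,
        .partitions = 1, .sets = 4096, .size = 4 * 1024 * 1024,
        .sharing_threads = 8,
    };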
2025-07-12  i386/cpu: Use a unified cache_info in X86CPUState  (Zhao Liu; 2 files, -128/+27)
At present, all cases using the cache model (CPUID 0x2, 0x4, 0x80000005, 0x80000006 and 0x8000001D leaves) have been verified to be able to select either cache_info_intel or cache_info_amd based on the vendor. Therefore, further merge cache_info_intel and cache_info_amd into a unified cache_info in X86CPUState, and during its initialization, set different legacy cache models based on the vendor. Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-19-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Select legacy cache model based on vendor in CPUID 0x8000001D  (Zhao Liu; 1 file, -5/+21)
As preparation for merging cache_info_cpuid4 and cache_info_amd in X86CPUState, set legacy cache model based on vendor in the CPUID 0x8000001D leaf. For AMD CPU, select legacy AMD cache model (in cache_info_amd) as the default cache model like before, otherwise, select legacy Intel cache model (in cache_info_cpuid4). In fact, for Intel (and Zhaoxin) CPU, this change is safe because the extended CPUID level supported by Intel is up to 0x80000008. So Intel Guest doesn't have this 0x8000001D leaf. Although someone could bump "xlevel" up to 0x8000001D for Intel Guest, it's meaningless and this is undefined behavior. This leaf should be considered reserved, but the SDM does not explicitly state this. So, there's no need to specifically use vendor_cpuid_only_v2 to fix anything, as it doesn't even qualify as a fix since nothing is currently broken. Therefore, it is acceptable to select the default legacy cache model based on the vendor. For the CPUID 0x8000001D leaf, in X86CPUState, a unified cache_info is enough. It only needs to be initialized and configured with the corresponding legacy cache model based on the vendor. Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-18-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Select legacy cache model based on vendor in CPUID 0x80000006  (Zhao Liu; 1 file, -5/+31)
As preparation for merging cache_info_cpuid4 and cache_info_amd in X86CPUState, set the legacy cache model based on the vendor in the CPUID 0x80000006 leaf. For an AMD CPU, select the legacy AMD cache model (in cache_info_amd) as the default cache model like before; otherwise, select the legacy Intel cache model (in cache_info_cpuid4).

To ensure compatibility is not broken, add an enable_legacy_vendor_cache flag based on x-vendor-only-v2 to indicate cases where the legacy cache model should be used regardless of the vendor. For the CPUID 0x80000006 leaf, the enable_legacy_vendor_cache flag indicates to pick the legacy AMD cache model, which is for compatibility with the behavior of PC machine v10.0 and older.

The following explains how the current vendor-based default legacy cache model ensures correctness without breaking compatibility. A compact code sketch of the selection rule follows below.

* For the PC machine v6.0 and older, vendor_cpuid_only=false, and vendor_cpuid_only_v2=false.
  - If the named CPU model has its own cache model, and doesn't use the legacy cache model (legacy_cache=false), then cache_info_cpuid4 and cache_info_amd are the same, so the 0x80000006 leaf uses its own cache model regardless of the vendor.
  - For max/host/named CPU (without its own cache model), the flag enable_legacy_vendor_cache is true, and they will use the legacy AMD cache model just like their previous behavior.

* For the PC machine v10.0 and older (to v6.1), vendor_cpuid_only=true, and vendor_cpuid_only_v2=false.
  - No change, since this leaf isn't aware of vendor_cpuid_only.

* For the PC machine v10.1 and newer, vendor_cpuid_only=true, and vendor_cpuid_only_v2=true.
  - If the named CPU model has its own cache model (legacy_cache=false), then cache_info_cpuid4 & cache_info_amd both equal its own cache model, so it uses its own cache model in the 0x80000006 leaf regardless of the vendor. Intel and Zhaoxin CPUs have their special encoding based on the SDM, which is the expected behavior and no different from before.
  - For max/host/named CPU (without its own cache model), the flag enable_legacy_vendor_cache is false, and the legacy cache model is selected based on the vendor. For an AMD CPU, it will use the legacy AMD cache as before. For a non-AMD (Intel/Zhaoxin) CPU, it will use the legacy Intel cache and be encoded based on the SDM as expected. Here, selecting the legacy cache model based on the vendor does not change the previous (before the change) behavior.

Therefore, the above analysis proves that, with the help of the flag enable_legacy_vendor_cache, it is acceptable to select the default legacy cache model based on the vendor.

For the CPUID 0x80000006 leaf, a unified cache_info in X86CPUState is enough. It only needs to be initialized and configured with the corresponding legacy cache model based on the vendor.

Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711102143.1622339-17-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
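A compact sketch of the selection rule described in this series (illustrative names; the real code picks between QEMU's legacy cache structures):

    #include <stdbool.h>

    enum legacy_cache_model { LEGACY_INTEL_CACHE, LEGACY_AMD_CACHE };

    static enum legacy_cache_model
    pick_legacy_cache_0x80000006(bool enable_legacy_vendor_cache, bool vendor_is_amd)
    {
        if (enable_legacy_vendor_cache) {
            /* PC machine v10.0 and older: this leaf always used the AMD model. */
            return LEGACY_AMD_CACHE;
        }
        return vendor_is_amd ? LEGACY_AMD_CACHE : LEGACY_INTEL_CACHE;
    }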
2025-07-12  i386/cpu: Select legacy cache model based on vendor in CPUID 0x80000005  (Zhao Liu; 1 file, -3/+32)
As preparation for merging cache_info_cpuid4 and cache_info_amd in X86CPUState, set the legacy cache model based on the vendor in the CPUID 0x80000005 leaf. For an AMD CPU, select the legacy AMD cache model (in cache_info_amd) as the default cache model like before; otherwise, select the legacy Intel cache model (in cache_info_cpuid4).

To ensure compatibility is not broken, add an enable_legacy_vendor_cache flag based on x-vendor-only-v2 to indicate cases where the legacy cache model should be used regardless of the vendor. For the CPUID 0x80000005 leaf, the enable_legacy_vendor_cache flag indicates to pick the legacy AMD cache model, which is for compatibility with the behavior of PC machine v10.0 and older.

The following explains how the current vendor-based default legacy cache model ensures correctness without breaking compatibility.

* For the PC machine v6.0 and older, vendor_cpuid_only=false, and vendor_cpuid_only_v2=false.
  - If the named CPU model has its own cache model, and doesn't use the legacy cache model (legacy_cache=false), then cache_info_cpuid4 and cache_info_amd are the same, so the 0x80000005 leaf uses its own cache model regardless of the vendor.
  - For max/host/named CPU (without its own cache model), the flag enable_legacy_vendor_cache is true, and they will use the legacy AMD cache model just like their previous behavior.

* For the PC machine v10.0 and older (to v6.1), vendor_cpuid_only=true, and vendor_cpuid_only_v2=false.
  - No change, since this leaf isn't aware of vendor_cpuid_only.

* For the PC machine v10.1 and newer, vendor_cpuid_only=true, and vendor_cpuid_only_v2=true.
  - If the named CPU model has its own cache model (legacy_cache=false), then cache_info_cpuid4 & cache_info_amd both equal its own cache model, so it uses its own cache model in the 0x80000005 leaf regardless of the vendor. Only Intel CPUs have the all-0 leaf due to vendor_cpuid_only_v2=true, and this is exactly the expected behavior.
  - For max/host/named CPU (without its own cache model), the flag enable_legacy_vendor_cache is false, and the legacy cache model is selected based on the vendor. For an AMD CPU, it will use the legacy AMD cache as expected. For an Intel CPU, it will use the legacy Intel cache but still get the all-0 leaf due to vendor_cpuid_only_v2=true, as expected. (Note) And for a Zhaoxin CPU, it will use the legacy Intel cache model instead of AMD's. This is the difference brought by this change! But it's correct, since Zhaoxin then has consistent cache info in the CPUID 0x2, 0x4 and 0x80000005 leaves.

Here, except for Zhaoxin, selecting the legacy cache model based on the vendor does not change the previous (before the change) behavior. And the change for Zhaoxin is also a good improvement.

Therefore, the above analysis proves that, with the help of the flag enable_legacy_vendor_cache, it is acceptable to select the default legacy cache model based on the vendor.

For the CPUID 0x80000005 leaf, a unified cache_info in X86CPUState is enough. It only needs to be initialized and configured with the corresponding legacy cache model based on the vendor.

Cc: EwanHai <ewanhai-oc@zhaoxin.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711102143.1622339-16-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Select legacy cache model based on vendor in CPUID 0x4  (Zhao Liu; 1 file, -9/+34)
As preparation for merging cache_info_cpuid4 and cache_info_amd in X86CPUState, set legacy cache model based on vendor in the CPUID 0x4 leaf. For AMD CPU, select legacy AMD cache model (in cache_info_amd) as the default cache model, otherwise, select legacy Intel cache model (in cache_info_cpuid4) as before. To ensure compatibility is not broken, add an enable_legacy_vendor_cache flag based on x-vendor-only-v2 to indicate cases where the legacy cache model should be used regardless of the vendor. For CPUID 0x4 leaf, enable_legacy_vendor_cache flag indicates to pick legacy Intel cache model, which is for compatibility with the behavior of PC machine v10.0 and older. The following explains how current vendor-based default legacy cache model ensures correctness without breaking compatibility. * For the PC machine v6.0 and older, vendor_cpuid_only=false, and vendor_cpuid_only_v2=false. - If the named CPU model has its own cache model, and doesn't use legacy cache model (legacy_cache=false), then cache_info_cpuid4 and cache_info_amd are same, so 0x4 leaf uses its own cache model regardless of the vendor. - For max/host/named CPU (without its own cache model), then the flag enable_legacy_vendor_cache is true, they will use legacy Intel cache model just like their previous behavior. * For the PC machine v10.0 and older (to v6.1), vendor_cpuid_only=true, and vendor_cpuid_only_v2=false. - If the named CPU model has its own cache model (legacy_cache=false), then cache_info_cpuid4 & cache_info_amd both equal to its own cache model, so it uses its own cache model in 0x4 leaf regardless of the vendor. Only AMD CPUs have all-0 leaf due to vendor_cpuid_only=true, and this is exactly the behavior of these old machines. - For max/host/named CPU (without its own cache model), then the flag enable_legacy_vendor_cache is true, they will use legacy Intel cache model. Similarly, only AMD CPUs have all-0 leaf, and this is exactly the behavior of these old machines. * For the PC machine v10.1 and newer, vendor_cpuid_only=true, and vendor_cpuid_only_v2=true. - If the named CPU model has its own cache model (legacy_cache=false), then cache_info_cpuid4 & cache_info_amd both equal to its own cache model, so it uses its own cache model in 0x4 leaf regardless of the vendor. And AMD CPUs have all-0 leaf. Nothing will change. - For max/host/named CPU (without its own cache model), then the flag enable_legacy_vendor_cache is false, the legacy cache model is selected based on vendor. For AMD CPU, it will use legacy AMD cache but still get all-0 leaf due to vendor_cpuid_only=true. For non-AMD (Intel/Zhaoxin) CPU, it will use legacy Intel cache as expected. Here, selecting the legacy cache model based on the vendor does not change the previous (before the change) behavior. Therefore, the above analysis proves that, with the help of the flag enable_legacy_vendor_cache, it is acceptable to select the default legacy cache model based on the vendor. For the CPUID 0x4 leaf, in X86CPUState, a unified cache_info is enough. It only needs to be initialized and configured with the corresponding legacy cache model based on the vendor. Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-15-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Select legacy cache model based on vendor in CPUID 0x2  (Zhao Liu; 2 files, -10/+38)
As preparation for merging cache_info_cpuid4 and cache_info_amd in X86CPUState, set legacy cache model based on vendor in the CPUID 0x2 leaf. For AMD CPU, select legacy AMD cache model (in cache_info_amd) as the default cache model, otherwise, select legacy Intel cache model (in cache_info_cpuid4) as before. To ensure compatibility is not broken, add an enable_legacy_vendor_cache flag based on x-vendor-only-v2 to indicate cases where the legacy cache model should be used regardless of the vendor. For CPUID 0x2 leaf, enable_legacy_vendor_cache flag indicates to pick legacy Intel cache model, which is for compatibility with the behavior of PC machine v10.0 and older. The following explains how current vendor-based default legacy cache model ensures correctness without breaking compatibility. * For the PC machine v6.0 and older, vendor_cpuid_only=false, and vendor_cpuid_only_v2=false. - If the named CPU model has its own cache model, and doesn't use legacy cache model (legacy_cache=false), then cache_info_cpuid4 and cache_info_amd are same, so 0x2 leaf uses its own cache model regardless of the vendor. - For max/host/named CPU (without its own cache model), then the flag enable_legacy_vendor_cache is true, they will use legacy Intel cache model just like their previous behavior. * For the PC machine v10.0 and older (to v6.1), vendor_cpuid_only=true, and vendor_cpuid_only_v2=false. - If the named CPU model has its own cache model (legacy_cache=false), then cache_info_cpuid4 & cache_info_amd both equal to its own cache model, so it uses its own cache model in 0x2 leaf regardless of the vendor. Only AMD CPUs have all-0 leaf due to vendor_cpuid_only=true, and this is exactly the behavior of these old machines. - For max/host/named CPU (without its own cache model), then the flag enable_legacy_vendor_cache is true, they will use legacy Intel cache model. Similarly, only AMD CPUs have all-0 leaf, and this is exactly the behavior of these old machines. * For the PC machine v10.1 and newer, vendor_cpuid_only=true, and vendor_cpuid_only_v2=true. - If the named CPU model has its own cache model (legacy_cache=false), then cache_info_cpuid4 & cache_info_amd both equal to its own cache model, so it uses its own cache model in 0x2 leaf regardless of the vendor. And AMD CPUs have all-0 leaf. Nothing will change. - For max/host/named CPU (without its own cache model), then the flag enable_legacy_vendor_cache is false, the legacy cache model is selected based on vendor. For AMD CPU, it will use legacy AMD cache but still get all-0 leaf due to vendor_cpuid_only=true. For non-AMD (Intel/Zhaoxin) CPU, it will use legacy Intel cache as expected. Here, selecting the legacy cache model based on the vendor does not change the previous (before the change) behavior. Therefore, the above analysis proves that, with the help of the flag enable_legacy_vendor_cache, it is acceptable to select the default legacy cache model based on the vendor. For the CPUID 0x2 leaf, in X86CPUState, a unified cache_info is enough. It only needs to be initialized and configured with the corresponding legacy cache model based on the vendor. Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-14-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Add legacy_amd_cache_info cache model  (Zhao Liu; 1 file, -59/+53)
Based on legacy_l1d_cachei_amd, legacy_l1i_cache_amd, legacy_l2_cache_amd and legacy_l3_cache, build a complete legacy AMD cache model, which can clarify the purpose of these trivial legacy cache models, simplify the initialization of cache info in X86CPUState, and make it easier to handle compatibility later. Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-13-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Add legacy_intel_cache_info cache model  (Zhao Liu; 1 file, -47/+54)
Based on legacy_l1d_cache, legacy_l1i_cache, legacy_l2_cache and legacy_l3_cache, build a complete legacy intel cache model, which can clarify the purpose of these trivial legacy cache models, simplify the initialization of cache info in X86CPUState, and make it easier to handle compatibility later. Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-12-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Fix CPUID[0x80000006] for Intel CPU  (Zhao Liu; 1 file, -4/+12)
Per SDM, Intel supports CPUID[0x80000006], but only L2 information is encoded in ECX (note that L2 associativity field encoding rules consistent with AMD are used); all other fields are reserved. Therefore, make the following changes to CPUID[0x80000006]:

* Check the vendor in CPUID[0x80000006] and just encode L2 into ECX for Intel.
* Drop the lines_per_tag assertion, since AMD supports this field but Intel doesn't, and this field can be easily checked via the cpuid tool in the Guest.
* Apply the Intel encoding change to Zhaoxin as well [1].

This fix also resolves the FIXME of legacy_l2_cache_amd:

    /*FIXME: CPUID leaf 0x80000006 is inconsistent with leaves 2 & 4 */

In addition, per AMD's APM, update the comment of CPUID[0x80000006].

[1]: https://lore.kernel.org/qemu-devel/c522ebb5-04d5-49c6-9ad8-d755b8998988@zhaoxin.com/

Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711102143.1622339-11-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
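Assuming the usual CPUID.80000006H ECX layout (bits 31:16 L2 size in KB, bits 15:12 associativity encoding, bits 11:8 lines per tag on AMD / reserved on Intel, bits 7:0 line size), an Intel-only encoding might look like this sketch (the function name is illustrative):

    #include <stdint.h>

    static uint32_t intel_cpuid_80000006_ecx(uint32_t l2_size_kb,
                                             uint32_t assoc_encoding,
                                             uint32_t line_size)
    {
        /* Bits 11:8 (lines per tag) are left 0: reserved for Intel. */
        return (l2_size_kb << 16) | ((assoc_encoding & 0xf) << 12) |
               (line_size & 0xff);
    }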
2025-07-12  i386/cpu: Rename AMD_ENC_ASSOC to X86_ENC_ASSOC  (Zhao Liu; 1 file, -8/+8)
Rename AMD_ENC_ASSOC to X86_ENC_ASSOC since Intel also uses the same rules. There are some slight differences between the rules in AMD APM v4.07 no.40332 and Intel's, but considering the needs of current QEMU, they are generally consistent and the current AMD_ENC_ASSOC can be applied to Intel CPUs.

Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711102143.1622339-10-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
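The shared 4-bit associativity encoding (per the AMD APM table the macro is based on) can be sketched as a lookup like the following; the helper name is illustrative:

    #include <stdint.h>

    static uint32_t x86_enc_assoc(uint32_t ways)
    {
        switch (ways) {
        case 0:   return 0x0;    /* disabled */
        case 1:   return 0x1;    /* direct mapped */
        case 2:   return 0x2;
        case 4:   return 0x4;
        case 8:   return 0x6;
        case 16:  return 0x8;
        case 32:  return 0xA;
        case 48:  return 0xB;
        case 64:  return 0xC;
        case 96:  return 0xD;
        case 128: return 0xE;
        default:  return 0xF;    /* fully associative or not exactly representable */
        }
    }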
2025-07-12  i386/cpu: Mark CPUID[0x80000005] as reserved for Intel  (Zhao Liu; 1 file, -4/+8)
Per SDM, 0x80000005 leaf is reserved for Intel CPU, and its current "assert" check blocks adding new cache model for non-AMD CPUs. And please note, although Zhaoxin mostly follows Intel behavior, this leaf is an exception [1]. So, with the compat property "x-vendor-cpuid-only-v2", for the machine since v10.1, check the vendor and encode this leaf as all-0 only for Intel CPU. In addition, drop lines_per_tag assertion in encode_cache_cpuid80000005(), since Zhaoxin will use legacy Intel cache model in this leaf - which doesn't have this field. This fix also resolves 2 FIXMEs of legacy_l1d_cache_amd and legacy_l1i_cache_amd: /*FIXME: CPUID leaf 0x80000005 is inconsistent with leaves 2 & 4 */ In addition, per AMD's APM, update the comment of CPUID[0x80000005]. [1]: https://lore.kernel.org/qemu-devel/fa16f7a8-4917-4731-9d9f-7d4c10977168@zhaoxin.com/ Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-9-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Add x-vendor-cpuid-only-v2 option for compatibility  (Zhao Liu; 3 files, -1/+21)
Add a compat property "x-vendor-cpuid-only-v2" (for PC machine v10.0 and older) to keep the original behavior. This property will be used to adjust vendor specific CPUID fields. Make x-vendor-cpuid-only-v2 depend on x-vendor-cpuid-only. Although x-vendor-cpuid-only and v2 should be initernal only, QEMU doesn't support "internal" property. To avoid any other unexpected issues, check the dependency. Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-8-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Drop CPUID 0x2 specific cache info in X86CPUState  (Zhao Liu; 2 files, -21/+13)
With the pre-defined cache model legacy_intel_cpuid2_cache_info, for X86CPUState there's no need to cache special cache information for CPUID 0x2 leaf. Drop the cache_info_cpuid2 field of X86CPUState and use the legacy_intel_cpuid2_cache_info directly. Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-7-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Consolidate CPUID 0x4 leaf  (Zhao Liu; 1 file, -11/+37)
Modern Intel CPUs use CPUID 0x4 leaf to describe cache information and leave space in 0x2 for prefetch and TLBs (even TLB has its own leaf CPUID 0x18). And 0x2 leaf provides a descriptor 0xFF to instruct software to check cache information in 0x4 leaf instead. Therefore, follow this behavior to encode 0xFF when Intel CPU has 0x4 leaf with "x-consistent-cache=true" for compatibility. In addition, for older CPUs without 0x4 leaf, still enumerate the cache descriptor in 0x2 leaf, except the case that there's no descriptor matching the cache model, then directly encode 0xFF in 0x2 leaf. This makes sense, as in the 0x2 leaf era, all supported caches should have the corresponding descriptor. Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-6-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
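A minimal sketch of the descriptor selection rule described above (illustrative, not the QEMU implementation):

    #include <stdint.h>
    #include <stdbool.h>

    #define CACHE_DESCRIPTOR_SEE_LEAF_4  0xFF   /* "check leaf 0x4 instead" */

    static uint8_t cpuid2_descriptor(bool has_cpuid_0x4, uint8_t matched_descriptor)
    {
        if (has_cpuid_0x4) {
            return CACHE_DESCRIPTOR_SEE_LEAF_4;
        }
        /* Older CPUs without leaf 0x4: use the matching descriptor if any,
         * otherwise fall back to 0xFF. */
        return matched_descriptor ? matched_descriptor : CACHE_DESCRIPTOR_SEE_LEAF_4;
    }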
2025-07-12  i386/cpu: Present same cache model in CPUID 0x2 & 0x4  (Zhao Liu; 3 files, -2/+16)
For a long time, the default cache models used in CPUID 0x2 and 0x4 were inconsistent and had a FIXME note from Eduardo at commit 5e891bf8fd50 ("target-i386: Use #defines instead of magic numbers for CPUID cache info"): "/*FIXME: CPUID leaf 2 descriptor is inconsistent with CPUID leaf 4 */". This difference is wrong, in principle, both 0x2 and 0x4 are used for Intel's cache description. 0x2 leaf is used for ancient machines while 0x4 leaf is a subsequent addition, and both should be based on the same cache model. Furthermore, on real hardware, 0x4 leaf should be used in preference to 0x2 when it is available. Revisiting the git history, that difference occurred much earlier. Current legacy_l2_cache_cpuid2 (hardcode: "0x2c307d"), which is used for CPUID 0x2 leaf, is introduced in commit d8134d91d9b7 ("Intel cache info, by Filip Navara."). Its commit message didn't said anything, but its patch [1] mentioned the cache model chosen is "closest to the ones reported in the AMD registers". Now it is not possible to check which AMD generation this cache model is based on (unfortunately, AMD does not use 0x2 leaf), but at least it is close to the Pentium 4. In fact, the patch description of commit d8134d91d9b7 is also a bit wrong, the original cache model in leaf 2 is from Pentium Pro, and its cache descriptor had specified the cache line size ad 32 byte by default, while the updated cache model in commit d8134d91d9b7 has 64 byte line size. But after so many years, such judgments are no longer meaningful. On the other hand, for legacy_l2_cache, which is used in CPUID 0x4 leaf, is based on Intel Core Duo (patch [2]) and Core2 Duo (commit e737b32a3688 ("Core 2 Duo specification (Alexander Graf).") The patches of Core Duo and Core 2 Duo add the cache model for CPUID 0x4, but did not update CPUID 0x2 encoding. This is the reason that Intel Guests use two cache models in 0x2 and 0x4 all the time. Of course, while no Core Duo or Core 2 Duo machines have been found for double checking, this still makes no sense to encode different cache models on a single machine. Referring to the SDM and the real hardware available, 0x2 leaf can be directly encoded 0xFF to instruct software to go to 0x4 leaf to get the cache information, when 0x4 is available. Therefore, it's time to clean up Intel's default cache models. As the first step, add "x-consistent-cache" compat option to allow newer machines (v10.1 and newer) to have the consistent cache model in CPUID 0x2 and 0x4 leaves. This doesn't affect the CPU models with CPUID level < 4 ("486", "pentium", "pentium2" and "pentium3"), because they have already had the special default cache model - legacy_intel_cpuid2_cache_info. [1]: https://lore.kernel.org/qemu-devel/5b31733c0709081227w3e5f1036odbc649edfdc8c79b@mail.gmail.com/ [2]: https://lore.kernel.org/qemu-devel/478B65C8.2080602@csgraf.de/ Cc: Alexander Graf <agraf@csgraf.de> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-5-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Add default cache model for Intel CPUs with level < 4  (Zhao Liu; 1 file, -0/+65)
Old Intel CPUs with CPUID level < 4, use CPUID 0x2 leaf (if available) to encode cache information. Introduce a cache model "legacy_intel_cpuid2_cache_info" for the CPUs with CPUID level < 4, based on legacy_l1d_cache, legacy_l1i_cache, legacy_l2_cache_cpuid2 and legacy_l3_cache. But for L2 cache, this cache model completes self_init, sets, partitions, no_invd_sharing and share_level fields, referring legacy_l2_cache, to avoid someone increases CPUID level manually and meets assert() error. But the cache information present in CPUID 0x2 leaf doesn't change. This new cache model makes it possible to remove legacy_l2_cache_cpuid2 in X86CPUState and help to clarify historical cache inconsistency issue. Furthermore, apply this legacy cache model to all Intel CPUs with CPUID level < 4. This includes not only "pentium2" and "pentium3" (which have 0x2 leaf), but also "486" and "pentium" (which only have 0x1 leaf, and cache model won't be presented, just for simplicity). A legacy_intel_cpuid2_cache_info cache model doesn't change the cache information of the above CPUs, because they just depend on 0x2 leaf. Only when someone adjusts the min-level to >=4 will the cache information in CPUID leaf 4 differ from before: previously, the L2 cache information in CPUID leaf 0x2 and 0x4 was different, but now with legacy_intel_cpuid2_cache_info, the information they present will be consistent. This case almost never happens, emulating a CPUID that is not supported by the "ancient" hardware is itself meaningless behavior. Therefore, even though there's the above difference (for really rare case) and considering these old CPUs ("486", "pentium", "pentium2" and "pentium3") won't be used for migration, there's no need to add new versioned CPU models Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-4-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Add descriptor 0x49 for CPUID 0x2 encoding  (Zhao Liu; 1 file, -1/+12)
The legacy_l2_cache (2nd-level cache: 4 MByte, 16-way set associative, 64 byte line size) corresponds to descriptor 0x49, but at present cpuid2_cache_descriptors doesn't support descriptor 0x49 because it has multiple meanings. The 0x49 is necessary when CPUID 0x2 and 0x4 leaves have the consistent cache model, and use legacy_l2_cache as the default L2 cache. Therefore, add descriptor 0x49 to represent general L2 cache. Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-3-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Refine comment of CPUID2CacheDescriptorInfo  (Zhao Liu; 1 file, -9/+22)
Refer to SDM vol.3 table 1-21, add the notes about the missing descriptor, and fix the typo and comment format. Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-2-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/tdx: Don't mask off CPUID_EXT_PDCM  (Xiaoyao Li; 1 file, -1/+3)
The warning below is printed when booting TDX VMs:

    warning: TDX forcibly sets the feature: CPUID[eax=01h].ECX.pdcm [bit 15]

This is because CPUID_EXT_PDCM is fixed1 for TDX, and MSR_IA32_PERF_CAPABILITIES is supported for TDX guests unconditionally. Don't mask off CPUID_EXT_PDCM for TDX.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250625035710.2770679-1-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/tdx: Remove task->watch only when it's valid  (Xiaoyao Li; 1 file, -1/+3)
In some cases (e.g., failing to connect to the QGS socket), tdx_generate_quote_cleanup() is called with task->watch invalid, which triggers an assertion:

    qemu-system-x86_64: GLib: g_source_remove: assertion 'tag > 0' failed

Fix it by checking task->watch.

Fixes: 40da501d8989 ("i386/tdx: handle TDG.VP.VMCALL<GetQuote>")
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250625035505.2770580-1-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
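A sketch of the guard described in the commit (the struct is illustrative; g_source_remove() is the GLib call whose assertion was firing):

    #include <glib.h>

    struct quote_task {
        guint watch;             /* 0 means no watch was ever registered */
    };

    static void quote_task_cleanup(struct quote_task *task)
    {
        if (task->watch) {
            g_source_remove(task->watch);
            task->watch = 0;
        }
    }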
2025-07-12  i386/cpu: Unify family, model and stepping calculation for x86 CPU  (Xiaoyao Li; 4 files, -12/+38)
There are multiple places where CPUID family/model/stepping info is retrieved from env->cpuid_version. Besides, the calculation of family and model inside host_cpu_vendor_fms() doesn't comply with what Intel and AMD define. For family, both Intel and AMD define that the Extended Family ID needs to be counted only when the (base) Family is 0xF. For model, Intel counts the Extended Model when the (base) Family is 0x6 or 0xF, while AMD counts the Extended Model when the (base) Family is 0xF.

Introduce generic helper functions to get family, model and stepping from the EAX value of CPUID leaf 1, with the correct calculation formulas.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20250630080610.3151956-5-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
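The formulas above can be sketched as small helpers over CPUID.01H EAX (names are illustrative, not the new QEMU helpers):

    #include <stdint.h>
    #include <stdbool.h>

    static uint32_t x86_cpuid_family(uint32_t eax)
    {
        uint32_t family = (eax >> 8) & 0xf;

        if (family == 0xf) {
            family += (eax >> 20) & 0xff;   /* count Extended Family only here */
        }
        return family;
    }

    static uint32_t x86_cpuid_model(uint32_t eax, bool is_intel)
    {
        uint32_t family = (eax >> 8) & 0xf;
        uint32_t model = (eax >> 4) & 0xf;

        /* Intel: Extended Model counts when Family is 0x6 or 0xF;
         * AMD: only when Family is 0xF. */
        if (family == 0xf || (is_intel && family == 0x6)) {
            model += ((eax >> 16) & 0xf) << 4;
        }
        return model;
    }

    static uint32_t x86_cpuid_stepping(uint32_t eax)
    {
        return eax & 0xf;
    }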
2025-07-12  i386/kvm-cpu: Fix the indentation inside kvm_cpu_realizefn()  (Xiaoyao Li; 1 file, -1/+1)
The indentation of one of the '}' inside kvm_cpu_realizefn() isn't correct. Fix it.

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20250630080610.3151956-4-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386: Cleanup the usage of CPUID_VENDOR_INTEL_1  (Xiaoyao Li; 2 files, -3/+3)
There is code using "env->cpuid_vendor1 == CPUID_VENDOR_INTEL_1" to check whether a vCPU is an Intel one. Clean it up to just use IS_INTEL_CPU().

Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
Link: https://lore.kernel.org/r/20250630080610.3151956-3-xiaoyao.li@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Use CPUID_MODEL_ID_SZ instead of hardcoded 48  (Xiaoyao Li; 3 files, -6/+6)
There is already the MACRO CPUID_MODEL_ID_SZ defined in QEMU. Use it to replace all the hardcoded 48. Opportunistically fix the indentation of CPUID_VENDOR_SZ. Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Link: https://lore.kernel.org/r/20250630080610.3151956-2-xiaoyao.li@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Move the implementation of is_host_cpu_intel() to host-cpu.c  (Xiaoyao Li; 4 files, -10/+10)
It's more proper to put is_host_cpu_intel() in host-cpu.c instead of vmsr_energy.c. Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com> Link: https://lore.kernel.org/r/20250701075738.3451873-3-xiaoyao.li@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  sev: Provide sev_features flags from IGVM VMSA to KVM_SEV_INIT2  (Roy Hopkins; 6 files, -26/+163)
IGVM files can contain an initial VMSA that should be applied to each vcpu as part of the initial guest state. The sev_features flags are provided as part of the VMSA structure. However, KVM only allows sev_features to be set during initialization and not as the guest is being prepared for launch. This patch queries KVM for the supported set of sev_features flags and processes the VP context entries in the IGVM file during kvm_init to determine any sev_features flags set in the IGVM file. These are then provided in the call to KVM_SEV_INIT2 to ensure the guest state matches that specified in the IGVM file. The igvm process() function is modified to allow a partial processing of the file during initialization, with only the IGVM_VHT_VP_CONTEXT fields being processed. This means the function is called twice, firstly to extract the sev_features then secondly to actually configure the guest. Signed-off-by: Roy Hopkins <roy.hopkins@randomman.co.uk> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Stefano Garzarella <sgarzare@redhat.com> Acked-by: Gerd Hoffman <kraxel@redhat.com> Tested-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Ani Sinha <anisinha@redhat.com> Link: https://lore.kernel.org/r/b2f986aae04e1da2aee530c9be22a54c0c59a560.1751554099.git.roy.hopkins@randomman.co.uk Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/sev: Add implementation of CGS set_guest_policy()  (Roy Hopkins; 2 files, -0/+95)
The new cgs_set_guest_policy() function is provided to receive the guest policy flags, SNP ID block and SNP ID authentication from guest configuration such as an IGVM file and apply it to the platform prior to launching the guest. The policy is used to populate values for the existing 'policy', 'id_block' and 'id_auth' parameters. When provided, the guest policy is applied and the ID block configuration is used to verify the launch measurement and signatures. The guest is only successfully started if the expected launch measurements match the actual measurements and the signatures are valid. Signed-off-by: Roy Hopkins <roy.hopkins@randomman.co.uk> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Stefano Garzarella <sgarzare@redhat.com> Acked-by: Gerd Hoffman <kraxel@redhat.com> Reviewed-by: Ani Sinha <anisinha@redhat.com> Link: https://lore.kernel.org/r/99e82ddec4ad2970c790db8bea16ea3f57eb0e53.1751554099.git.roy.hopkins@randomman.co.uk Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  backends/igvm: Handle policy for SEV guests  (Roy Hopkins; 1 file, -0/+149)
Adds a handler for the guest policy initialization IGVM section and builds an SEV policy based on this information and the ID block directive if present. The policy is applied by calling 'set_guest_policy()' on the ConfidentialGuestSupport object.

Signed-off-by: Roy Hopkins <roy.hopkins@randomman.co.uk>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Stefano Garzarella <sgarzare@redhat.com>
Acked-by: Gerd Hoffman <kraxel@redhat.com>
Reviewed-by: Ani Sinha <anisinha@redhat.com>
Link: https://lore.kernel.org/r/57707230bef331b53e9366ce6a23ed25cd6f1293.1751554099.git.roy.hopkins@randomman.co.uk
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  backends/igvm: Process initialization sections in IGVM file  (Roy Hopkins; 1 file, -0/+21)
The initialization sections in IGVM files contain configuration that should be applied to the guest platform before it is started. This includes guest policy and other information that can affect the security level and the startup measurement of a guest. This commit introduces handling of the initialization sections during processing of the IGVM file. Signed-off-by: Roy Hopkins <roy.hopkins@randomman.co.uk> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Gerd Hoffman <kraxel@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Link: https://lore.kernel.org/r/9de24fb5df402024b40cbe02de0b13faa7cb4d84.1751554099.git.roy.hopkins@randomman.co.uk Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  backends/confidential-guest-support: Add set_guest_policy() function  (Roy Hopkins; 2 files, -0/+33)
For confidential guests a policy can be provided that defines the security level, debug status, expected launch measurement and other parameters that define the configuration of the confidential platform. This commit adds a new function named set_guest_policy() that can be implemented by each confidential platform, such as AMD SEV to set the policy. This will allow configuration of the policy from a multi-platform resource such as an IGVM file without the IGVM processor requiring specific implementation details for each platform. Signed-off-by: Roy Hopkins <roy.hopkins@randomman.co.uk> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Ani Sinha <anisinha@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Gerd Hoffman <kraxel@redhat.com> Link: https://lore.kernel.org/r/d3888a2eb170c8d8c85a1c4b7e99accf3a15589c.1751554099.git.roy.hopkins@randomman.co.uk Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  docs/interop/firmware.json: Add igvm to FirmwareDevice  (Roy Hopkins; 1 file, -2/+28)
Create an enum entry within FirmwareDevice for 'igvm' to describe that an IGVM file can be used to map firmware into memory as an alternative to pre-existing firmware devices. Signed-off-by: Roy Hopkins <roy.hopkins@randomman.co.uk> Acked-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Gerd Hoffman <kraxel@redhat.com> Reviewed-by: Stefano Garzarella <sgarzare@redhat.com> Reviewed-by: Ani Sinha <anisinha@redhat.com> Link: https://lore.kernel.org/r/2eca2611d372facbffa65ee8244cf2d321eb9d17.1751554099.git.roy.hopkins@randomman.co.uk Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>