2025-07-16  fsdev/9p-marshal: move G_GNUC_PRINTF to header  (Sean Wei; 2 files, -3/+3)
v9fs_string_sprintf() is annotated with G_GNUC_PRINTF(2, 3) in 9p-marshal.c, but the prototype in fsdev/9p-marshal.h is missing the attribute, so callers that include only the header do not get format checking. Move the annotation to the header and delete the duplicate in the source file. No behavior change. Signed-off-by: Sean Wei <me@sean.taipei> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Message-Id: <20250613.qemu.9p.01@sean.taipei> [CS: fix code style (max. 80 chars per line)] Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
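For reference, a minimal sketch of what the annotated prototype looks like once it lives in the header; this is an illustrative header fragment, not the exact 9p code:

  /* Annotating the prototype in the header gives every caller
   * -Wformat checking; G_GNUC_PRINTF(2, 3) says the format string
   * is argument 2 and the variadic arguments start at argument 3. */
  void v9fs_string_sprintf(V9fsString *str, const char *fmt, ...)
      G_GNUC_PRINTF(2, 3);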
2025-07-14  gdbstub: add the GDB register XML files for sparc64.  (Rot127; 4 files, -0/+102)
Signed-off-by: Rot127 <unisono@quyllur.org> Message-ID: <20250711155141.62916-2-unisono@quyllur.org> [AJB: clean up commit msg] Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
2025-07-14  docs/system: clean-up formatting of virtio-net-failover  (Alex Bennée; 1 file, -23/+28)
We didn't clean up the rst formatting when we moved this into the docs, so let's do that now:
- un-indent the usage/hotplug/migration paragraphs
- properly wrap the command line fragments in code-block
- highlight parameters in text with ``double quotes``
No changes to the actual text.
Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-ID: <20250710104531.3099313-8-alex.bennee@linaro.org>
2025-07-14  docs: use :kbd: role in sphinx docs  (Manos Pitsidianakis; 5 files, -44/+51)
Sphinx supports the :kbd: role for notating keyboard input. Input marked up this way is formatted as <kbd> HTML elements in the readthedocs theme we currently use for Sphinx. Besides the better visual formatting, it also helps with accessibility, as screen readers can announce the semantics of the <kbd> element to the user.
Signed-off-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Message-ID: <20250709-docs_rst_improvements-v2-1-cb5096ad0022@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-ID: <20250710104531.3099313-7-alex.bennee@linaro.org>
2025-07-14  plugins: fix inclusion of user-mode APIs  (Alex Bennée; 3 files, -1/+6)
In 903e870f24 (plugins/api: split out binary path/start/end/entry code) we didn't actually enable the building of the new plugin helper. However this was missed because only contrib plugins like drcov actually used the helpers. With that fixed we discover we also need some more includes to be able to extract the relevant data from TaskState. Fixes: 903e870f24 (plugins/api: split out binary path/start/end/entry code) Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3014 Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-ID: <20250710104531.3099313-6-alex.bennee@linaro.org>
2025-07-14  target/alpha: Add GDB XML feature file  (Yodel Eldar; 4 files, -0/+139)
This patch adds the GDB XML feature file that describes Alpha's core registers. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2569 Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Yodel Eldar <yodel.eldar@gmail.com> Message-ID: <20250630164124.26315-3-yodel.eldar@gmail.com> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-ID: <20250710104531.3099313-5-alex.bennee@linaro.org>
2025-07-14  contrib/plugins/execlog: Add tab to the separator search of insn_disas  (Yodel Eldar; 1 file, -6/+9)
Currently, execlog searches for a space separator between the instruction mnemonic and operands, but some disassemblers, e.g. Alpha's, use a tab separator instead; this results in a null pointer being passed as the haystack in g_strstr during a subsequent register search, i.e. undefined behavior, because of a missing null check. This patch adds tab to the separator search and a null check on the result. Also, an affected pointer is changed to const. Lastly, a break statement was added to immediately terminate the register search when a user-requested register is found in the current instruction as a trivial optimization, because searching for the remaining requested registers is unnecessary once one is found. Suggested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Yodel Eldar <yodel.eldar@gmail.com> Message-ID: <20250630164124.26315-2-yodel.eldar@gmail.com> Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-ID: <20250710104531.3099313-4-alex.bennee@linaro.org>
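A minimal sketch of the separator handling the fix describes; the code below is illustrative, not the actual execlog implementation:

  #include <string.h>

  /* Accept either a space or a tab between mnemonic and operands,
   * and return NULL instead of letting a null pointer reach the
   * register search as its haystack. */
  static const char *find_operands(const char *insn_disas)
  {
      const char *sep = strpbrk(insn_disas, " \t");
      return sep ? sep + 1 : NULL;
  }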
2025-07-14  gitlab: add -n option to check-units script  (Alex Bennée; 1 file, -3/+5)
Mostly a developer aid for those who want to look at the full backlog of multiple build units. Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-ID: <20250710104531.3099313-3-alex.bennee@linaro.org>
2025-07-14  gitlab: use argparse in check-units script  (Alex Bennée; 1 file, -9/+12)
Modernise the argument parsing so we can easily add to the script. Reviewed-by: Manos Pitsidianakis <manos.pitsidianakis@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Message-ID: <20250710104531.3099313-2-alex.bennee@linaro.org>
2025-07-14  i386/cpu: Honor maximum value for CPUID.8000001DH.EAX[25:14]  (Zhao Liu; 1 file, -1/+2)
CPUID.8000001DH:EAX[25:14] is "NumSharingCache", and the number of logical processors sharing this cache is the value of this field incremented by 1. Because of its width limitation, the maximum value currently supported is 4095. Though at present Q35 supports up to 4096 CPUs, by constructing a specific topology, the width of the APIC ID can be extended beyond 12 bits. For example, using `-smp threads=33,cores=9,modules=9` results in a die level offset of 6 + 4 + 4 = 14 bits, which can also cause overflow. Check and honor the maximum value as CPUID.04H did. Cc: Babu Moger <babu.moger@amd.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250714080859.1960104-8-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-14  i386/cpu: Fix overflow of cache topology fields in CPUID.04H  (Qian Wen; 1 file, -5/+11)
According to the SDM, CPUID.0x4:EAX[31:26] indicates the maximum number of addressable IDs for processor cores in the physical package. If we launch a VM with more than 64 cores, the 6-bit field overflows and the wrong core_id number is reported. Since the HW reports 0x3f when an Intel processor has over 64 cores, limit the max value written to EAX[31:26] to 63, so max num_cores should be 64.

For EAX[25:14], though at present Q35 supports up to 4096 CPUs, by constructing a specific topology the width of the APIC ID can be extended beyond 12 bits. For example, using `-smp threads=33,cores=9,modules=9` results in a die level offset of 6 + 4 + 4 = 14 bits, which can also cause overflow. Check and honor the maximum value for EAX[25:14] as well.

In addition, for the host-cache-info case, apply the same checks and fixes.
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Qian Wen <qian.wen@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250714080859.1960104-7-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
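Both of these fixes follow the same saturate-before-encode pattern; a self-contained sketch, with names invented for illustration:

  #include <stdint.h>

  /* CPUID.04H:EAX[31:26] holds (max addressable core IDs - 1) in a
   * 6-bit field, so the encoded value saturates at 0x3f, i.e. the
   * field can describe at most 64 cores. */
  static uint32_t encode_max_core_ids(uint32_t num_cores)
  {
      uint32_t field = num_cores - 1;
      if (field > 0x3f) {
          field = 0x3f;
      }
      return field << 26;
  }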
2025-07-14  i386/cpu: Fix cpu number overflow in CPUID.01H.EBX[23:16]  (Qian Wen; 1 file, -2/+7)
The legacy topology enumerated by CPUID.1.EBX[23:16] is defined in SDM Vol. 2: Bits 23-16: Maximum number of addressable IDs for logical processors in this physical package.

When threads_per_socket > 255, it will 1) overwrite bits[31:24], which hold the apic_id, and 2) truncate bits[23:16]. Specifically, if launching the VM with -smp 256, the value written to EBX[23:16] is 0 because of the overflow. If the guest only supports the legacy topology, without the V2 Extended Topology enumerated by CPUID.0x1f or the Extended Topology enumerated by CPUID.0x0b to support over 255 CPUs, the kernel's cpu_smt_allowed() returns false and the APs (application processors) fail to come up. Then only CPU 0 is online, and all the others are offline.

For example, launch the VM via:

  qemu-system-x86_64 -M q35,accel=kvm,kernel-irqchip=split \
      -cpu qemu64,cpuid-0xb=off -smp 256 -m 32G \
      -drive file=guest.img,if=none,id=virtio-disk0,format=raw \
      -device virtio-blk-pci,drive=virtio-disk0,bootindex=1 --nographic

The guest shows:

  CPU(s):               256
  On-line CPU(s) list:  0
  Off-line CPU(s) list: 1-255

To avoid this overflow, limit the max value written to EBX[23:16] to 255, as the HW does.
Cc: qemu-stable@nongnu.org
Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Qian Wen <qian.wen@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250714080859.1960104-6-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-14  i386/cpu: Fix number of addressable IDs field for CPUID.01H.EBX[23:16]  (Chuang Xu; 1 file, -1/+11)
When QEMU is started with:

  -cpu host,migratable=on,host-cache-info=on,l3-cache=off \
  -smp 180,sockets=2,dies=1,cores=45,threads=2

on an Intel platform: CPUID.01H.EBX[23:16] is defined as the "max number of addressable IDs for logical processors in the physical package". When executing "cpuid -1 -l 1 -r" in the guest, we obtain a value of 90 for CPUID.01H.EBX[23:16], whereas the expected value is 128. Additionally, executing "cpuid -1 -l 4 -r" in the guest yields a value of 63 for CPUID.04H.EAX[31:26], which matches the expected result.

As (1 + CPUID.04H.EAX[31:26]) rounds up to the nearest power-of-2 integer, it's necessary to round CPUID.01H.EBX[23:16] up to the nearest power-of-2 integer too. Otherwise there would be unexpected results in guests with older kernels. For example, when QEMU is started with the CLI above and xtopology is disabled, guest kernel 5.15.120 uses CPUID.01H.EBX[23:16] / (1 + CPUID.04H.EAX[31:26]) to calculate threads-per-core in detect_ht(). The guest then gets "90 / (1 + 63) = 1" as the result, even though threads-per-core should actually be 2.

On an AMD platform, CPUID.01H.EBX[23:16] is defined as the "Logical processor count", and the current result meets our expectation.

So round up CPUID.01H.EBX[23:16] to the nearest power-of-2 integer only for the Intel platform, and use the "x-vendor-cpuid-only-v2" compat option to gate this fix.
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Signed-off-by: Guixiong Wei <weiguixiong@bytedance.com>
Signed-off-by: Yipeng Yin <yinyipeng@bytedance.com>
Signed-off-by: Chuang Xu <xuchuangxclwt@bytedance.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250714080859.1960104-5-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
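QEMU has a pow2ceil() helper (include/qemu/host-utils.h) for exactly this rounding; a standalone sketch of the computation described above:

  #include <stdint.h>

  /* Round up to the next power of two, as QEMU's pow2ceil() does. */
  static uint32_t next_pow2(uint32_t value)
  {
      uint32_t ret = 1;
      while (ret < value) {
          ret <<= 1;
      }
      return ret;
  }

  /* 180 logical CPUs over 2 sockets -> 90 per package, encoded as
   * next_pow2(90) = 128; detect_ht() then computes 128 / (1 + 63) = 2
   * threads per core, matching the real topology. */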
2025-07-14  i386/cpu: Reorder CPUID leaves in cpu_x86_cpuid()  (Zhao Liu; 1 file, -30/+30)
Sort the CPUID leaves strictly by index to facilitate checking and changing. Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Tao Su <tao1.su@linux.intel.com> Link: https://lore.kernel.org/r/20250627035129.2755537-5-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-14  tests/vm: bump FreeBSD image to 14.3  (Paolo Bonzini; 1 file, -2/+2)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-14  tests/functional: test_x86_cpu_model_versions: remove dead tests  (Paolo Bonzini; 1 file, -98/+12)
Tests that require machines older than 4.2 are now unconditionally skipped. Remove them if they test legacy behavior, or use the latest machine if they test current behavior. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-14  i386/cpu: Mark CPUID 0x80000008 ECX bits[0:7] & [12:15] as reserved for Intel/Zhaoxin  (Zhao Liu; 1 file, -0/+11)
Per SDM, leaf 80000008H:

  EAX  Linear/Physical Address size.
       Bits 07-00: #Physical Address Bits*.
       Bits 15-08: #Linear Address Bits.
       Bits 31-16: Reserved = 0.
  EBX  Bits 08-00: Reserved = 0.
       Bit 09: WBNOINVD is available if 1.
       Bits 31-10: Reserved = 0.
  ECX  Reserved = 0.
  EDX  Reserved = 0.

ECX/EDX in the CPUID 0x80000008 leaf are reserved. Currently, in QEMU, only ECX bits[0:7] and ECX bits[12:15] are encoded, and both are emulated in QEMU. Considering that Intel and Zhaoxin are already using the 0x1f leaf to describe CPU topology, which includes similar information, Intel and Zhaoxin will not implement ECX bits[0:7] and bits[12:15] of 0x80000008. Therefore, mark these two fields as reserved and clear them for Intel and Zhaoxin guests.
Reviewed-by: Tao Su <tao1.su@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250714080859.1960104-3-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-14  i386/cpu: Mark CPUID 0x80000007[EBX] as reserved for Intel  (Zhao Liu; 1 file, -1/+5)
Per SDM, leaf 80000007H:

  EAX  Reserved = 0.
  EBX  Reserved = 0.
  ECX  Reserved = 0.
  EDX  Bits 07-00: Reserved = 0.
       Bit 08: Invariant TSC available if 1.
       Bits 31-09: Reserved = 0.

EAX/EBX/ECX in the CPUID 0x80000007 leaf are reserved for Intel. At present, EAX is reserved for AMD too, and AMD hasn't used ECX in QEMU, so these 2 registers are both left as 0. Therefore, only fix EBX and encode it as 0 for Intel.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Tao Su <tao1.su@linux.intel.com>
Link: https://lore.kernel.org/r/20250627035129.2755537-3-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-14  i386/cpu: Mark EBX/ECX/EDX in CPUID 0x80000000 leaf as reserved for Intel  (Zhao Liu; 1 file, -3/+9)
Per SDM, leaf 80000000H:

  EAX  Maximum Input Value for Extended Function CPUID Information.
  EBX  Reserved.
  ECX  Reserved.
  EDX  Reserved.

EBX/ECX/EDX in the CPUID 0x80000000 leaf are reserved; Intel uses the 0x0 leaf to encode the vendor.
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Reviewed-by: Tao Su <tao1.su@linux.intel.com>
Link: https://lore.kernel.org/r/20250627035129.2755537-2-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
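These three commits share one shape: when the guest vendor is Intel or Zhaoxin, registers the SDM marks reserved are forced to zero rather than inheriting AMD-style or host values. A hedged sketch of the pattern (names are illustrative; the real logic lives in cpu_x86_cpuid() in target/i386/cpu.c):

  #include <stdbool.h>
  #include <stdint.h>

  static void fill_leaf_0x80000000(bool vendor_is_amd, uint32_t xlevel,
                                   uint32_t *eax, uint32_t *ebx,
                                   uint32_t *ecx, uint32_t *edx)
  {
      *eax = xlevel;              /* max extended leaf */
      if (!vendor_is_amd) {
          /* Reserved for Intel/Zhaoxin: the vendor string comes from
           * leaf 0x0 instead. (AMD mirrors the vendor string here;
           * that path is omitted from this sketch.) */
          *ebx = *ecx = *edx = 0;
      }
  }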
2025-07-14  net/passt: Implement vhost-user backend support  (Laurent Vivier; 4 files, -2/+369)
This commit adds support for the vhost-user interface to the passt network backend, enabling high-performance, accelerated networking for guests using passt. The passt backend can now operate in a vhost-user mode, where it communicates with the guest's virtio-net device over a socket pair using the vhost-user protocol. This offloads the datapath from the main QEMU loop, significantly improving network performance. When the vhost-user=on option is used with -netdev passt, the new vhost initialization path is taken instead of the standard stream-based connection. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
2025-07-14  net: Add passt network backend  (Laurent Vivier; 12 files, -4/+731)
This commit introduces support for passt as a new network backend. passt is an unprivileged, user-mode networking solution that provides connectivity for virtual machines by launching an external helper process. The implementation reuses the generic stream data handling logic. It launches the passt binary using GSubprocess, passing it a file descriptor from a socketpair() for communication. QEMU connects to the other end of the socket pair to establish the network data stream. The PID of the passt daemon is tracked via a temporary file to ensure it is terminated when QEMU exits. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
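A condensed sketch of the launch mechanism described above, using GSubprocess and a socketpair(); the arguments and fd numbering are illustrative rather than the backend's exact code:

  #include <gio/gio.h>
  #include <sys/socket.h>
  #include <unistd.h>

  /* Start passt with one end of a socketpair; QEMU keeps the other
   * end as its network data stream. */
  static int launch_passt(GError **error)
  {
      int fds[2];

      if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) {
          return -1;
      }
      GSubprocessLauncher *launcher =
          g_subprocess_launcher_new(G_SUBPROCESS_FLAGS_NONE);
      /* Pass fds[1] into the child, where it shows up as fd 3. */
      g_subprocess_launcher_take_fd(launcher, fds[1], 3);
      GSubprocess *proc = g_subprocess_launcher_spawn(launcher, error,
                                                      "passt", "--fd", "3",
                                                      NULL);
      g_object_unref(launcher);
      if (!proc) {
          close(fds[0]);
          return -1;
      }
      /* The real backend also records the child's PID so it can be
       * terminated when QEMU exits. */
      g_object_unref(proc);
      return fds[0];  /* QEMU's end of the data stream */
  }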
2025-07-14  net: Add is_vhost_user flag to vhost_net struct  (Laurent Vivier; 7 files, -3/+13)
Introduce a boolean is_vhost_user field to the vhost_net structure. This flag is initialized during vhost_net_init based on whether the backend is vhost-user. This refactoring simplifies checks for vhost-user specific behavior, replacing direct comparisons of 'net->nc->info->type' with the new flag. It improves readability and encapsulates the backend type information directly within the vhost_net instance. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
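A sketch of the one-time initialization this describes (simplified fragment, not the verbatim code):

  /* Record the backend type once at init time, instead of
   * re-deriving it from nc->info->type at every call site. */
  net->is_vhost_user =
      (net->nc->info->type == NET_CLIENT_DRIVER_VHOST_USER);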
2025-07-14  net: Allow network backends to advertise max TX queue size  (Laurent Vivier; 7 files, -12/+18)
This commit refactors how the maximum transmit queue size for virtio-net devices is determined, making the mechanism more generic and extensible. Previously, virtio_net_max_tx_queue_size() contained hardcoded checks for specific network backend types (vhost-user and vhost-vdpa) to determine their supported maximum queue size. This created direct dependencies and would require modifications for every new backend that supports variable queue sizes. To improve flexibility, a new max_tx_queue_size field is added to the vhost_net structure. This allows each network backend to advertise its supported maximum transmit queue size directly. The virtio_net_max_tx_queue_size() function now retrieves the max TX queue size from the vhost_net struct, if available and set. Otherwise, it defaults to VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
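The resulting getter reduces to a fallback chain; a sketch of the dispatch, simplified from the description above rather than copied from the patch:

  /* Each backend fills vhost_net->max_tx_queue_size at init time;
   * 0 means "no preference", so fall back to the virtio-net default. */
  static int virtio_net_max_tx_queue_size(VirtIONet *n)
  {
      VHostNetState *net = get_vhost_net(qemu_get_queue(n->nic)->peer);

      if (net && net->max_tx_queue_size) {
          return net->max_tx_queue_size;
      }
      return VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
  }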
2025-07-14  net: Add save_acked_features callback to vhost_net  (Laurent Vivier; 9 files, -25/+13)
This commit introduces a save_acked_features function pointer to vhost_net and converts the vhost_net function into a generic dispatcher. The vhost-user backend provides the callback, making its function static. With this change, no other module has a direct dependency on the vhost-user implementation. This cleanup allows for the complete removal of the net/vhost-user.h header file. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
2025-07-14  net: Add get_acked_features callback to VhostNetOptions  (Laurent Vivier; 6 files, -7/+10)
This patch continues the effort to decouple the generic vhost layer from specific network backend implementations. Previously, the vhost_net initialization code contained a hardcoded check for the vhost-user client type to retrieve its acked features by calling vhost_user_get_acked_features(). This exposed an internal vhost-user function in a public header and coupled the two modules. The vhost-user backend is updated to provide a callback, and its getter function is now static. The call site in vhost_net.c is simplified to use the new generic helper, removing the type check and the direct dependency. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
2025-07-14  net: Consolidate vhost feature bits into vhost_net structure  (Laurent Vivier; 7 files, -90/+69)
Previously, the vhost_net_get_feature_bits() function in hw/net/vhost_net.c used a large switch statement to determine the appropriate feature bits based on the NetClientDriver type. This created unnecessary coupling between the generic vhost layer and specific network backends (like TAP, vhost-user, and vhost-vdpa). This patch moves the definition of vhost feature bits directly into the vhost_net structure for each relevant network client. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
2025-07-14  net: Add get_vhost_net callback to NetClientInfo  (Laurent Vivier; 9 files, -47/+24)
The get_vhost_net() function previously contained a large switch statement to find the VHostNetState pointer based on the net client's type. This created a tight coupling, requiring the generic vhost layer to be aware of every specific backend that supported vhost, such as tap, vhost-user, and vhost-vdpa. This approach is not scalable and requires modifying a central function for any new backend. It also forced each backend to expose its internal getter function in a public header file. This patch refactors the logic by introducing a new get_vhost_net function pointer to the NetClientInfo struct. The central get_vhost_net() function is now a simple, generic dispatcher that invokes the callback provided by the net client. Each backend now implements its own private getter and registers it in its NetClientInfo. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
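The central function then shrinks to a generic dispatcher; a sketch of the pattern:

  /* Each net client registers its own getter in NetClientInfo, so
   * the vhost layer needs no per-backend knowledge. */
  VHostNetState *get_vhost_net(NetClientState *nc)
  {
      if (!nc || !nc->info->get_vhost_net) {
          return NULL;
      }
      return nc->info->get_vhost_net(nc);
  }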
2025-07-14  vhost_net: Rename vhost_set_vring_enable() for clarity  (Laurent Vivier; 4 files, -6/+6)
This is a cosmetic change with no functional impact. The function vhost_set_vring_enable() is specific to vhost_net and is used outside of vhost_net.c (specifically, in hw/net/virtio-net.c). To prevent confusion with other similarly named vhost functions, such as the one found in cryptodev-vhost.c, it has been renamed to vhost_net_set_vring_enable(). This clarifies that the function belongs to the vhost_net module. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
2025-07-14  net: Define net_client_set_link()  (Laurent Vivier; 3 files, -14/+23)
The code to set the link status is currently located in qmp_set_link(). This function identifies the device by name, searches for the corresponding NetClientState, and then updates the link status. In some parts of the code, such as vhost-user.c, the NetClientState are already available. Calling qmp_set_link() from these locations leads to a redundant search for the clients. This patch refactors the logic by introducing a new function, net_client_set_link(), which accepts a NetClientState array directly. qmp_set_link() is simplified to be a wrapper that performs the client search and then calls the new function. The vhost-user implementation is updated to use net_client_set_link() directly, thereby eliminating the unnecessary client lookup. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
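A sketch of the resulting split (simplified; the real lookup and error reporting differ in detail):

  /* Core setter: callers that already hold the NetClientState array
   * skip the by-name lookup. */
  void net_client_set_link(NetClientState **ncs, int queues, bool up)
  {
      for (int i = 0; i < queues; i++) {
          ncs[i]->link_down = !up;
          if (ncs[i]->info->link_status_changed) {
              ncs[i]->info->link_status_changed(ncs[i]);
          }
      }
  }

  /* QMP wrapper: resolve the client(s) by name, then delegate. */
  void qmp_set_link(const char *name, bool up, Error **errp)
  {
      NetClientState *ncs[MAX_QUEUE_NUM];
      int queues = qemu_find_net_clients_except(name, ncs,
                                                NET_CLIENT_DRIVER__MAX,
                                                MAX_QUEUE_NUM);

      if (queues == 0) {
          error_setg(errp, "Device '%s' not found", name);
          return;
      }
      net_client_set_link(ncs, queues, up);
  }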
2025-07-14  net: Refactor stream logic for reuse in '-net passt'  (Laurent Vivier; 4 files, -219/+290)
To prepare for the implementation of '-net passt', this patch moves the generic stream handling functions from net/stream.c into new net/stream_data.c and net/stream_data.h files. This refactoring introduces a NetStreamData struct that encapsulates the generic fields and logic previously in NetStreamState. The NetStreamState now embeds NetStreamData and delegates the core stream operations to the new generic functions. To maintain flexibility for different users of this generic code, callbacks for send and listen operations are now passed via function pointers within the NetStreamData struct. This allows callers to provide their own specific implementations while reusing the common connection and data transfer logic. Signed-off-by: Laurent Vivier <lvivier@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
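A hedged sketch of the struct layout this implies; the field details are guesses for illustration, and net/stream_data.h holds the real definition:

  /* Generic stream state shared by the stream and passt backends. */
  typedef struct NetStreamData {
      NetClientState nc;
      QIOChannel *ioc;
      /* Callbacks let each user supply its own send/listen behavior
       * while reusing the common connect and transfer logic. */
      gboolean (*send)(QIOChannel *ioc, GIOCondition condition,
                       gpointer data);
      void (*listen)(QIONetListener *listener, QIOChannelSocket *cioc,
                     gpointer data);
  } NetStreamData;

  /* The stream backend embeds the generic part and keeps only its
   * own specifics alongside it. */
  typedef struct NetStreamState {
      NetStreamData data;
      /* stream-only fields (e.g. reconnect handling) follow */
  } NetStreamState;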
2025-07-14  virtio-net: Add queues for RSS during migration  (Akihiko Odaki; 3 files, -16/+19)
virtio_net_pre_load_queues() inspects vdev->guest_features to tell if VIRTIO_NET_F_RSS or VIRTIO_NET_F_MQ is enabled, to infer the required number of queues. This works for VIRTIO_NET_F_MQ but not for VIRTIO_NET_F_RSS, because only the lowest 32 bits of vdev->guest_features are set at that point, and VIRTIO_NET_F_RSS uses bit 60 while VIRTIO_NET_F_MQ uses bit 22. Instead of inferring the required number of queues from vdev->guest_features, use the number loaded from the vm state. This change also has a nice side effect of removing a duplicate peer queue pair change by circumventing virtio_net_set_multiqueue(). Also update the comment in include/hw/virtio/virtio.h to prevent an implementation of pre_load_queues() from referring to any fields being loaded during migration by accident in the future.
Fixes: 8c49756825da ("virtio-net: Add only one queue pair when realizing")
Tested-by: Lei Yang <leiyang@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Akihiko Odaki <akihiko.odaki@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
2025-07-14  net: fix buffer overflow in af_xdp_umem_create()  (Anastasia Belova; 1 file, -1/+1)
s->pool has n_descs elements so maximum i should be n_descs - 1. Fix the upper bound. Found by Linux Verification Center (linuxtesting.org) with SVACE. Fixes: cb039ef3d9 ("net: add initial support for AF_XDP network backend") Cc: qemu-stable@nongnu.org Reviewed-by: Ilya Maximets <i.maximets@ovn.org> Signed-off-by: Anastasia Belova <nabelova31@gmail.com> Signed-off-by: Jason Wang <jasowang@redhat.com>
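The underlying bug class is an inclusive upper bound; a self-contained sketch of the correct loop (not the af_xdp code itself):

  #include <stddef.h>
  #include <stdint.h>

  /* pool has n_descs elements, so valid indices are 0 .. n_descs - 1;
   * an "i <= n_descs" bound would write one element past the end. */
  static void fill_pool(uint64_t *pool, size_t n_descs,
                        uint64_t frame_size)
  {
      for (size_t i = 0; i < n_descs; i++) {
          pool[i] = i * frame_size;
      }
  }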
2025-07-13  docs/devel/tracing: Update trace.h creation rune to include SPDX  (Peter Maydell; 1 file, -1/+1)
checkpatch now checks that new files have an SPDX line. If you use the shell rune in tracing.rst to create a trace.h wrapper header, this triggers checkpatch to complain. Although these files are tiny, it's worth having the SPDX line to avoid having to add extra exception cases to checkpatch. Update the rune to include creating an SPDX line. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Michael Tokarev <mjt@tls.msk.ru> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2025-07-13  hw/uefi: Create and use trace.h wrapper header  (Peter Maydell; 5 files, -4/+6)
The documentation of the trace subsystem (docs/devel/tracing.rst) says that each subdirectory which uses trace events should create a wrapper trace.h file which includes the trace/trace-foo.h generated header, and that .c files then #include "trace.h". We didn't follow this pattern in hw/uefi/. Correct this by creating and using the trace.h wrapper header. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Michael Tokarev <mjt@tls.msk.ru> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
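The wrapper header itself is tiny; a sketch of what hw/uefi/trace.h plausibly contains, following the convention in docs/devel/tracing.rst (the generated header name is assumed from the directory name):

  /* SPDX-License-Identifier: GPL-2.0-or-later */
  #include "trace/trace-hw_uefi.h"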
2025-07-13  docs/system/target-i386: Remove the sentence about RHEL 7 being supported  (Thomas Huth; 1 file, -3/+1)
According to our "Supported build platforms" policy, RHEL 7 is not supported anymore, so let's remove the related sentence from the x86 documentation. Signed-off-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Zhao Liu <zhao1.liu@intel.com> Reviewed-by: Michael Tokarev <mjt@tls.msk.ru> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2025-07-13  accel/kvm: Adjust the note about the minimum required kernel version  (Thomas Huth; 1 file, -2/+1)
Since commit 126e7f78036 ("kvm: require KVM_CAP_IOEVENTFD and KVM_CAP_IOEVENTFD_ANY_LENGTH") we require at least kernel 4.5 to be able to use KVM. Adjust the upgrade_note accordingly. While we're at it, remove the text about kvm-kmod and the SourceForge URL since this is not actively maintained anymore. Fixes: 126e7f78036 ("kvm: require KVM_CAP_IOEVENTFD and KVM_CAP_IOEVENTFD_ANY_LENGTH") Signed-off-by: Thomas Huth <thuth@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Michael Tokarev <mjt@tls.msk.ru> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2025-07-13  docs: remove repeated word  (Andrew Kreimer; 1 file, -1/+1)
The word 'find' appears twice, remove the extra one. Signed-off-by: Andrew Kreimer <algonell@gmail.com> Reviewed-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Michael Tokarev <mjt@tls.msk.ru> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2025-07-13  hw/usb/dev-hid: Support side and extra mouse buttons for usb-tablet  (Thomas Lambertz; 1 file, -3/+3)
The necessary plumbing for side- and extra mouse buttons to reach usb-tablet is already done. But the descriptor advertises three buttons max. Increase this to 5. Buttons are now identical to usb-mouse. Signed-off-by: Thomas Lambertz <patch@thomaslambertz.de> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
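In HID report-descriptor terms the change is to the button usage range and report count; a sketch of such a fragment using standard HID encoding (not necessarily QEMU's exact byte layout):

  #include <stdint.h>

  static const uint8_t buttons_fragment[] = {
      0x05, 0x09,  /* Usage Page (Button)                    */
      0x19, 0x01,  /* Usage Minimum (1)                      */
      0x29, 0x05,  /* Usage Maximum (5) -- was 3             */
      0x15, 0x00,  /* Logical Minimum (0)                    */
      0x25, 0x01,  /* Logical Maximum (1)                    */
      0x95, 0x05,  /* Report Count (5) -- one bit per button */
      0x75, 0x01,  /* Report Size (1)                        */
      0x81, 0x02,  /* Input (Data, Variable, Absolute)       */
      0x95, 0x01,  /* Report Count (1)                       */
      0x75, 0x03,  /* Report Size (3) -- pad to a byte       */
      0x81, 0x01,  /* Input (Constant)                       */
  };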
2025-07-12  i386/cpu: Enable 0x1f leaf for YongFeng by default  (Zhao Liu; 1 file, -1/+5)
The host YongFeng CPU has the 0x1f leaf by default, so enable it for the guest CPU by default as well.
Suggested-by: Ewan Hai <ewanhai-oc@zhaoxin.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-10-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Enable 0x1f leaf for SapphireRapids by default  (Zhao Liu; 1 file, -1/+5)
The host SapphireRapids CPU has the 0x1f leaf by default, so enable it for the guest CPU by default as well.
Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-9-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Enable 0x1f leaf for GraniteRapids by default  (Zhao Liu; 1 file, -1/+5)
The host GraniteRapids CPU has the 0x1f leaf by default, so enable it for the guest CPU by default as well.
Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-8-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Enable 0x1f leaf for SierraForest by default  (Zhao Liu; 1 file, -1/+4)
The host SierraForest CPU has the 0x1f leaf by default, so enable it for the guest CPU by default as well.
Suggested-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-7-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Add a "x-force-cpuid-0x1f" property  (Manish Mishra; 1 file, -0/+1)
Add an "x-force-cpuid-0x1f" property so that CPU models can enable it and have the 0x1f CPUID leaf naturally, as the host CPU does. The advantage is that when the CPU model's cache model is already consistent with the host CPU (for example, SRF defaults to L2 per module & L3 per package), 0x1f can better help users identify the topology in the VM.

Adding 0x1f for specific CPU models should not cause any trouble in principle. This property is only enabled for CPU models that already have the 0x1f leaf on the host, so software that runs normally on the host won't encounter issues in the guest with the corresponding CPU model. Conversely, some software that relies on checking 0x1f might have problems in the guest due to the lack of 0x1f [*]. In summary, adding 0x1f is also intended to further emulate the host CPU environment.

[*]: https://lore.kernel.org/qemu-devel/PH0PR02MB738410511BF51B12DB09BE6CF6AC2@PH0PR02MB7384.namprd02.prod.outlook.com/
Signed-off-by: Manish Mishra <manish.mishra@nutanix.com>
Co-authored-by: Xiaoyao Li <xiaoyao.li@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
[Integrated and rebased 2 previous patches (ordered by post time)]
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-6-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Introduce cache model for YongFeng  (Ewan Hai; 1 file, -0/+104)
Add the cache model to YongFeng (v3) to better emulate its environment. Note that although YongFeng v2 was added after v10.0, it was also backported to v10.0.2; therefore the new version (v3) is needed to avoid conflict. The cache model is as follows:

                                      cache 0 (L1d)   cache 1 (L1i)   cache 2 (L2)    cache 3 (L3)
  cache type                          data (1)        instruction (2) unified (3)     unified (3)
  cache level                         1               1               2               3
  self-initializing cache level       true            true            true            true
  fully associative cache             false           false           false           false
  max IDs for CPUs sharing cache      0               0               0               0
  max IDs for cores in pkg            0               0               0               0
  system coherency line size          64              64              64              64
  physical line partitions            1               1               1               1
  ways of associativity               8               16              8               16
  number of sets                      64              64              512             8192
  WBINVD/INVD acts on lower caches    false           false           false           true
  inclusive to lower caches           false           false           true            true
  complex cache indexing              false           false           false           false
  size (synth)                        32 KB           64 KB           256 KB          8 MB

  cache 4: no more caches (0)

Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Ewan Hai <ewanhai-oc@zhaoxin.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-5-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
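In target/i386/cpu.c such tables become static CPUCacheInfo definitions; a hedged sketch of how the L1d column above might translate (member names taken from CPUCacheInfo, but the actual initializer may differ):

  /* YongFeng L1d: 8 ways x 64 sets x 1 partition x 64 B lines = 32 KiB. */
  static const CPUCacheInfo yongfeng_l1d_cache = {
      .type = DATA_CACHE,
      .level = 1,
      .size = 32 * KiB,
      .line_size = 64,
      .associativity = 8,
      .partitions = 1,
      .sets = 64,
      .self_init = true,
      .inclusive = false,
      .complex_indexing = false,
  };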
2025-07-12  i386/cpu: Introduce cache model for SapphireRapids  (Zhao Liu; 1 file, -0/+96)
Add the cache model to SapphireRapids (v4) to better emulate its environment. The cache model is based on SapphireRapids-SP (Scalable Performance):

                                      cache 0 (L1d)   cache 1 (L1i)   cache 2 (L2)    cache 3 (L3)
  cache type                          data (1)        instruction (2) unified (3)     unified (3)
  cache level                         1               1               2               3
  self-initializing cache level       true            true            true            true
  fully associative cache             false           false           false           false
  max IDs for CPUs sharing cache      1               1               1               127
  max IDs for cores in pkg            63              63              63              63
  system coherency line size          64              64              64              64
  physical line partitions            1               1               1               1
  ways of associativity               12              8               16              15
  number of sets                      64              64              2048            65536
  WBINVD/INVD acts on lower caches    false           false           false           false
  inclusive to lower caches           false           false           false           false
  complex cache indexing              false           false           false           true
  size (synth)                        48 KB           32 KB           2 MB            60 MB

  cache 4: no more caches (0)

Suggested-by: Tejus GK <tejus.gk@nutanix.com>
Suggested-by: Jason Zeng <jason.zeng@intel.com>
Suggested-by: "Daniel P . Berrangé" <berrange@redhat.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Reviewed-by: Tao Su <tao1.su@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-4-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Introduce cache model for GraniteRapids  (Zhao Liu; 1 file, -0/+96)
Add the cache model to GraniteRapids (v3) to better emulate its environment. The cache model is based on GraniteRapids-SP (Scalable Performance):

                                      cache 0 (L1d)   cache 1 (L1i)   cache 2 (L2)    cache 3 (L3)
  cache type                          data (1)        instruction (2) unified (3)     unified (3)
  cache level                         1               1               2               3
  self-initializing cache level       true            true            true            true
  fully associative cache             false           false           false           false
  max IDs for CPUs sharing cache      1               1               1               255
  max IDs for cores in pkg            63              63              63              63
  system coherency line size          64              64              64              64
  physical line partitions            1               1               1               1
  ways of associativity               12              16              16              16
  number of sets                      64              64              2048            294912
  WBINVD/INVD acts on lower caches    false           false           false           false
  inclusive to lower caches           false           false           false           false
  complex cache indexing              false           false           false           true
  size (synth)                        48 KB           64 KB           2 MB            288 MB

  cache 4: no more caches (0)

Suggested-by: Tejus GK <tejus.gk@nutanix.com>
Suggested-by: Jason Zeng <jason.zeng@intel.com>
Suggested-by: "Daniel P . Berrangé" <berrange@redhat.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Reviewed-by: Tao Su <tao1.su@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-3-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Introduce cache model for SierraForest  (Zhao Liu; 1 file, -0/+96)
Add the cache model to SierraForest (v3) to better emulate its environment. The cache model is based on SierraForest-SP (Scalable Performance):

                                      cache 0 (L1d)   cache 1 (L1i)   cache 2 (L2)    cache 3 (L3)
  cache type                          data (1)        instruction (2) unified (3)     unified (3)
  cache level                         1               1               2               3
  self-initializing cache level       true            true            true            true
  fully associative cache             false           false           false           false
  max IDs for CPUs sharing cache      0               0               7               511
  max IDs for cores in pkg            63              63              63              63
  system coherency line size          64              64              64              64
  physical line partitions            1               1               1               1
  ways of associativity               8               8               16              12
  number of sets                      64              128             4096            147456
  WBINVD/INVD acts on lower caches    false           false           false           false
  inclusive to lower caches           false           false           false           false
  complex cache indexing              false           false           false           true
  size (synth)                        32 KB           64 KB           4 MB            108 MB

  cache 4: no more caches (0)

Suggested-by: Tejus GK <tejus.gk@nutanix.com>
Suggested-by: Jason Zeng <jason.zeng@intel.com>
Suggested-by: "Daniel P . Berrangé" <berrange@redhat.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Reviewed-by: Tao Su <tao1.su@linux.intel.com>
Tested-by: Yi Lai <yi1.lai@intel.com>
Signed-off-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250711104603.1634832-2-zhao1.liu@intel.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Use a unified cache_info in X86CPUState  (Zhao Liu; 2 files, -128/+27)
At present, all cases using the cache model (CPUID 0x2, 0x4, 0x80000005, 0x80000006 and 0x8000001D leaves) have been verified to be able to select either cache_info_intel or cache_info_amd based on the vendor. Therefore, further merge cache_info_intel and cache_info_amd into a unified cache_info in X86CPUState, and during its initialization, set different legacy cache models based on the vendor. Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com> Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-19-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2025-07-12  i386/cpu: Select legacy cache model based on vendor in CPUID 0x8000001D  (Zhao Liu; 1 file, -5/+21)
As preparation for merging cache_info_cpuid4 and cache_info_amd in X86CPUState, set legacy cache model based on vendor in the CPUID 0x8000001D leaf. For AMD CPU, select legacy AMD cache model (in cache_info_amd) as the default cache model like before, otherwise, select legacy Intel cache model (in cache_info_cpuid4). In fact, for Intel (and Zhaoxin) CPU, this change is safe because the extended CPUID level supported by Intel is up to 0x80000008. So Intel Guest doesn't have this 0x8000001D leaf. Although someone could bump "xlevel" up to 0x8000001D for Intel Guest, it's meaningless and this is undefined behavior. This leaf should be considered reserved, but the SDM does not explicitly state this. So, there's no need to specifically use vendor_cpuid_only_v2 to fix anything, as it doesn't even qualify as a fix since nothing is currently broken. Therefore, it is acceptable to select the default legacy cache model based on the vendor. For the CPUID 0x8000001D leaf, in X86CPUState, a unified cache_info is enough. It only needs to be initialized and configured with the corresponding legacy cache model based on the vendor. Tested-by: Yi Lai <yi1.lai@intel.com> Signed-off-by: Zhao Liu <zhao1.liu@intel.com> Link: https://lore.kernel.org/r/20250711102143.1622339-18-zhao1.liu@intel.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
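A sketch of the vendor switch these two commits converge on; IS_AMD_CPU is QEMU's existing macro, while the helper and table names are illustrative:

  /* AMD guests keep the legacy AMD cache model; Intel and Zhaoxin
   * guests get the legacy Intel one. */
  static const CPUCaches *legacy_cache_model(CPUX86State *env)
  {
      if (IS_AMD_CPU(env)) {
          return &legacy_amd_cache_info;
      }
      return &legacy_intel_cache_info;
  }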