path: root/hw/core/qdev-properties-system.c
Date        Commit message  (Author, files changed, lines -removed/+added)
2024-07-16  Revert "qemu-char: do not operate on sources from finalize callbacks"  (Sergey Dyasli, 1 file, -14/+5)
This reverts commit 2b316774f60291f57ca9ecb6a9f0712c532cae34.

After 038b4217884c ("Revert "chardev: use a child source for qio input source"") we've been observing the "iwp->src == NULL" assertion triggering periodically during the initial capabilities querying by libvirtd. One of the possible backtraces:

    Thread 1 (Thread 0x7f16cd4f0700 (LWP 43858)):
    0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
    1  0x00007f16c6c21e65 in __GI_abort () at abort.c:79
    2  0x00007f16c6c21d39 in __assert_fail_base at assert.c:92
    3  0x00007f16c6c46e86 in __GI___assert_fail (assertion=assertion@entry=0x562e9bcdaadd "iwp->src == NULL", file=file@entry=0x562e9bcdaac8 "../chardev/char-io.c", line=line@entry=99, function=function@entry=0x562e9bcdab10 <__PRETTY_FUNCTION__.20549> "io_watch_poll_finalize") at assert.c:101
    4  0x0000562e9ba20c2c in io_watch_poll_finalize (source=<optimized out>) at ../chardev/char-io.c:99
    5  io_watch_poll_finalize (source=<optimized out>) at ../chardev/char-io.c:88
    6  0x00007f16c904aae0 in g_source_unref_internal () from /lib64/libglib-2.0.so.0
    7  0x00007f16c904baf9 in g_source_destroy_internal () from /lib64/libglib-2.0.so.0
    8  0x0000562e9ba20db0 in io_remove_watch_poll (source=0x562e9d6720b0) at ../chardev/char-io.c:147
    9  remove_fd_in_watch (chr=chr@entry=0x562e9d5f3800) at ../chardev/char-io.c:153
    10 0x0000562e9ba23ffb in update_ioc_handlers (s=0x562e9d5f3800) at ../chardev/char-socket.c:592
    11 0x0000562e9ba2072f in qemu_chr_fe_set_handlers_full at ../chardev/char-fe.c:279
    12 0x0000562e9ba207a9 in qemu_chr_fe_set_handlers at ../chardev/char-fe.c:304
    13 0x0000562e9ba2ca75 in monitor_qmp_setup_handlers_bh (opaque=0x562e9d4c2c60) at ../monitor/qmp.c:509
    14 0x0000562e9bb6222e in aio_bh_poll (ctx=ctx@entry=0x562e9d4c2f20) at ../util/async.c:216
    15 0x0000562e9bb4de0a in aio_poll (ctx=0x562e9d4c2f20, blocking=blocking@entry=true) at ../util/aio-posix.c:722
    16 0x0000562e9b99dfaa in iothread_run (opaque=0x562e9d4c26f0) at ../iothread.c:63
    17 0x0000562e9bb505a4 in qemu_thread_start (args=0x562e9d4c7ea0) at ../util/qemu-thread-posix.c:543
    18 0x00007f16c70081ca in start_thread (arg=<optimized out>) at pthread_create.c:479
    19 0x00007f16c6c398d3 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

io_remove_watch_poll(), which makes sure that iwp->src is NULL, calls g_source_destroy(), which then finds that iwp->src is not NULL in the finalize callback. This can only happen if another thread has managed to trigger the io_watch_poll_prepare() callback in the meantime.

Move the iwp->src destruction back to the finalize callback to prevent the described race, and also remove the stale comment. The glib deadlock bug was fixed back in 2010 by b35820285668 ("gmain: move finalization of GSource outside of context lock").

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sergey Dyasli <sergey.dyasli@nutanix.com>
Link: https://lore.kernel.org/r/20240712092659.216206-1-sergey.dyasli@nutanix.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2024-07-16  i386/sev: Don't allow automatic fallback to legacy KVM_SEV*_INIT  (Michael Roth, 3 files, -22/+83)
Currently if the 'legacy-vm-type' property of the sev-guest object is 'on', QEMU will attempt to use the newer KVM_SEV_INIT2 kernel interface in conjunction with the newer KVM_X86_SEV_VM and KVM_X86_SEV_ES_VM KVM VM types. This can lead to measurement changes if, for instance, an SEV guest was created on a host that originally had an older kernel that didn't support KVM_SEV_INIT2, but is booted on the same host later on after the host kernel was upgraded.

Instead, if legacy-vm-type is 'off', QEMU should fail if the KVM_SEV_INIT2 interface is not provided by the current host kernel. Modify the fallback handling accordingly.

In the future, VMSA features and other flags might be added to QEMU which will require legacy-vm-type to be 'off' because they will rely on the newer KVM_SEV_INIT2 interface. It may be difficult to convey to users which values of legacy-vm-type are compatible with which features/options, so as part of this rework, switch legacy-vm-type to a tri-state OnOffAuto option. 'auto' in this case will automatically switch to using the newer KVM_SEV_INIT2, but only if it is required to make use of new VMSA features or other options available only via KVM_SEV_INIT2. Defining 'auto' in this way avoids inadvertently breaking compatibility with older kernels, since it is only used in cases where users opt into newer features that require KVM_SEV_INIT2 and newer kernels, and it provides better default behavior than the legacy-vm-type=off behavior that was previously in place, so make it the default for 9.1+ machine types.

Cc: Daniel P. Berrangé <berrange@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org
Signed-off-by: Michael Roth <michael.roth@amd.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Link: https://lore.kernel.org/r/20240710041005.83720-1-michael.roth@amd.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
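A rough sketch of the tri-state decision described above, using stand-in names (LegacyVmType, choose_sev_init and its parameters are invented for this illustration, and the exact handling of 'on' is an assumption; this is not the target/i386/sev.c code):

    #include <stdbool.h>

    typedef enum { LEGACY_ON, LEGACY_OFF, LEGACY_AUTO } LegacyVmType;
    typedef enum { USE_LEGACY_INIT, USE_INIT2, INIT_ERROR } SevInitChoice;

    static SevInitChoice choose_sev_init(LegacyVmType legacy,
                                         bool host_has_init2,
                                         bool needs_init2_only_features)
    {
        if (legacy == LEGACY_ON) {
            return USE_LEGACY_INIT;                 /* explicit request for the old interface */
        }
        if (legacy == LEGACY_OFF || needs_init2_only_features) {
            /* no silent fallback: fail if the host kernel lacks KVM_SEV_INIT2 */
            return host_has_init2 ? USE_INIT2 : INIT_ERROR;
        }
        /* 'auto': stay on the legacy path unless a requested feature needs
         * INIT2, so guest measurements stay stable across kernel upgrades */
        return USE_LEGACY_INIT;
    }
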
2024-07-14  hw/ufs: Fix mcq register range check logic  (Jeuk Kim, 1 file, -2/+14)
The functions ufs_is_mcq_reg() and ufs_is_mcq_op_reg() only checked whether the offset lies within the mcq_reg and mcq_op_reg ranges, which are defined as constants. They could therefore return true even when the UFS device is configured without MCQ support, which could lead ufs_mmio_read()/ufs_mmio_write() into a NULL pointer dereference. Fix the checks accordingly.

Resolves: #2428
Fixes: 5c079578d2e4 ("hw/ufs: Add support MCQ of UFSHCI 4.0")
Reported-by: Zheyu Ma <zheyuma97@gmail.com>
Signed-off-by: Jeuk Kim <jeuk20.kim@samsung.com>
Reviewed-by: Minwoo Im <minwoo.im@samsung.com>
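A self-contained sketch of the kind of check the fix implies, using stand-in types (UfsHcStub and its fields are illustrative, not the real hw/ufs structures): the offset range test alone is not enough, the check must also confirm that this device instance actually has MCQ enabled.

    #include <stdbool.h>
    #include <stdint.h>

    /* Stand-in for the device state; the real UfsHc lives in hw/ufs. */
    typedef struct {
        bool mcq_enabled;        /* was the device configured with MCQ support? */
        uint64_t mcq_reg_base;   /* start offset of the MCQ register block */
        uint64_t mcq_reg_size;   /* size of the MCQ register block */
    } UfsHcStub;

    static bool ufs_is_mcq_reg_sketch(const UfsHcStub *u, uint64_t addr, unsigned size)
    {
        if (!u->mcq_enabled) {
            return false;        /* no MCQ: the register block does not exist */
        }
        return addr >= u->mcq_reg_base &&
               addr + size <= u->mcq_reg_base + u->mcq_reg_size;
    }
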
2024-07-12  docs: remove Sphinx 1.x compatibility code  (John Snow, 4 files, -101/+19)
In general, the Use_SSI workaround is no longer needed, and neither is the pre-1.6 logging shim for kerneldoc. Signed-off-by: John Snow <jsnow@redhat.com> Acked-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Message-id: 20240703175235.239004-3-jsnow@redhat.com [rebased on top of origin/master. --js] Signed-off-by: John Snow <jsnow@redhat.com>
2024-07-12  Python: bump minimum sphinx version to 3.4.3  (John Snow, 2 files, -5/+4)
With RHEL 8 support retired (it has been two years since RHEL 9 released), our very oldest build platform version of Sphinx is now 3.4.3, and keeping backwards compatibility with versions as old as v1.6 when using domain extensions is a lot of work we don't need to do.

This patch is motivated by my work creating a new QAPI domain, which, unlike the dbus documentation, cannot be allowed to regress by creating a "dummy" doc when operating under older Sphinx versions. It is easier to raise our minimum version as far as we can push it, reducing the burden of creating cross-compatibility hacks and patches.

A sampling of Sphinx versions from various distributions, courtesy of https://repology.org/project/python:sphinx/versions:

    Alpine 3.16:        v4.3.0 (QEMU support ended 2024-05-23)
    Alpine 3.17:        v5.3.0
    Alpine 3.18:        v6.1.3
    Alpine 3.19:        v6.2.1
    Ubuntu 20.04 LTS:   EOL
    Ubuntu 22.04 LTS:   v4.3.2
    Ubuntu 22.10:       EOL
    Ubuntu 23.04:       EOL
    Ubuntu 23.10:       v5.3.0
    Ubuntu 24.04 LTS:   v7.2.6
    Debian 11:          v3.4.3 (QEMU support ends 2024-07-xx)
    Debian 12:          v5.3.0
    Fedora 38:          EOL
    Fedora 39:          v6.2.1
    Fedora 40:          v7.2.6
    CentOS Stream 8:    v1.7.6 (QEMU support ended 2024-05-17)
    CentOS Stream 9:    v3.4.3
    OpenSUSE Leap 15.4: EOL
    OpenSUSE Leap 15.5: 2.3.1, 4.2.0 and 7.2.6

RHEL9 / CentOS Stream 9 becomes the new defining factor in staying at Sphinx 3.4.3, due to downstream offline build requirements that force us to use the platform Sphinx instead of newer packages from PyPI.

Signed-off-by: John Snow <jsnow@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20240703175235.239004-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
2024-07-12  python: enable testing for 3.13  (John Snow, 2 files, -1/+3)
Python 3.13 is in beta and Fedora 41 is preparing to make it the default system interpreter; enable testing for it. (In the event problems develop prior to release, it should only impact the check-python-tox job, which is not run by default and is allowed to fail.) Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Tested-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20240626232230.408004-5-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
2024-07-12  iotests: Change imports for Python 3.13  (John Snow, 2 files, -4/+12)
Python 3.13 isn't out yet, but it's in beta and Fedora is ramping up to make it the default system interpreter for Fedora 41. They moved our cheese for where ContextManager lives; add a conditional to locate it while we support both pre-3.9 and 3.13+. Signed-off-by: John Snow <jsnow@redhat.com> Message-id: 20240626232230.408004-4-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
2024-07-12  python: Do not use pylint 3.2.4 with python 3.8  (John Snow, 1 file, -0/+1)
There is a bug in this version, see: https://github.com/pylint-dev/pylint/issues/9751 Signed-off-by: John Snow <jsnow@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Message-id: 20240626232230.408004-3-jsnow@redhat.com Signed-off-by: John Snow <jsnow@redhat.com>
2024-07-12  python: linter changes for pylint 3.x  (John Snow, 2 files, -1/+2)
New bleeding edge versions, new nits to iron out. This addresses the 'check-python-tox' optional GitLab test, while 'check-python-minreqs' saw no regressions, since it's frozen on an older version of pylint.

Fixes:
    qemu/machine/machine.py:345:52: E0606: Possibly using variable 'sock' before assignment (possibly-used-before-assignment)
    qemu/utils/qemu_ga_client.py:168:4: R1711: Useless return at end of function or method (useless-return)

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20240626232230.408004-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
2024-07-12  target/loongarch: Fix cpu_reset set wrong CSR_CRMD  (Song Gao, 1 file, -3/+3)
After cpu_reset, DATF in CSR_CRMD is 0, DATM is 0. See the manual[1] 6.4. [1]: https://github.com/loongson/LoongArch-Documentation/releases/download/2023.04.20/LoongArch-Vol1-v1.10-EN.pdf Signed-off-by: Song Gao <gaosong@loongson.cn> Reviewed-by: Bibo Mao <maobibo@loongson.cn> Message-Id: <20240705021839.1004374-2-gaosong@loongson.cn>
2024-07-12  target/loongarch: Set CSR_PRCFG1 and CSR_PRCFG2 values  (Song Gao, 1 file, -5/+12)
We set the value of register CSR_PRCFG3, but left out CSR_PRCFG1 and CSR_PRCFG2. Set CSR_PRCFG1 and CSR_PRCFG2 according to the default values of the physical machine. Signed-off-by: Song Gao <gaosong@loongson.cn> Reviewed-by: Bibo Mao <maobibo@loongson.cn> Message-Id: <20240705021839.1004374-1-gaosong@loongson.cn>
2024-07-12  target/loongarch: Remove avail_64 in trans_srai_w() and simplify it  (Feiyang Chen, 1 file, -12/+3)
Since srai.w is a valid instruction on la32, remove the avail_64 check and simplify trans_srai_w(). Fixes: c0c0461e3a06 ("target/loongarch: Add avail_64 to check la64-only instructions") Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Feiyang Chen <chris.chenfeiyang@gmail.com> Message-Id: <20240628033357.50027-1-chris.chenfeiyang@gmail.com> Signed-off-by: Song Gao <gaosong@loongson.cn>
2024-07-12  target/loongarch/kvm: Add software breakpoint support  (Bibo Mao, 2 files, -0/+77)
With KVM virtualization, the debug exception for a normal break instruction is injected into the guest kernel rather than the host. Here a hypercall instruction with a special code is used for software breakpoints, and the exact instruction is obtained from the KVM kernel via the user API KVM_REG_LOONGARCH_DEBUG_INST.

For now only software breakpoints are supported: they can be inserted and removed, and the guest kernel can be debugged with gdb once it is loaded. Hardware breakpoint support will be added later.

Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Tested-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240607035016.2975799-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
2024-07-12  MAINTAINERS: Add myself as a reviewer of LoongArch virt machine  (Jiaxun Yang, 1 file, -0/+1)
I would like to be informed of changes made to the LoongArch virt machine. I'm fairly familiar with Loongson-3 series platform hardware and I do firmware (U-Boot) development as a hobbyist on the LoongArch virt platform, so I believe I can give constructive review input on changes to that machine.

Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240627-ipi-fixes-v1-2-9b061dc28a3a@flygoat.com>
Signed-off-by: Song Gao <gaosong@loongson.cn>
2024-07-12  hw/loongarch/virt: Remove unused assignment  (Bibo Mao, 1 file, -8/+7)
The local variable 'gap' was misused: remove the duplicated assignment and resolve the error reported by Coverity.

Resolves: Coverity CID 1546441
Fixes: 3cc451cbce ("hw/loongarch: Refine fwcfg memory map")
Signed-off-by: Bibo Mao <maobibo@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240612033637.167787-1-maobibo@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
2024-07-12  hw/loongarch: Change the tpm support by default  (Xianglai Li, 2 files, -0/+4)
Add the devices that support TPM by default and fix the incomplete TPM ACPI table information.

Signed-off-by: Xianglai Li <lixianglai@loongson.cn>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240624032300.999157-1-lixianglai@loongson.cn>
Signed-off-by: Song Gao <gaosong@loongson.cn>
2024-07-12  hw/loongarch/boot.c: fix out-of-bound reading  (Dmitry Frolov, 1 file, -1/+1)
memcpy() tries to READ 512 bytes from the memory pointed to by info->kernel_cmdline, which was (presumably) allocated by g_strdup("").

Found with ASan while running make check with sanitizers enabled.

Signed-off-by: Dmitry Frolov <frolov@swemel.ru>
Reviewed-by: Song Gao <gaosong@loongson.cn>
Message-Id: <20240628123910.577740-1-frolov@swemel.ru>
Signed-off-by: Song Gao <gaosong@loongson.cn>
2024-07-12  xen: mapcache: Fix unmapping of first entries in buckets  (Edgar E. Iglesias, 1 file, -1/+11)
This fixes the clobbering of the entry->next pointer when unmapping the first entry in a bucket of a mapcache. Fixes: 123acd816d ("xen: mapcache: Unmap first entries in buckets") Reported-by: Anthony PERARD <anthony.perard@vates.tech> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com> Reviewed-by: Anthony PERARD <anthony.perard@vates.tech> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
2024-07-12  physmem: Bail out qemu_ram_block_from_host() for invalid ram addrs  (Edgar E. Iglesias, 1 file, -0/+4)
Bail out in qemu_ram_block_from_host() when xen_ram_addr_from_mapcache() does not find an existing mapping. Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
2024-07-12  MAINTAINERS: add Edgar as Xen maintainer  (Stefano Stabellini, 1 file, -0/+1)
Add Edgar as Xen subsystem maintainer in QEMU. Edgar has been a QEMU maintainer for years and has already made key changes to one of the most difficult areas of the Xen subsystem (the mapcache).

Edgar volunteered to help us maintain the Xen subsystem in QEMU and we are very happy to welcome him to the team. His knowledge and expertise with QEMU internals will be of great help.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Acked-by: Anthony PERARD <anthony@xenproject.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
2024-07-11  hw/nvme: Expand VI/VQ resource to uint32  (Minwoo Im, 2 files, -6/+6)
VI and VQ resources cover the queue resources of each VF in SR-IOV. The current maximum I/O queue pair size is 0xffff, so expand these fields to cover the full number of I/O queue pairs.

This patch also fixes an Identify Secondary Controller List overflow caused by the expanded number of secondary controllers.

Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2024-07-11  hw/nvme: Allocate sec-ctrl-list as a dynamic array  (Minwoo Im, 3 files, -10/+5)
To avoid having to keep bumping the maximum number of supported VFs, this patch allocates (NvmeCtrl *)->sec_ctrl_list as a dynamic array sized by the number of VFs supported via the sriov_max_vfs property.

Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2024-07-11  hw/nvme: separate identify data for sec. ctrl list  (Minwoo Im, 3 files, -21/+22)
The secondary controller list for virtualization has been managed through the Identify Secondary Controller List data structure, NvmeSecCtrlList, which holds up to 127 secondary controller entries. The problem hasn't arisen so far because NVME_MAX_VFS has been 127.

This patch separates the identify data itself from the actual secondary controller list managed by the controller, so that the following patch can support more than 127 secondary controllers. It reuses the NvmeSecCtrlEntry structure to manage all the possible secondary controllers, and copies the entries into the identify data structure when the command comes in.

Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2024-07-11  hw/nvme: add Identify Endurance Group List  (Minwoo Im, 2 files, -0/+23)
Commit 73064edfb864 ("hw/nvme: flexible data placement emulation") introduced the NVMe FDP feature to nvme-subsys and nvme-ctrl with a single endurance group (#1) supported. This means the controller should return proper identify data to the host for Identify Endurance Group List (CNS 19h), even if only for endurance group #1. This patch allows host applications to ask which endurance group is available and to use FDP through that endurance group.

Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2024-07-11  hw/nvme: fix BAR size mismatch of SR-IOV VF  (Minwoo Im, 1 file, -4/+15)
The PF initializes the SR-IOV VF BAR0 region in nvme_init_sriov() with a bar_size calculated from the Primary Controller Capability fields VQFRSM and VIFRSM, rather than from `max_ioqpairs` and `msix_qsize`, which apply to the PF only. In this case, the BAR size reported in nvme_init_sriov() by the PF and in nvme_init_pci() by the VF might differ, especially with a large number of sriov_max_vfs (e.g. 127, which is the current maximum number of VFs). This reports an invalid BAR0 address for the VFs to the host operating system, so MMIO accesses are not caught properly and, of course, NVMe driver initialization fails.

For example, with the following options the BAR size will be initialized by the PF as 4K, but the VF will try to allocate an 8K BAR0 in nvme_init_pci():

    #!/bin/bash
    nr_vf=$((127))
    nr_vq=$(($nr_vf * 2 + 2))
    nr_vi=$(($nr_vq / 2 + 1))
    nr_ioq=$(($nr_vq + 2))
    ...
    -device nvme,serial=foo,id=nvme0,bus=rp2,subsys=subsys0,mdts=9,msix_qsize=$nr_ioq,max_ioqpairs=$nr_ioq,sriov_max_vfs=$nr_vf,sriov_vq_flexible=$nr_vq,sriov_vi_flexible=$nr_vi \

To fix this issue, this patch modifies the BAR size calculation to use different inputs for PF and VF initialization:

    PF: `max_ioqpairs + 1` with `msix_qsize`
    VF: VQFRSM with VIFRSM

Signed-off-by: Minwoo Im <minwoo.im@samsung.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
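As a rough, self-contained sketch of why the inputs matter (all names and the layout arithmetic below are illustrative assumptions, not the exact hw/nvme code): BAR0 must be sized from the queue and interrupt counts that apply to the controller being initialized, PF or VF, otherwise the two sides compute different sizes.

    #include <stdint.h>

    static uint64_t round_up_4k(uint64_t v) { return (v + 4095u) & ~4095ull; }

    static uint64_t bar0_size_sketch(unsigned nr_ioqpairs, unsigned msix_vectors)
    {
        uint64_t sz = 4096;                              /* controller register block */
        sz += 2ull * (nr_ioqpairs + 1) * 4;              /* admin + I/O doorbell pairs */
        sz = round_up_4k(sz);
        sz += round_up_4k(msix_vectors * 16ull);         /* MSI-X table */
        sz += round_up_4k((msix_vectors + 7) / 8);       /* MSI-X PBA */
        return sz;
    }

    /* PF: call with its own max_ioqpairs/msix_qsize values.
     * VF: call with VQFRSM/VIFRSM from the Primary Controller Capabilities,
     *     so nvme_init_sriov() and the VF's nvme_init_pci() agree. */
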
2024-07-11  hw/nvme: fix number of PIDs for FDP RUH update  (Vincent Fu, 1 file, -1/+1)
The number of PIDs is in the upper 16 bits of cdw10. So we need to right-shift by 16 bits instead of only a single bit. Fixes: 73064edfb864 ("hw/nvme: flexible data placement emulation") Cc: qemu-stable@nongnu.org Signed-off-by: Vincent Fu <vincent.fu@samsung.com> Reviewed-by: Klaus Jensen <k.jensen@samsung.com> Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
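A tiny self-contained illustration of the corrected field extraction (the helper name is invented for this sketch, not hw/nvme code):

    #include <assert.h>
    #include <stdint.h>

    /* The PID count lives in the upper 16 bits of CDW10, so shift by 16
     * rather than by 1. */
    static uint16_t fdp_npids_from_cdw10(uint32_t cdw10)
    {
        return (uint16_t)(cdw10 >> 16);
    }

    int main(void)
    {
        assert(fdp_npids_from_cdw10(0x00030000u) == 3);   /* 3 placement IDs */
        return 0;
    }
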
2024-07-11  hw/nvme: Add support for setting the MQES for the NVMe emulation  (John Berg, 2 files, -1/+8)
The MQES field in the CAP register describes the Maximum Queue Entries Supported for the I/O queues of an NVMe controller. Adding 1 to the value in this field gives the total queue size. A queue of size N is full when it contains N - 1 entries, and the smallest queue size NVMe supports is 2 submission and completion entries, so the lowest legal MQES value is 1.

This patch adds a new mqes property to the NVMe emulation which allows a user to specify the maximum queue size directly. This is useful because it enables testing of NVMe controllers where the MQES is relatively small.

The following example shows how mqes can be set for the NVMe emulation:

    -drive id=nvme0,if=none,file=nvme.img,format=raw
    -device nvme,drive=nvme0,serial=foo,mqes=1

If the mqes property is not provided, the default mqes remains 0x7ff (a queue size of 2048 entries).

Signed-off-by: John Berg <jhnberg@amazon.co.uk>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
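A short sketch of the CAP.MQES arithmetic described above (variable names are made up for the example):

    #include <assert.h>

    int main(void)
    {
        unsigned mqes = 1;                             /* smallest legal value */
        unsigned queue_entries = mqes + 1;             /* MQES is zero-based: size = MQES + 1 */
        unsigned max_outstanding = queue_entries - 1;  /* a queue of size N is full at N - 1 */

        assert(queue_entries == 2);
        assert(max_outstanding == 1);
        return 0;
    }
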
2024-07-11  target/arm: Convert PMULL to decodetree  (Richard Henderson, 2 files, -82/+15)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20240709000610.382391-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  target/arm: Convert ADDHN, SUBHN, RADDHN, RSUBHN to decodetree  (Richard Henderson, 2 files, -71/+61)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20240709000610.382391-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  target/arm: Convert SADDW, SSUBW, UADDW, USUBW to decodetree  (Richard Henderson, 2 files, -43/+48)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20240709000610.382391-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  target/arm: Convert SQDMULL, SQDMLAL, SQDMLSL to decodetree  (Richard Henderson, 2 files, -499/+138)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20240709000610.382391-4-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  target/arm: Convert SADDL, SSUBL, SABDL, SABAL, and unsigned to decodetree  (Richard Henderson, 2 files, -72/+87)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20240709000610.382391-3-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  target/arm: Convert SMULL, UMULL, SMLAL, UMLAL, SMLSL, UMLSL to decodetree  (Richard Henderson, 2 files, -50/+156)
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20240709000610.382391-2-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  hw/arm: In STM32L4x5 SOC, connect USART devices to EXTI  (Inès Varhol, 1 file, -13/+11)
The USART devices were previously connecting their outbound IRQs directly to the CPU because the EXTI wasn't handling direct lines interrupts. Now the USART connects to the EXTI inbound GPIOs, and the EXTI connects its IRQs to the CPU. The existing QTest for the USART (tests/qtest/stm32l4x5_usart-test.c) checks that USART1_IRQ in the CPU is pending when expected so it confirms that the connection through the EXTI still works. Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20240707085927.122867-4-ines.varhol@telecom-paris.fr Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  hw/misc: In STM32L4x5 EXTI, handle direct interrupts  (Inès Varhol, 1 file, -0/+7)
The previous implementation for EXTI interrupts only handled "configurable" interrupts, like those originating from STM32L4x5 SYSCFG (the only device currently connected to the EXTI up until now). In order to connect STM32L4x5 USART to the EXTI, this commit adds handling for direct interrupts (interrupts without configurable edge). Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr> Message-id: 20240707085927.122867-3-ines.varhol@telecom-paris.fr Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  hw/misc: In STM32L4x5 EXTI, consolidate 2 constants  (Inès Varhol, 2 files, -6/+4)
Up until now, the EXTI implementation had 16 inbound GPIOs connected to the 16 outbound GPIOs of STM32L4x5 SYSCFG. The EXTI actually handles 40 lines (namely 5 from STM32L4x5 USART devices which are already implemented in QEMU). In order to connect USART devices to EXTI, this commit consolidates constants `EXTI_NUM_INTERRUPT_OUT_LINES` (40) and `EXTI_NUM_GPIO_EVENT_IN_LINES` (16) into `EXTI_NUM_LINES` (40). Signed-off-by: Inès Varhol <ines.varhol@telecom-paris.fr> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 20240707085927.122867-2-ines.varhol@telecom-paris.fr Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  accel/tcg: Make TCGCPUOps::cpu_exec_halt mandatory  (Peter Maydell, 2 files, -9/+11)
Now that all targets set TCGCPUOps::cpu_exec_halt, we can make it mandatory and remove the fallback handling that calls cpu_has_work. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-11  target: Set TCGCPUOps::cpu_exec_halt to target's has_work implementation  (Peter Maydell, 19 files, -1/+24)
Currently the TCGCPUOps::cpu_exec_halt method is optional, and if it is not set then the default is to call the CPUClass::has_work method (which has an identical function signature). We would like to make the cpu_exec_halt method mandatory so we can remove the runtime check and fallback handling. In preparation for that, make all the targets which don't need special handling in their cpu_exec_halt set it to their cpu_has_work implementation instead of leaving it unset. (This is every target except for arm and i386.) In the riscv case this requires us to make the function not be local to the source file it's defined in. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-11  target/arm: Set arm_v7m_tcg_ops cpu_exec_halt to arm_cpu_exec_halt()  (Peter Maydell, 3 files, -1/+5)
In commit a96edb687e76 we set the cpu_exec_halt field of the TCGCPUOps arm_tcg_ops to arm_cpu_exec_halt(), but we left the arm_v7m_tcg_ops struct unchanged. That isn't wrong, because for M-profile FEAT_WFxT doesn't exist and the default handling for "no cpu_exec_halt method" is correct, but it's perhaps a little confusing. We would also like to make setting the cpu_exec_halt method mandatory. Initialize arm_v7m_tcg_ops cpu_exec_halt to the same function we use for A-profile. (On M-profile we never set up the wfxt timer so there is no change in behaviour here.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
2024-07-11  target/arm: Use cpu_env in cpu_untagged_addr  (Richard Henderson, 1 file, -2/+2)
In a completely artificial memset benchmark, object_dynamic_cast_assert dominates the profile, even above guest address resolution and the underlying host memset.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240702154911.1667418-1-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2024-07-11  hw/misc/bcm2835_thermal: Fix access size handling in bcm2835_thermal_ops  (Zheyu Ma, 1 file, -0/+2)
The current implementation of bcm2835_thermal_ops sets impl.max_access_size and valid.min_access_size to 4, but leaves impl.min_access_size and valid.max_access_size unset, defaulting to 1. This causes issues when the memory system is presented with an access of size 2 at an offset of 3, leading to an attempt to synthesize it as a pair of byte accesses at offsets 3 and 4, which trips an assert.

Additionally, the lack of a valid.max_access_size setting causes another issue: the memory system tries to synthesize a read using a 4-byte access at offset 3 even though the device doesn't allow unaligned accesses.

This patch addresses these issues by explicitly setting both impl.min_access_size and valid.max_access_size to 4, ensuring proper handling of access sizes.

Error log:

    ERROR:hw/misc/bcm2835_thermal.c:55:bcm2835_thermal_read: code should not be reached
    Bail out! ERROR:hw/misc/bcm2835_thermal.c:55:bcm2835_thermal_read: code should not be reached
    Aborted

Reproducer:

    cat << EOF | qemu-system-aarch64 -display \
    none -machine accel=qtest, -m 512M -machine raspi3b -m 1G -qtest stdio
    readw 0x3f212003
    EOF

Signed-off-by: Zheyu Ma <zheyuma97@gmail.com>
Message-id: 20240702154042.3018932-1-zheyuma97@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
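A sketch of the shape of the fix, as a fragment against QEMU's MemoryRegionOps (the read/write callbacks are placeholders and this is not a verbatim copy of hw/misc/bcm2835_thermal.c; only the access-size fields are the point):

    /* With both .impl and .valid bounds set to 4, the memory core either
     * rejects or correctly widens small/unaligned guest accesses instead of
     * synthesizing byte accesses that trip the device's assert. */
    static const MemoryRegionOps thermal_ops_sketch = {
        .read  = thermal_read,                 /* placeholder callback */
        .write = thermal_write,                /* placeholder callback */
        .impl.min_access_size  = 4,
        .impl.max_access_size  = 4,
        .valid.min_access_size = 4,
        .valid.max_access_size = 4,
    };
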
2024-07-11  hw/char/pl011: Avoid division-by-zero in pl011_get_baudrate()  (Zheyu Ma, 1 file, -2/+11)
In pl011_get_baudrate(), when we calculate the baudrate we can accidentally divide by zero. This happens because, although (as the specification requires) we treat UARTIBRD = 0 as invalid, we aren't correctly limiting UARTIBRD and UARTFBRD values to the 16-bit and 6-bit ranges the hardware allows, and so some non-zero values of UARTIBRD can result in a zero divisor. Enforce the correct register field widths on guest writes and on inbound migration to avoid the division by zero.

ASAN log:

    ==2973125==ERROR: AddressSanitizer: FPE on unknown address 0x55f72629b348 (pc 0x55f72629b348 bp 0x7fffa24d0e00 sp 0x7fffa24d0d60 T0)
    #0 0x55f72629b348 in pl011_get_baudrate hw/char/pl011.c:255:17
    #1 0x55f726298d94 in pl011_trace_baudrate_change hw/char/pl011.c:260:33
    #2 0x55f726296fc8 in pl011_write hw/char/pl011.c:378:9

Reproducer:

    cat << EOF | qemu-system-aarch64 -display \
    none -machine accel=qtest, -m 512M -machine realview-pb-a8 -qtest stdio
    writeq 0x1000b024 0xf8000000
    EOF

Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Zheyu Ma <zheyuma97@gmail.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240702155752.3022007-1-zheyuma97@gmail.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
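A standalone sketch of the described guard (function names are invented; the field widths and divisor arithmetic follow the PL011 specification rather than the exact hw/char/pl011.c code): IBRD is a 16-bit field and FBRD a 6-bit field, so masking guest writes to those widths means the divisor 64 * IBRD + FBRD can only be zero in the already-rejected IBRD == 0 case.

    #include <stdint.h>

    static uint32_t clamp_ibrd(uint32_t v) { return v & 0xffff; }
    static uint32_t clamp_fbrd(uint32_t v) { return v & 0x3f; }

    /* baud = UARTCLK / (16 * (IBRD + FBRD/64)) = UARTCLK * 4 / (64*IBRD + FBRD) */
    static unsigned pl011_baud_sketch(uint32_t uartclk, uint32_t ibrd, uint32_t fbrd)
    {
        uint32_t i = clamp_ibrd(ibrd);
        uint32_t divisor = 64 * i + clamp_fbrd(fbrd);
        if (i == 0) {
            return 0;                      /* invalid per the spec; no division */
        }
        return (unsigned)(((uint64_t)uartclk * 4) / divisor);
    }
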
2024-07-11  target/arm: Allow FPCR bits that aren't in FPSCR  (Peter Maydell, 1 file, -21/+35)
In order to allow FPCR bits that aren't in the FPSCR (like the new bits that are defined for FEAT_AFP), we need to make sure that writes to the FPSCR only write to the bits of FPCR that are architecturally mapped, and not the others. Implement this with a new function vfp_set_fpcr_masked() which takes a mask of which bits to update. (We could do the same for FPSR, but we leave that until we actually are likely to need it.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240628142347.1283015-10-peter.maydell@linaro.org
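A minimal standalone sketch of the masked-update idea (not the arm-specific vfp_set_fpcr_masked() code): only the architecturally mapped bits named by the mask change, and any FPCR bits outside the FPSCR view are left untouched.

    #include <stdint.h>

    static void set_fpcr_masked_sketch(uint32_t *fpcr, uint32_t value, uint32_t mask)
    {
        *fpcr = (*fpcr & ~mask) | (value & mask);
    }
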
2024-07-11  target/arm: Rename FPSR_MASK and FPCR_MASK and define them symbolically  (Peter Maydell, 3 files, -11/+40)
Now that we store FPSR and FPCR separately, the FPSR_MASK and FPCR_MASK macros are slightly confusingly named and the comment describing them is out of date. Rename them to FPSCR_FPSR_MASK and FPSCR_FPCR_MASK, document that they are the mask of which FPSCR bits are architecturally mapped to which AArch64 register, and define them symbolically rather than as hex values. (This latter requires defining some extra macros for bits which we haven't previously defined.) Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240628142347.1283015-9-peter.maydell@linaro.org
2024-07-11  target/arm: Rename FPCR_ QC, NZCV macros to FPSR_  (Peter Maydell, 5 files, -24/+27)
The QC, N, Z, C, V bits live in the FPSR, not the FPCR. Rename the macros that define these bits accordingly. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240628142347.1283015-8-peter.maydell@linaro.org
2024-07-11  target/arm: Store FPSR and FPCR in separate CPU state fields  (Peter Maydell, 6 files, -27/+28)
Now that we have refactored the set/get functions so that the FPSCR format is no longer the authoritative one, we can keep FPSR and FPCR in separate CPU state fields.

As well as the get and set functions, we also have a scattering of places in the code which directly access vfp.xregs[ARM_VFP_FPSCR] to extract single fields which are stored there. These all change to directly access either vfp.fpsr or vfp.fpcr, depending on the location of the field. (Most commonly, this is the NZCV flags.)

We make the field in the CPU state struct 64 bits, because architecturally FPSR and FPCR are 64 bits. However we leave the types of the arguments and return values of the get/set functions as 32 bits, since we don't need to make that change with the current architecture and various callsites would be unable to handle set bits in the high half (for instance the gdbstub protocol assumes they're only 32 bit registers).

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-7-peter.maydell@linaro.org
2024-07-11  target/arm: Implement store_cpu_field_low32() macro  (Peter Maydell, 1 file, -0/+7)
We already have a load_cpu_field_low32() to load the low half of a 64-bit CPU struct field to a TCGv_i32; however we haven't yet needed the store equivalent. We'll want that in the next patch, so implement it. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240628142347.1283015-6-peter.maydell@linaro.org
2024-07-11  target/arm: Support migration when FPSR/FPCR won't fit in the FPSCR  (Peter Maydell, 1 file, -2/+132)
To support FPSR and FPCR bits that don't exist in the AArch32 FPSCR view of floating point control and status (such as the FEAT_AFP ones), we need to make sure those bits can be migrated. This commit allows that, whilst maintaining backwards and forwards migration compatibility for CPUs where there are no such bits:

On sending:
 * If either the FPCR or the FPSR include set bits that are not visible in the AArch32 FPSCR view of floating point control/status then we send the FPCR and FPSR as two separate fields in a new cpu/vfp/fpcr_fpsr subsection, and we send a 0 for the old FPSCR field in cpu/vfp
 * Otherwise, we don't send the fpcr_fpsr subsection, and we send an FPSCR-format value in cpu/vfp as we did previously

On receiving:
 * if we see a non-zero FPSCR field, that is the right information
 * if we see a fpcr_fpsr subsection then that has the information
 * if we see neither, then FPSCR/FPCR/FPSR are all zero on the source; cpu_pre_load() ensures the CPU state defaults to that
 * if we see both, then the migration source is buggy or malicious; either the fpcr_fpsr or the FPSCR will "win" depending which is first in the migration stream; we don't care which that is

We make the new FPCR and FPSR on-the-wire data be 64 bits, because architecturally these registers are that wide, and this avoids the need to engage in further migration-compatibility contortions in future if some new architecture revision defines bits in the high half of either register.

(We won't ever send the new migration subsection until we add support for a CPU feature which enables setting overlapping FPCR bits, like FEAT_AFP.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-5-peter.maydell@linaro.org
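A standalone sketch of the "do we need the new subsection?" decision described in the sending rules above (names and types are illustrative, not the actual target/arm/machine.c code):

    #include <stdbool.h>
    #include <stdint.h>

    /* Send cpu/vfp/fpcr_fpsr only when FPCR or FPSR carry bits that the
     * AArch32 FPSCR view cannot represent; otherwise keep sending the old
     * FPSCR field for backwards/forwards migration compatibility. */
    static bool fpcr_fpsr_subsection_needed(uint64_t fpcr, uint64_t fpsr,
                                            uint64_t fpscr_fpcr_mask,
                                            uint64_t fpscr_fpsr_mask)
    {
        return (fpcr & ~fpscr_fpcr_mask) || (fpsr & ~fpscr_fpsr_mask);
    }
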
2024-07-11  target/arm: Make vfp_set_fpscr() call vfp_set_{fpcr, fpsr}  (Peter Maydell, 2 files, -44/+78)
Make vfp_set_fpscr() call vfp_set_fpsr() and vfp_set_fpcr() instead of the other way around. The masking we do when getting and setting vfp.xregs[ARM_VFP_FPSCR] is a little awkward, but we are going to change where we store the underlying FPSR and FPCR information in a later commit, so it will go away then. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20240628142347.1283015-4-peter.maydell@linaro.org
2024-07-11  target/arm: Make vfp_get_fpscr() call vfp_get_{fpcr, fpsr}  (Peter Maydell, 2 files, -21/+37)
In AArch32, the floating point control and status bits are all in a single register, FPSCR. In AArch64, these were split into separate FPCR and FPSR registers, but the bit layouts remained the same, with no overlaps, so that you could construct an FPSCR value by ORing FPCR and FPSR, or equivalently could produce FPSR and FPCR by masking an FPSCR value. For QEMU's implementation, we opted to use masking to produce FPSR and FPCR, because we started with an AArch32 implementation of FPSCR.

The addition of the (AArch64-only) FEAT_AFP adds new bits to the FPCR which overlap with some bits in the FPSR. This means we'll no longer be able to consider the FPSCR-encoded value as the primary one, but instead need to treat FPSR/FPCR as the primary encoding and construct the FPSCR from those. (This remains possible because the FEAT_AFP bits in FPCR don't appear in the FPSCR.)

As the first step in this refactoring, make vfp_get_fpscr() call vfp_get_fpcr() and vfp_get_fpsr(), instead of the other way around.

Note that vfp_get_fpcsr_from_host() returns only bits in the FPSR (for the cumulative fp exception bits), so we can simply rename it without needing to add a new function for getting FPCR bits.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240628142347.1283015-3-peter.maydell@linaro.org
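A standalone sketch of the relationship the commit relies on (the mask parameters stand in for the real FPSCR_FPCR_MASK/FPSCR_FPSR_MASK values; this is not the target/arm code): because the FPCR and FPSR bit layouts don't overlap within the FPSCR encoding, FPSCR can be rebuilt by ORing the two, and each register can be recovered from FPSCR by masking.

    #include <stdint.h>

    static uint32_t fpscr_compose(uint32_t fpcr, uint32_t fpsr,
                                  uint32_t fpcr_mask, uint32_t fpsr_mask)
    {
        return (fpcr & fpcr_mask) | (fpsr & fpsr_mask);
    }

    static uint32_t fpcr_from_fpscr(uint32_t fpscr, uint32_t fpcr_mask)
    {
        return fpscr & fpcr_mask;
    }

    static uint32_t fpsr_from_fpscr(uint32_t fpscr, uint32_t fpsr_mask)
    {
        return fpscr & fpsr_mask;
    }
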