Diffstat (limited to 'results/classifier/118/none/1880355')
| -rw-r--r-- | results/classifier/118/none/1880355 | 165 |
1 files changed, 165 insertions, 0 deletions
diff --git a/results/classifier/118/none/1880355 b/results/classifier/118/none/1880355
new file mode 100644
index 00000000..c5f9dbf1
--- /dev/null
+++ b/results/classifier/118/none/1880355
@@ -0,0 +1,165 @@
+peripherals: 0.701
+hypervisor: 0.684
+risc-v: 0.659
+x86: 0.657
+KVM: 0.651
+VMM: 0.639
+vnc: 0.627
+i386: 0.623
+TCG: 0.620
+mistranslation: 0.618
+virtual: 0.613
+permissions: 0.607
+graphic: 0.595
+ppc: 0.573
+device: 0.566
+semantic: 0.565
+performance: 0.563
+user-level: 0.560
+arm: 0.558
+debug: 0.557
+architecture: 0.549
+files: 0.549
+assembly: 0.548
+PID: 0.546
+register: 0.543
+socket: 0.533
+boot: 0.530
+kernel: 0.523
+network: 0.515
+
+Length restrictions for fw_cfg_dma_transfer?
+
+For me, this takes close to 3 minutes at 100% CPU:
+echo "outl 0x518 0x9596ffff" | ./i386-softmmu/qemu-system-i386 -M q35 -m 32 -nographic -accel qtest -monitor none -serial none -qtest stdio
+
+#0 phys_page_find (d=0x606000035d80, addr=136728041144404) at /exec.c:338
+#1 address_space_lookup_region (d=0x606000035d80, addr=136728041144404, resolve_subpage=true) at /exec.c:363
+#2 address_space_translate_internal (d=0x606000035d80, addr=136728041144404, xlat=0x7fff1fc0d070, plen=0x7fff1fc0d090, resolve_subpage=true) at /exec.c:382
+#3 flatview_do_translate (fv=0x606000035d20, addr=136728041144404, xlat=0x7fff1fc0d070, plen_out=0x7fff1fc0d090, page_mask_out=0x0, is_write=true, is_mmio=true, target_as=0x7fff1fc0ce10, attrs=...)
+ pment/qemu/exec.c:520
+#4 flatview_translate (fv=0x606000035d20, addr=136728041144404, xlat=0x7fff1fc0d070, plen=0x7fff1fc0d090, is_write=true, attrs=...) at /exec.c:586
+#5 flatview_write_continue (fv=0x606000035d20, addr=136728041144404, attrs=..., ptr=0x7fff1fc0d660, len=172, addr1=136728041144400, l=172, mr=0x557fd54e77e0 <io_mem_unassigned>)
+ pment/qemu/exec.c:3160
+#6 flatview_write (fv=0x606000035d20, addr=136728041144064, attrs=..., buf=0x7fff1fc0d660, len=512) at /exec.c:3177
+#7 address_space_write (as=0x557fd54e7a00 <address_space_memory>, addr=136728041144064, attrs=..., buf=0x7fff1fc0d660, len=512) at /exec.c:3271
+#8 dma_memory_set (as=0x557fd54e7a00 <address_space_memory>, addr=136728041144064, c=0 '\000', len=1378422272) at /dma-helpers.c:31
+#9 fw_cfg_dma_transfer (s=0x61a000001e80) at /hw/nvram/fw_cfg.c:400
+#10 fw_cfg_dma_mem_write (opaque=0x61a000001e80, addr=4, value=4294940309, size=4) at /hw/nvram/fw_cfg.c:467
+#11 memory_region_write_accessor (mr=0x61a000002200, addr=4, value=0x7fff1fc0e3d0, size=4, shift=0, mask=4294967295, attrs=...) at /memory.c:483
+#12 access_with_adjusted_size (addr=4, value=0x7fff1fc0e3d0, size=4, access_size_min=1, access_size_max=8, access_fn=0x557fd2288c80 <memory_region_write_accessor>, mr=0x61a000002200, attrs=...)
+ pment/qemu/memory.c:539
+#13 memory_region_dispatch_write (mr=0x61a000002200, addr=4, data=4294940309, op=MO_32, attrs=...) at /memory.c:1476
+#14 flatview_write_continue (fv=0x606000035f00, addr=1304, attrs=..., ptr=0x7fff1fc0ec40, len=4, addr1=4, l=4, mr=0x61a000002200) at /exec.c:3137
+#15 flatview_write (fv=0x606000035f00, addr=1304, attrs=..., buf=0x7fff1fc0ec40, len=4) at /exec.c:3177
+#16 address_space_write (as=0x557fd54e7bc0 <address_space_io>, addr=1304, attrs=..., buf=0x7fff1fc0ec40, len=4) at /exec.c:3271
+
+
+It looks like fw_cfg_dma_transfer gets the address (136728041144064) and length (1378422272) for the read from the value provided as input, 4294940309 (0xFFFF9695), which lands in pcbios. Should there be any limits on the length of guest memory that fw_cfg should populate?
+
+Found by libfuzzer
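
For context, here is how that single port write turns into the bogus DMA descriptor. Per docs/specs/fw_cfg.txt, the fw_cfg DMA address register (0x514 on x86) is a 64-bit big-endian register, and a write to its low half at 0x518 latches the descriptor address and kicks off the transfer. A standalone decode sketch (not QEMU code; __builtin_bswap32 is a gcc/clang builtin):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint32_t outl_value = 0x9596ffff;   /* "outl 0x518 0x9596ffff" from the reproducer */

    /* The register is big-endian, so the little-endian outl value is
     * byte-swapped before being interpreted as a guest-physical address. */
    uint64_t desc_addr = __builtin_bswap32(outl_value);

    printf("FWCfgDmaAccess descriptor fetched from 0x%" PRIx64 "\n",
           desc_addr);                  /* prints 0xffff9695, as seen in frame #10 */
    return 0;
}

At 0xffff9695 the descriptor's control/length/address fields are read out of whatever bytes the pcbios ROM happens to contain there, which is where the arbitrary length 1378422272 and address 136728041144064 in frames #8 and #6 come from.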
+
+On 5/24/20 6:12 AM, Alexander Bulekov wrote:
+> Public bug reported:
+>
+> For me, this takes close to 3 minutes at 100% CPU:
+> echo "outl 0x518 0x9596ffff" | ./i386-softmmu/qemu-system-i386 -M q35 -m 32 -nographic -accel qtest -monitor none -serial none -qtest stdio
+>
+> [backtrace snipped -- identical to the one above]
+>
+> It looks like fw_cfg_dma_transfer gets the address (136728041144064) and length (1378422272) for the read from the value provided as input, 4294940309 (0xFFFF9695), which lands in pcbios. Should there be any limits on the length of guest memory that fw_cfg should populate?
+
+It looks to me like normal behavior for a DMA device. DMA devices have a
+different address space view than the CPUs.
+Also note that fw_cfg is a generic device, not restricted to the x86 arch.
+
+Maybe this function could use dma_memory_valid() to skip unassigned regions?
+
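
A minimal sketch of that suggestion, for illustration. FWCfgDmaAccess, FW_CFG_DMA_CTL_ERROR, s->dma_as, dma_addr and stl_be_dma() are all from the QEMU 5.0-era hw/nvram/fw_cfg.c, but the check itself and its placement are assumptions, not a tested patch:

/* In fw_cfg_dma_transfer(), after the FWCfgDmaAccess descriptor has been
 * read into 'dma' and byte-swapped, before any data is transferred: */
if (!dma_memory_valid(s->dma_as, dma.address, dma.length,
                      DMA_DIRECTION_FROM_DEVICE)) {
    /* Target range is (partly) unassigned: flag an error in the
     * descriptor's control field and bail out, instead of letting
     * dma_memory_set() crawl through io_mem_unassigned one chunk
     * at a time. */
    stl_be_dma(s->dma_as, dma_addr + offsetof(FWCfgDmaAccess, control),
               FW_CFG_DMA_CTL_ERROR);
    return;
}

The thread's closing comments note that the reproducer later returned immediately, but they do not identify the actual change, so the above is only a sketch of the idea, not the fix that landed.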
+
+On Sun, 24 May 2020 at 11:30, Philippe Mathieu-Daudé <email address hidden> wrote:
+> It looks to me like normal behavior for a DMA device. DMA devices have a
+> different address space view than the CPUs.
+> Also note that fw_cfg is a generic device, not restricted to the x86 arch.
+
+In an ideal world all our DMA devices would use some kind of common
+framework or design pattern so they didn't hog all the CPU
+and/or spend minutes with the BQL held if the guest requests
+an enormous-sized DMA. In practice many of them just have
+a simple "loop until the DMA transfer is complete" implementation...
+
+thanks
+-- PMM
+
+
+On 5/24/20 3:40 PM, Peter Maydell wrote:
+> On Sun, 24 May 2020 at 11:30, Philippe Mathieu-Daudé <email address hidden> wrote:
+>> [snip]
+>
+> In an ideal world all our DMA devices would use some kind of common
+> framework or design pattern so they didn't hog all the CPU
+> and/or spend minutes with the BQL held if the guest requests
+> an enormous-sized DMA. In practice many of them just have
+> a simple "loop until the DMA transfer is complete" implementation...
+
+Is this framework already implemented in the hidden dma-helpers.c?
+
+Apparently this file was written for BlockBackend, but the code seems
+rather generic.
+
+
+On 24/05/2020 14:40, Peter Maydell wrote:
+
+> On Sun, 24 May 2020 at 11:30, Philippe Mathieu-Daudé <email address hidden> wrote:
+>> [snip]
+>
+> In an ideal world all our DMA devices would use some kind of common
+> framework or design pattern so they didn't hog all the CPU
+> and/or spend minutes with the BQL held if the guest requests
+> an enormous-sized DMA. In practice many of them just have
+> a simple "loop until the DMA transfer is complete" implementation...
+
+One of the problems with the PPC Mac DBDMA emulation is that the controller is
+effectively a mini-CPU that runs its own programs for transferring data to/from memory.
+
+Currently this is done as a QEMU BH, which means that for larger transfers the
+emulated CPU can be paused for a not insignificant amount of time until the program
+performing the transfer finishes. I've always wondered whether it should be running
+in a separate thread to reduce its impact.
+
+
+ATB,
+
+Mark.
+
+
+Can you still reproduce this problem with the current git version of QEMU? ... for me, the command now returns immediately.
+
+This no longer causes timeout/excessive CPU usage for me. Probably fixed.
+
+Ok, thanks for checking! Closing now.
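
For reference, the "common framework" shape Peter describes and the bottom-half concern Mark raises suggest a pattern like the sketch below: bound the work done per invocation and reschedule the remainder through a BH, so the main loop (and the BQL) is released between chunks. This is purely illustrative; ChunkedZeroFill and both function names are invented, and a real device converted this way would also have to defer its completion and error reporting:

#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "sysemu/dma.h"

#define DMA_CHUNK_BYTES (64 * 1024)   /* arbitrary per-BH bound */

typedef struct ChunkedZeroFill {
    AddressSpace *as;
    dma_addr_t addr;
    dma_addr_t remaining;
    QEMUBH *bh;
} ChunkedZeroFill;

static void chunked_zero_fill_bh(void *opaque)
{
    ChunkedZeroFill *z = opaque;
    dma_addr_t len = MIN(z->remaining, DMA_CHUNK_BYTES);

    /* Do a bounded amount of work... */
    dma_memory_set(z->as, z->addr, 0, len);
    z->addr += len;
    z->remaining -= len;

    if (z->remaining) {
        /* ...then yield and come back for the next chunk. */
        qemu_bh_schedule(z->bh);
    } else {
        qemu_bh_delete(z->bh);
        g_free(z);
    }
}

static void chunked_zero_fill_start(AddressSpace *as, dma_addr_t addr,
                                    dma_addr_t len)
{
    ChunkedZeroFill *z = g_new0(ChunkedZeroFill, 1);

    z->as = as;
    z->addr = addr;
    z->remaining = len;
    z->bh = qemu_bh_new(chunked_zero_fill_bh, z);
    qemu_bh_schedule(z->bh);
}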