Diffstat
-rw-r--r--  results/classifier/108/other/1102      53
-rw-r--r--  results/classifier/108/other/1102027  127
2 files changed, 180 insertions, 0 deletions
diff --git a/results/classifier/108/other/1102 b/results/classifier/108/other/1102
new file mode 100644
index 00000000..a62aa9be
--- /dev/null
+++ b/results/classifier/108/other/1102
@@ -0,0 +1,53 @@
+permissions: 0.885
+debug: 0.874
+files: 0.864
+other: 0.853
+device: 0.850
+performance: 0.847
+PID: 0.841
+graphic: 0.834
+vnc: 0.827
+socket: 0.824
+boot: 0.807
+network: 0.782
+KVM: 0.774
+semantic: 0.764
+
+qemu-user: zero_bss might raise segfault when segment is not writable
+Description of problem:
+When a PT_LOAD segment with the following attributes is present in the user program:
+* MemSiz > FileSiz
+* NOT writable
+
+qemu-aarch64 will crash with a segmentation fault when running it.
+
+In [linux-user/elfload.c: zero_bss](https://gitlab.com/qemu-project/qemu/-/blob/master/linux-user/elfload.c#L2097), the excess part is zeroed without checking whether the segment is writable (a sketch of one possible guard follows after the diffs below):
+```
+    if (host_start < host_map_start) {
+        memset((void *)host_start, 0, host_map_start - host_start);
+    }
+```
+Steps to reproduce:
+1. ./qemu-aarch64 ./X.so
+Additional information:
+readelf output of X.so:
+```
+Program Headers:
+  Type          Offset             VirtAddr           PhysAddr           FileSiz            MemSiz             Flags  Align
+  PHDR          0x0000000000000040 0x0000000000000040 0x0000000000000040 0x0000000000000230 0x0000000000000230 R E    0x8
+  LOAD          0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000110270 0x00000000001c94e0 R E    0x10000
+  LOAD          0x0000000000129bd0 0x00000000001d9bd0 0x00000000001d9bd0 0x0000000000000438 0x00000000000004c0 RW     0x10000
+  LOAD          0x000000000013a008 0x00000000001ea008 0x00000000001ea008 0x0000000000017bd0 0x0000000000017bd0 RW     0x10000
+  LOAD          0x0000000000161bd8 0x0000000000211bd8 0x0000000000211bd8 0x000000000000f740 0x000000000000f740 RW     0x10000
+  DYNAMIC       0x0000000000161e60 0x0000000000211e60 0x0000000000211e60 0x00000000000001e0 0x00000000000001e0 RW     0x8
+  INTERP        0x0000000000089410 0x0000000000089410 0x0000000000089410 0x0000000000000015 0x0000000000000015 R      0x1
+      [Requesting program interpreter: /system/bin/linker64]
+  NOTE          0x000000000013dbc8 0x00000000001edbc8 0x00000000001edbc8 0x0000000000000011 0x0000000000000011 R      0x1
+  GNU_EH_FRAME  0x00000000001c86a4 0x00000000001c86a4 0x00000000001c86a4 0x00000000000002dc 0x00000000000002dc R      0x4
+  GNU_STACK     0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000 RW     0x10
+```
+
+X.so: https://drive.google.com/file/d/1A7mkWRcK2BKkpeevt8T6FVLg-t6mWdgi/view?usp=sharing
diff --git a/results/classifier/108/other/1102027 b/results/classifier/108/other/1102027
new file mode 100644
index 00000000..223ca8f3
--- /dev/null
+++ b/results/classifier/108/other/1102027
@@ -0,0 +1,127 @@
+permissions: 0.701
+semantic: 0.697
+device: 0.668
+graphic: 0.659
+other: 0.655
+PID: 0.651
+performance: 0.642
+network: 0.638
+socket: 0.626
+debug: 0.613
+files: 0.612
+vnc: 0.607
+KVM: 0.603
+boot: 0.595
+
+QED Time travel
+
+This night, after a reboot of a VM, it was back to 8 Oct. 2012; I've lost all data between 8 Oct. 2012 and now. I've checked the QED file and mounted it on another VM; all seems OK.
+
+This QED has a raw backing file with the base OS (Debian) shared with many other QEDs. It has NO snapshot.
+
+QEMU emulator version 1.1.2
+
+Does anyone have a hint?
+
+On Sun, Jan 20, 2013 at 11:54:33AM -0000, Mekza wrote:
+> Public bug reported:
+>
+> This night, after a reboot of a VM, it was back to 8 Oct. 2012; I've lost
+> all data between 8 Oct. 2012 and now. I've checked the QED file and mounted it
+> on another VM; all seems OK.
+
+Hi Mekza,
+Are you able to reproduce this issue, or did this happen only once?
+
+Does "all seems OK" mean that you still only saw the files from 8 Oct
+2012 when you attached the image to another VM?
+
+> Does anyone have a hint?
+
+There's not a lot of information here to go by. If the issue is
+reproducible, it should be possible to collect more information, starting
+with the steps to reproduce the issue.
+
+Stefan
+
+
+On Tue, Jan 29, 2013 at 1:46 PM, Stefan Hajnoczi <email address hidden> wrote:
+
+> On Sun, Jan 20, 2013 at 11:54:33AM -0000, Mekza wrote:
+> > Public bug reported:
+> >
+> > This night, after a reboot of a VM, it was back to 8 Oct. 2012; I've lost
+> > all data between 8 Oct. 2012 and now. I've checked the QED file and mounted it
+> > on another VM; all seems OK.
+>
+> Hi Mekza,
+> Are you able to reproduce this issue, or did this happen only once?
+>
+
+Hi Stefan,
+
+I already had this bug once, and after an indeterminate period (days or even
+weeks) and a reboot of the VM, the FS was back.
+
+>
+> Does "all seems OK" mean that you still only saw the files from 8 Oct
+> 2012 when you attached the image to another VM?
+>
+
+"All seems OK" means the QED file is not corrupted and is consistent. I still
+have the files from 8 Oct 2012.
+
+>
+> > Does anyone have a hint?
+>
+> There's not a lot of information here to go by. If the issue is
+> reproducible, it should be possible to collect more information, starting
+> with the steps to reproduce the issue.
+>
+
+I know; I'm going to copy the QED file, convert it to RAW, and attach it to a
+new VM. I'll keep you posted.
+
+>
+> Stefan
+>
+
+--
+Martin-Zack Mekkaoui
+
+
+Have you ever been able to reproduce this issue?
+
+[Expired for QEMU because there has been no activity for 60 days.]
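
For the zero_bss report in results/classifier/108/other/1102 above, a minimal sketch of the kind of guard the reporter is asking for: when the PT_LOAD segment was mapped without PROT_WRITE (flags R E, MemSiz > FileSiz), temporarily add write permission around the memset of the partial page and then restore it. This assumes the caller still has the segment's mmap protection in `prot`; the function name `zero_bss_tail` and its signature are illustrative, not QEMU's actual fix.
```
/*
 * Illustrative only -- not QEMU's real patch. Zero the tail of the last
 * file-backed page of a PT_LOAD segment ([host_start, host_map_start)),
 * tolerating segments that were mapped read-only/executable.
 */
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int zero_bss_tail(uintptr_t host_start, uintptr_t host_map_start, int prot)
{
    if (host_start >= host_map_start) {
        return 0;                       /* nothing left to zero on this page */
    }

    uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);
    uintptr_t page_start = host_start & ~(page - 1);
    size_t len = host_map_start - page_start;

    if (!(prot & PROT_WRITE)) {
        /* Segment is not writable (e.g. flags R E): add write access first. */
        if (mprotect((void *)page_start, len, prot | PROT_WRITE) != 0) {
            return -1;
        }
    }

    memset((void *)host_start, 0, host_map_start - host_start);

    if (!(prot & PROT_WRITE)) {
        /* Restore the original protection so the guest view is unchanged. */
        if (mprotect((void *)page_start, len, prot) != 0) {
            return -1;
        }
    }
    return 0;
}
```
An alternative with the same effect is to create that last page writable up front and drop the write bit after zeroing; either way the guest-visible protection ends up matching the ELF segment flags.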