Diffstat (limited to 'classification_output/02/other/32484936')
| -rw-r--r-- | classification_output/02/other/32484936 | 224 |
1 files changed, 0 insertions, 224 deletions
diff --git a/classification_output/02/other/32484936 b/classification_output/02/other/32484936
deleted file mode 100644
index 204f2e264..000000000
--- a/classification_output/02/other/32484936
+++ /dev/null
@@ -1,224 +0,0 @@
-other: 0.856
-semantic: 0.832
-instruction: 0.829
-boot: 0.810
-mistranslation: 0.794
-
-[Qemu-devel] [Snapshot Bug?] Qcow2 metadata corruption
-
-Hi all,
-There is a problem with the qcow2 image files of several of my VMs that I could not figure out,
-so I have to ask for some help.
-Here is the situation:
-At first I found some data corruption in one VM, so I ran qemu-img check on all my VMs.
-Parts of the check report:
-3-Leaked cluster 2926229 refcount=1 reference=0
-4-Leaked cluster 3021181 refcount=1 reference=0
-5-Leaked cluster 3021182 refcount=1 reference=0
-6-Leaked cluster 3021183 refcount=1 reference=0
-7-Leaked cluster 3021184 refcount=1 reference=0
-8-ERROR cluster 3102547 refcount=3 reference=4
-9-ERROR cluster 3111536 refcount=3 reference=4
-10-ERROR cluster 3113369 refcount=3 reference=4
-11-ERROR cluster 3235590 refcount=10 reference=11
-12-ERROR cluster 3235591 refcount=10 reference=11
-423-Warning: cluster offset=0xc000c00020000 is after the end of the image file, can't properly check refcounts.
-424-Warning: cluster offset=0xc000c000c0000 is after the end of the image file, can't properly check refcounts.
-425-Warning: cluster offset=0xc0001000c0000 is after the end of the image file, can't properly check refcounts.
-426-Warning: cluster offset=0xc000c000c0000 is after the end of the image file, can't properly check refcounts.
-427-Warning: cluster offset=0xc000c000c0000 is after the end of the image file, can't properly check refcounts.
-428-Warning: cluster offset=0xc000c000c0000 is after the end of the image file, can't properly check refcounts.
-429-Warning: cluster offset=0xc000c000c0000 is after the end of the image file, can't properly check refcounts.
-430-Warning: cluster offset=0xc000c00010000 is after the end of the image file, can't properly check refcounts.
-After a further look, I found two L2 entries pointing to the same cluster, and this occurred in several qcow2 image files of different VMs.
-Like this:
-table entry conflict (with our qcow2 check tool):
-a table offset : 0x00000093f7080000 level : 2, l1 table entry 100, l2 table entry 7
-b table offset : 0x00000093f7080000 level : 2, l1 table entry 5, l2 table entry 7
-table entry conflict :
-a table offset : 0x00000000a01e0000 level : 2, l1 table entry 100, l2 table entry 19
-b table offset : 0x00000000a01e0000 level : 2, l1 table entry 5, l2 table entry 19
-table entry conflict :
-a table offset : 0x00000000a01d0000 level : 2, l1 table entry 100, l2 table entry 18
-b table offset : 0x00000000a01d0000 level : 2, l1 table entry 5, l2 table entry 18
-table entry conflict :
-a table offset : 0x00000000a01c0000 level : 2, l1 table entry 100, l2 table entry 17
-b table offset : 0x00000000a01c0000 level : 2, l1 table entry 5, l2 table entry 17
-table entry conflict :
-a table offset : 0x00000000a01b0000 level : 2, l1 table entry 100, l2 table entry 16
-b table offset : 0x00000000a01b0000 level : 2, l1 table entry 5, l2 table entry 16
-I think the problem is related to snapshot creation and deletion, but I cannot reproduce it.
-Can anyone give a hint about how this happened?
-QEMU version 2.0.1; I downloaded the source code and installed it with make install.
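The duplicate-mapping condition the report above describes (two L2 entries pointing at the same host cluster) can be sketched as follows. This is an illustrative model only: plain dicts stand in for the on-disk L1/L2 tables, and `find_conflicts` is a hypothetical helper, not QEMU code or the poster's check tool.

```python
from collections import defaultdict

def find_conflicts(l1_tables):
    """l1_tables maps l1_index -> {l2_index: host_cluster_offset}.
    Returns the host clusters claimed by more than one L2 entry."""
    owners = defaultdict(list)
    for l1_idx, l2_table in l1_tables.items():
        for l2_idx, offset in l2_table.items():
            if offset:  # offset 0 means "unallocated" in this toy model
                owners[offset].append((l1_idx, l2_idx))
    # Any cluster with more than one owner is exactly the kind of
    # "table entry conflict" shown in the report.
    return {off: refs for off, refs in owners.items() if len(refs) > 1}

# Mirrors the first conflict above: L1 entries 100 and 5 both map their
# L2 entry 7 to host cluster 0x93f7080000.
tables = {100: {7: 0x93f7080000}, 5: {7: 0x93f7080000}, 3: {0: 0xa0000000}}
print(find_conflicts(tables))
```

A doubly-owned cluster like this means a guest write through one mapping silently corrupts the data seen through the other, which matches the symptoms described.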
-Qemu parameters:
-/usr/bin/kvm -chardev socket,id=qmp,path=/var/run/qemu-server/5855899639838.qmp,server,nowait -mon chardev=qmp,mode=control -vnc :0,websocket,to=200 -enable-kvm -pidfile /var/run/qemu-server/5855899639838.pid -daemonize -name yfMailSvr-200.200.0.14 -smp sockets=1,cores=4 -cpu core2duo,hv_spinlocks=0xffff,hv_relaxed,hv_time,hv_vapic,+sse4.1,+sse4.2,+x2apic,+erms,+smep,+fsgsbase,+f16c,+dca,+pcid,+pdcm,+xtpr,+ht,+ss,+acpi,+ds -nodefaults -vga cirrus -k en-us -boot menu=on,splash-time=8000 -m 8192 -usb -drive if=none,id=drive-ide0,media=cdrom,aio=native -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0 -drive file=/sf/data/36b82a720d3a278001ba904e80c20c13e_ecf4bbbf3e94/images/host-ecf4bbbf3e94/784f3f08532a/yfMailSvr-200.200.0.14.vm/vm-disk-1.qcow2,if=none,id=drive-virtio1,cache=none,aio=native -device virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb -drive file=/sf/data/36b82a720d3a278001ba904e80c20c13e_ecf4bbbf3e94/images/host-ecf4bbbf3e94/784f3f08532a/yfMailSvr-200.200.0.14.vm/vm-disk-2.qcow2,if=none,id=drive-virtio2,cache=none,aio=native -device virtio-blk-pci,drive=drive-virtio2,id=virtio2,bus=pci.0,addr=0xc,bootindex=101 -netdev type=tap,id=net0,ifname=585589963983800,script=/sf/etc/kvm/vtp-bridge,vhost=on,vhostforce=on -device virtio-net-pci,romfile=,mac=FE:FC:FE:F0:AB:BA,netdev=net0,bus=pci.0,addr=0x12,id=net0 -rtc driftfix=slew,clock=rt,base=localtime -global kvm-pit.lost_tick_policy=discard -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1
-Thanks
-Sangfor VT.
-leijian
-
-Am 03.04.2015 um 12:04 hat leijian geschrieben:
-> Hi all,
->
-> There is a problem with the qcow2 image files of several of my VMs that I
-> could not figure out, so I have to ask for some help.
-> [...]
-> I think the problem is related to snapshot creation and deletion, but I
-> cannot reproduce it.
-> Can anyone give a hint about how this happened?
-
-How did you create/delete your snapshots?
-
-More specifically, did you take care to never access your image from
-more than one process (except if both are read-only)? It happens
-occasionally that people use 'qemu-img snapshot' while the VM is
-running. This is wrong and can corrupt the image.
-
-Kevin
-
-On 04/07/2015 03:33 AM, Kevin Wolf wrote:
-> More specifically, did you take care to never access your image from
-> more than one process (except if both are read-only)? It happens
-> occasionally that people use 'qemu-img snapshot' while the VM is
-> running. This is wrong and can corrupt the image.
-
-Since this has been done by more than one person, I'm wondering if there
-is something we can do in the qcow2 format itself to make it harder for
-the casual user to cause corruption. Maybe if we declare some bit or
-extension header for an image open for writing, which other readers can
-use as a warning ("this image is being actively modified; reading it may
-fail"), and other writers can use to deny access ("another process is
-already modifying this image"), where a writer should set that bit
-before writing anything else in the file, then clear it on exit. Of
-course, you'd need a way to override the bit to actively clear it to
-recover from the case of a writer dying unexpectedly without resetting
-it normally. And it won't help the case of a reader opening the file
-first, followed by a writer, where the reader could still get thrown off
-track.
-
-Or maybe we could document in the qcow2 format that all readers and
-writers should attempt to obtain the appropriate flock() permissions [or
-other appropriate advisory locking scheme] over the file header, so that
-cooperating processes that both use advisory locking will know when the
-file is in use by another process.
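The writer-flag bit proposed above could behave roughly like this toy model: a single flag byte at a hypothetical header offset, not the real qcow2 header layout, and without the crash-recovery override the proposal notes would be needed.

```python
import os

FLAG_OFFSET = 0  # hypothetical header position for the "in use" flag byte

class ImageWriter:
    """Toy model of the proposed writer flag; NOT the real qcow2 format."""

    def __init__(self, path):
        self.fd = os.open(path, os.O_RDWR)
        os.lseek(self.fd, FLAG_OFFSET, os.SEEK_SET)
        if os.read(self.fd, 1) == b"\x01":
            os.close(self.fd)
            raise RuntimeError("another process is already modifying this image")
        # Set the flag before writing anything else in the file.
        os.lseek(self.fd, FLAG_OFFSET, os.SEEK_SET)
        os.write(self.fd, b"\x01")

    def close(self):
        # Clear the flag on clean exit. A writer that dies unexpectedly
        # leaves it set, hence the need for an explicit override path.
        os.lseek(self.fd, FLAG_OFFSET, os.SEEK_SET)
        os.write(self.fd, b"\x00")
        os.close(self.fd)
```

As the message notes, this denies a second writer but cannot protect a reader that opened the file before the writer set the flag.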
-
---
-Eric Blake eblake redhat com +1-919-301-3266
-Libvirt virtualization library
-http://libvirt.org
-
-I created/deleted the snapshots with the QMP commands "snapshot_blkdev_internal" and "snapshot_delete_blkdev_internal", and to avoid the case you mentioned above I have added flock() locking in qemu_open().
-Here is the result of running qemu-img snapshot against a running VM:
-Diskfile:/sf/data/36c81f660e38b3b001b183da50b477d89_f8bc123b3e74/images/host-f8bc123b3e74/4a8d8728fcdc/Devried30030.vm/vm-disk-1.qcow2 is used! errno=Resource temporarily unavailable
-Could the two table entries end up pointing to the same cluster because the refcount of an in-use cluster unexpectedly drops to 0 and the cluster is then allocated again?
-If the image was not accessed from more than one process, what other failure cases can I test for?
-Thanks
-leijian
-
-From: Eric Blake
-Date: 2015-04-07 23:27
-To: Kevin Wolf; leijian
-CC: qemu-devel; stefanha
-Subject: Re: [Qemu-devel] [Snapshot Bug?] Qcow2 metadata corruption
-[...]
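The advisory-locking scheme Eric suggests, and which leijian reports adding to qemu_open(), can be sketched with fcntl.flock(). This is a minimal illustration, not QEMU's actual code; LOCK_NB makes a conflicting open fail with EAGAIN, the "Resource temporarily unavailable" errno shown in leijian's test output.

```python
import fcntl

def open_image(path, writable):
    # Writers take an exclusive lock, readers a shared one; LOCK_NB makes
    # a conflicting open fail immediately instead of blocking.
    f = open(path, "r+b" if writable else "rb")
    mode = fcntl.LOCK_EX if writable else fcntl.LOCK_SH
    try:
        fcntl.flock(f.fileno(), mode | fcntl.LOCK_NB)
    except OSError:
        f.close()
        raise RuntimeError(f"{path} is used by another process")
    return f  # the lock is released when the file object is closed
```

Because the lock is only advisory, it protects cooperating processes that all go through such a wrapper; a process that ignores it can still corrupt the image, which is why the proposal frames it as a documented convention for the format.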