Diffstat
-rw-r--r--  results/classifier/108/other/814      52
-rw-r--r--  results/classifier/108/other/814222  260
2 files changed, 312 insertions, 0 deletions
diff --git a/results/classifier/108/other/814 b/results/classifier/108/other/814
new file mode 100644
index 000000000..3e7c2e197
--- /dev/null
+++ b/results/classifier/108/other/814
@@ -0,0 +1,52 @@

graphic: 0.824
other: 0.814
permissions: 0.811
debug: 0.808
device: 0.787
semantic: 0.785
network: 0.773
socket: 0.772
PID: 0.769
files: 0.764
KVM: 0.746
boot: 0.745
vnc: 0.743
performance: 0.738

On Windows, qcow2 is corrupted on expansion

Description of problem:
On Windows, the qcow2 image loses blocks, so the filesystem within it is corrupted as data is copied to it, in the same way that #727 describes VHDX corruption on expansion on both Linux and Windows.

After filing a bug for WNBD (https://github.com/cloudbase/wnbd/issues/63), I was advised to try raw and qcow2. In the process I found that qcow2 is also affected. It is also true that the kernel 5.15.4 ... 5.15.13 series has been buggy (https://bugzilla.kernel.org/show_bug.cgi?id=215460).
On Linux, qcow2 never showed any signs of corruption.
On Windows, however, qcow2 does corrupt.

It is possible that, because Linux is so much more efficient at file and disk I/O, the kernel block code, the qemu block code and the qemu qcow2 code do not hit the bug, so the corruption does not show up as easily on Linux. Windows, being a little slower at this, might be causing the bug to show up in this qcow2 test. The issue is possibly more likely to show up on slower machines. I am using a 2013-era Intel 4th-gen i7-4700MQ Haswell machine.

It is possible that the resolution for this issue and the one for #727 are the same or very closely related. The bug may not be in qcow2.c or vhdx.c but in the qemu block subsystem. If a data block arriving from the VM/NBD interface has to be written to the file but never reaches the virtual-disk code, and is never allocated and written, then that data block is lost.

Steps to reproduce:
1. Prepare virtual-disk1 as an empty qcow2. In my setup, the qcow2 file resides on a 150 GiB ExFAT partition on a 512 GiB SSD. I use ExFAT because it has no concept of sparse files, eliminating that factor from troubleshooting.
   ```qemu-img.exe create -f qcow2 H:\gkpics01.qcow2 99723771904```
2. Prepare virtual-disk2 as a VHDX with synthetically generated data (sgdata). Scriptlets to recreate sgdata are described in https://gitlab.com/qemu-project/qemu/-/issues/727#note_739930694 . In my setup, the vhdx file resides on a 1 TiB NTFS partition on a 2 TiB HDD.
3. Start qemu with the arguments as given above.
4. Inside the VM, boot and bring up the live-CD desktop, close the installer and open a terminal.
5. Use gdisk to create a partition on /dev/sda.
6. Put an ext4 filesystem on sda1: ```mkfs.ext4 -L fs_gkpics01 /dev/sda1```
7. Create mount directories: ```mkdir /mnt/a /mnt/b```
8. Mount the empty partition from virtual-disk1: ```mount -t ext4 /dev/sda1 /mnt/a```
9. Mount the sgdata partition from virtual-disk2: ```mount.ntfs-3g /dev/sdb2 /mnt/b``` or ```mount -t ntfs3 /dev/sdb2 /mnt/b```
10. Keep a terminal tab open with ```dmesg -w``` running.
11. Rsync sgdata: ```( sdate=`date` ; cd /mnt/b ; rsync -avH ./photos001 /mnt/a | tee /tmp/rst.txt ; echo $sdate ; date )```
12. Check the sha256sums: ```( sdate=`date` ; cd /mnt/a/photos001 ; sha256sum -c ./find.CHECKSUM --quiet ; echo $sdate ; date )```
    Corruption will show even without needing to unmount-remount or reboot-remount.

- About 1.4 GiB of free space is left on the ext4 partition.
- Compared to #727, fewer files are corrupted: ``` sha256sum: WARNING: 31 computed checksums did not match ```
- After a VM guest OS warm reboot, a recheck of the sha256sums shows the same 31 files as corrupted.
- After qemu poweroff, qemu restart and VM guest OS cold boot, a recheck of the sha256sums shows the same 31 files as corrupted.
- df shows: sda1 has 95271336 1K blocks, of which 88840860 are used and 1544820 are available, 99% used. The numbers don't add up. Either file blocks are lost in lost clusters, or the ext4 filesystem has a large journal, or the filesystem metadata is too large, or the ext4 filesystem has a large cluster size that results in inefficient space usage (one more possible explanation is sketched just after this list).
- ```umount /dev/sda1 ; fsck -y /dev/sda1 ; mount -t ext4 /dev/sda1 /mnt/a``` did not find any lost clusters.
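One more possible explanation for the df numbers, offered as a hedged guess rather than something verified on this setup: 95271336 - 88840860 = 6430476 blocks should be free, yet only 1544820 are reported available, and the missing ~4.9 million 1K blocks (~4.7 GiB) roughly match ext4's default 5% reserved-blocks pool (95271336 x 0.05 ≈ 4763567). Assuming tune2fs from e2fsprogs is available in the guest, the reserve can be checked directly:

```
# Compare the ext4 reserved block count against the gap df reports:
# (1K blocks total) - (used) - (available) should be close to this value.
tune2fs -l /dev/sda1 | grep -i 'reserved block count'
```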
The reason I don't think this is a kernel bug is that a raw file used as virtual-disk1 does not show this issue. It also happens regardless of whether sgdata is mounted with ntfs-3g or the Paragon ntfs3 driver.
Additional information:


diff --git a/results/classifier/108/other/814222 b/results/classifier/108/other/814222
new file mode 100644
index 000000000..4eb096740
--- /dev/null
+++ b/results/classifier/108/other/814222
@@ -0,0 +1,260 @@

debug: 0.675
semantic: 0.641
other: 0.555
permissions: 0.529
PID: 0.513
graphic: 0.500
KVM: 0.499
files: 0.421
vnc: 0.413
socket: 0.410
device: 0.408
boot: 0.391
network: 0.382
performance: 0.307

kvm cannot use vhd files over 127GB

The primary use case for using VHDs with KVM is to convert them to raw image files so that one can move from Hyper-V to Linux KVM. See more on this at http://blog.allanglesit.com/2011/03/linux-kvm-migrating-hyper-v-vhd-images-to-kvm/

# kvm-img convert -f vpc -O raw /root/file.vhd /root/file.img

The above works great if you have VHDs smaller than 127GB. However, if the VHD is larger, no error is generated during the conversion, but it appears to process only up to that 127GB barrier and no more. Also of note: VHDs can be run directly by KVM if they are smaller than 127GB. VHDs can be read and work fine in VirtualBox as well as Hyper-V, so I suspect the problem lies not with the VHD format (which has a 2TB limit) but with how qemu-kvm interprets it.
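As background on the 127GB figure: the VHD footer stores a CHS geometry, and the specification's geometry algorithm tops out at 65535 cylinders, 16 heads and 255 sectors per track. The arithmetic below is an illustration of that cap, not qemu output; it yields exactly the byte count kvm-img reports for the oversized image further down.

```
# Largest disk expressible through the VHD footer CHS geometry
# (65535 cylinders * 16 heads * 255 sectors/track * 512-byte sectors):
echo $(( 65535 * 16 * 255 * 512 ))
# -> 136899993600 bytes, shown below as "virtual size: 127G (136899993600 bytes)"
```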
BORING VERSION INFO:
# cat /etc/issue
Ubuntu 11.04 \n \l
# uname -rmiv
2.6.38-8-server #42-Ubuntu SMP Mon Apr 11 03:49:04 UTC 2011 x86_64 x86_64
# apt-cache policy kvm
kvm:
  Installed: 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4.1
  Candidate: 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4.1
  Version table:
 *** 1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4.1 0
        500 http://apt.sonosite.com/ubuntu/ natty-updates/main amd64 Packages
        500 http://apt.sonosite.com/ubuntu/ natty-security/main amd64 Packages
        100 /var/lib/dpkg/status
     1:84+dfsg-0ubuntu16+0.14.0+noroms+0ubuntu4 0
        500 http://apt.sonosite.com/ubuntu/ natty/main amd64 Packages
# apt-cache policy libvirt-bin
libvirt-bin:
  Installed: 0.8.8-1ubuntu6.2
  Candidate: 0.8.8-1ubuntu6.2
  Version table:
 *** 0.8.8-1ubuntu6.2 0
        500 http://apt.sonosite.com/ubuntu/ natty-updates/main amd64 Packages
        500 http://apt.sonosite.com/ubuntu/ natty-security/main amd64 Packages
        100 /var/lib/dpkg/status
     0.8.8-1ubuntu6 0
        500 http://apt.sonosite.com/ubuntu/ natty/main amd64 Packages

qemu-img version 0.14.0

# vboxmanage -v
4.0.12r72916


REPRODUCTION STEPS (requires Windows 7 or Windows 2008 R2 with < 1GB of free space)

## WINDOWS MACHINE ##

Use Computer Management > Disk Management:
- Create 2 VHD files, both dynamically expanding, 120GB and 140GB respectively.
- Do not initialize or format them.

These files will need to be transferred to an Ubuntu KVM machine (pscp is what I used, but USB would work as well).

## UBUNTU KVM MACHINE ##

# ls *.vhd
120g-dyn.vhd 140g-dyn.vhd
# kvm-img info 120g-dyn.vhd
image: 120g-dyn.vhd
file format: vpc
virtual size: 120G (128847052800 bytes)
disk size: 244K
# kvm-img info 140g-dyn.vhd
image: 140g-dyn.vhd
file format: vpc
virtual size: 127G (136899993600 bytes)
disk size: 284K
# kvm-img info 120g-dyn.vhd | grep "virtual size"
virtual size: 120G (128847052800 bytes)
# kvm-img info 140g-dyn.vhd | grep "virtual size"
virtual size: 127G (136899993600 bytes)

Regardless of how big the second VHD is, I always get a virtual size of 127G.

Now if we use VirtualBox to view the VHDs, we see markedly different results.

# VBoxManage showhdinfo 120g-dyn.vhd
UUID: e63681e0-ff12-4114-85de-7d13562b36db
Accessible: yes
Logical size: 122880 MBytes
Current size on disk: 0 MBytes
Type: normal (base)
Storage format: VHD
Format variant: dynamic default
Location: /root/120g-dyn.vhd
# VBoxManage showhdinfo 140g-dyn.vhd
UUID: 94531905-46b4-469f-bb44-7a7d388fb38f
Accessible: yes
Logical size: 143360 MBytes
Current size on disk: 0 MBytes
Type: normal (base)
Storage format: VHD
Format variant: dynamic default
Location: /root/140g-dyn.vhd

# kvm-img convert -f vpc -O raw 120g-dyn.vhd 120g-dyn.img
#
# kvm-img convert -f vpc -O raw 140g-dyn.vhd 140g-dyn.img
#

# kvm-img info 120g-dyn.img
image: 120g-dyn.img
file format: raw
virtual size: 120G (128847052800 bytes)
disk size: 0
# kvm-img info 120g-dyn.img | grep "virtual size"
virtual size: 120G (128847052800 bytes)
# kvm-img info 140g-dyn.img
image: 140g-dyn.img
file format: raw
virtual size: 127G (136899993600 bytes)
disk size: 0
# kvm-img info 140g-dyn.img | grep "virtual size"
virtual size: 127G (136899993600 bytes)

Notice that after the conversion the raw image has taken on the truncated geometry of the VHD, rendering that image invalid.

VBoxManage has a clonehd option which successfully converts the VHD to a raw image, which KVM then sees properly.
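A sketch of that clonehd path, assuming VirtualBox 4.x syntax (to be checked against `VBoxManage clonehd --help`) and reusing the file names from the example above:

```
# Convert the dynamic VHD to a raw image with VirtualBox instead of kvm-img,
# then check it with kvm-img; the full 140G virtual size should be preserved.
VBoxManage clonehd 140g-dyn.vhd 140g-dyn-vbox.img --format RAW
kvm-img info 140g-dyn-vbox.img
```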
For giggles I also tested with a 140GB fixed VHD (created in the same manner as above) and it displayed the correct virtual size, so a good workaround is to convert your VHDs to fixed and then use kvm-img to convert them.
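A sketch of that fixed-VHD workaround, again assuming VirtualBox 4.x is available on the KVM host (creating the VHD as fixed on the Windows side works just as well); option spellings should be verified against `VBoxManage clonehd --help`:

```
# Rewrite the dynamic VHD as a fixed-size VHD, then convert it with kvm-img.
VBoxManage clonehd 140g-dyn.vhd 140g-fixed.vhd --format VHD --variant Fixed
kvm-img convert -f vpc -O raw 140g-fixed.vhd 140g.img
kvm-img info 140g.img   # should now report the full 140G virtual size
```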
Keep in mind that the VHDs from these reproduction steps have no filesystem and therefore no valid data. If there were, for example, an NTFS filesystem with a text file, the problem would still occur; more importantly, a guest trying to use the converted disk would not be able to open it, because it cannot find the final sector.

So long story short, I think we are dealing with 2 issues here:

1) kvm not being able to deal with dynamic VHD files larger than 127GB
2) kvm-img not generating an error when it "fails" at converting or displaying information on dynamic VHDs larger than 127GB. The error should be something like "qemu-kvm does not support dynamic VHD files larger than 127GB..."

Thanks for taking the time to submit this bug.

It looks like the 127G limit is known. A recent patch went in to help with the symptom you are seeing, but unfortunately it only makes the failure detectable :) It's a start at least.

The following commit should be pulled in:

commit 6e9ea0c0629fe25723494a19498bedf4b781cbfa
Author: aurel32 <aurel32@c046a42c-6fe2-441c-8c8c-71466251a162>
Date:   Wed Apr 15 14:42:46 2009 +0000

    block-vpc: Don't silently create smaller image than requested

    The algorithm from the VHD specification for CHS calculation silently limits
    images to 127 GB which may confuse a user who requested a larger image. Better
    output an error message and abort.

    Signed-off-by: Kevin Wolf <email address hidden>
    Signed-off-by: Aurelien Jarno <email address hidden>

    git-svn-id: svn://svn.savannah.nongnu.org/qemu/trunk@7109 c046a42c-6fe2-441c-8c8c-71466251a162


Hm, no - that patch is already in natty and oneiric's qemu-kvm.

I'm afraid I'll need time to find a place where I can reproduce this.

As converting to fixed is listed as a workaround, I'm changing the priority to low per the priority definitions.

Are the priority definitions documented somewhere?

I personally think you were right on when you had the priority at medium, primarily because no error is generated. It can't just silently fail. If it generated an error (so that people knew they needed to look for a workaround), then I would agree that fixing the bug itself would be a low priority, as the workaround is simple for anyone to implement.

@Matthew,

yes, the definitions are at https://wiki.ubuntu.com/Bugs/Importance.

Your reasoning makes sense. I'll bump it back up, thanks.

This could be tough to reproduce, as I don't seem to have a way to create a vpc image > 127G in the first place:

root@ip-10-36-186-165:/mnt# qemu-img create -f vpc vpc2.img 130G
Formatting 'vpc2.img', fmt=vpc size=139586437120
qemu-img: The image size is too large for file format 'vpc'

root@ip-10-36-186-165:/mnt# qemu-img create -f raw raw1.img 130G
Formatting 'raw1.img', fmt=raw size=139586437120
root@ip-10-36-186-165:/mnt# qemu-img convert -f raw -O vpc raw1.img vpc1.img
qemu-img: The image size is too large for file format 'vpc'

root@ip-10-36-186-165:/mnt# qemu-img create -f vpc vpc1.img 127G
Formatting 'vpc1.img', fmt=vpc size=136365211648
root@ip-10-36-186-165:/mnt# qemu-img convert -f vpc -O raw vpc1.img raw2.img
root@ip-10-36-186-165:/mnt# qemu-img info raw2.img
image: raw2.img
file format: raw
virtual size: 127G (136365219840 bytes)
disk size: 0


This is a dynamically expanding VHD file created using the reproduction steps above on Windows 7. This one is 120GB and converts correctly.
It has not been formatted or even initialized.
"kvm-img info 120g-dynamic.vhd" shows the proper geometry.

This is a dynamically expanding VHD file created using the reproduction steps above on Windows 7. This one is 140GB and silently errors on conversion.
It has not been formatted or even initialized.
"kvm-img info 140g-dynamic.vhd" does not show the proper geometry.

I have attached a couple of VHDs that I created with Windows 7. These should be helpful for your reproduction.

Also, looking at your notes, it looks like the previous patch that was committed only affected image creation. Perhaps the same sort of check can be incorporated into the conversion process as well, so that you don't get the silent error.

@Matthew

thanks for the attachments.


This bug was fixed in the package qemu-kvm - 0.14.1+noroms-0ubuntu5

---------------
qemu-kvm (0.14.1+noroms-0ubuntu5) oneiric; urgency=low

  * debian/patches/vpc.patch: detect vpc files which are too big
    (LP: #814222)

 -- Serge Hallyn <email address hidden>  Mon, 12 Sep 2011 11:28:36 -0500

I came here from http://lists.gnu.org/archive/html/qemu-devel/2011-07/msg02806.html

Actually, I experience an issue which may be useful to you.

I have a corrupted VHD file (as explained in this thread: https://forums.virtualbox.org/viewtopic.php?f=7&t=20614 ). I wanted to follow this procedure to solve my issue:

 qemu-img convert -O raw miimagen.vhd miimagen.bin
 VBoxManage convertdd miimagen.bin miimagen.vdi

but qemu-img convert -O raw miimagen.vhd miimagen.bin triggers the error message: qemu-img: Could not open 'img.VHD': File too large.

Since my file is 52.6 GB and the output is raw format, I guess it should not trigger that exception? Or is that the normal behavior? Is there a way to bypass this limit? I use qemu-img version 1.0.

Hope it can help your development (and it can help me back).
Thanks,
simon

Looks like Serge's fix has been included here:
http://git.qemu.org/?p=qemu.git;a=commitdiff;h=efc8243d00ab4cf4fa05a9b
... so let's close this bug now.