author    Kevin Wolf <kwolf@redhat.com>    2020-07-07 16:46:29 +0200
committer Kevin Wolf <kwolf@redhat.com>    2020-07-14 15:18:59 +0200
commit    d0ceea88dea053e0c1c038d42ca98782c2e3872d (patch)
tree      64388483bf4ed6fb56a4eee157092efb2007e56c
parent    4b196cd16dcfb17de19a4121f12aa4ef4bf7925f (diff)
qemu-img map: Don't limit block status request size
Limiting each loop iteration of qemu-img map to 1 GB was arbitrary from
the beginning, though it only cut the maximum in half then because the
interface was a signed 32 bit byte count. These days, bdrv_block_status
supports a 64 bit byte count, so the arbitrary limit is even worse.

On file-posix, bdrv_block_status() eventually maps to SEEK_HOLE and
SEEK_DATA, which don't support a limit, but always do all of the work
necessary to find the start of the next hole/data. Much of that work is
repeated if we don't use the information fully, but instead query again
with an only slightly larger offset in the next loop iteration.
Therefore, if bdrv_block_status() is called in a loop, it should always
be passed the full number of bytes that the whole loop is interested in.

This removes the arbitrary limit and speeds up 'qemu-img map'
significantly on heavily fragmented images.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200707144629.51235-1-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
 qemu-img.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/qemu-img.c b/qemu-img.c
index 498fbf42fe..4548dbff82 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -3210,12 +3210,9 @@ static int img_map(int argc, char **argv)
     curr.start = start_offset;
     while (curr.start + curr.length < length) {
         int64_t offset = curr.start + curr.length;
-        int64_t n;
+        int64_t n = length - offset;
 
-        /* Probe up to 1 GiB at a time.  */
-        n = MIN(1 * GiB, length - offset);
         ret = get_block_status(bs, offset, n, &next);
-
         if (ret < 0) {
             error_report("Could not read file metadata: %s", strerror(-ret));
             goto out;