| author | Christian Krinitsin <mail@krinitsin.com> | 2025-06-30 12:24:58 +0000 |
|---|---|---|
| committer | Christian Krinitsin <mail@krinitsin.com> | 2025-06-30 12:27:06 +0000 |
| commit | 33606b41d35115f887ea688b1a16f2ff85bf2fe4 (patch) | |
| tree | 406b2c7b19a087ba437c68f3dbf0b589fa1d6150 /results/scraper/launchpad-without-comments/1701449 | |
| parent | adedf8771bc4de3113041ca21bd4d0d1c0014b6a (diff) | |
| download | emulator-bug-study-33606b41d35115f887ea688b1a16f2ff85bf2fe4.tar.gz emulator-bug-study-33606b41d35115f887ea688b1a16f2ff85bf2fe4.zip | |
add launchpad bug reports without comments
Diffstat (limited to 'results/scraper/launchpad-without-comments/1701449')
| -rw-r--r-- | results/scraper/launchpad-without-comments/1701449 | 62 |
1 file changed, 62 insertions, 0 deletions
diff --git a/results/scraper/launchpad-without-comments/1701449 b/results/scraper/launchpad-without-comments/1701449
new file mode 100644
index 00000000..60183a0d
--- /dev/null
+++ b/results/scraper/launchpad-without-comments/1701449
@@ -0,0 +1,62 @@
+high memory usage when using rbd with client caching
+
+Hi,
+we are experiencing quite high memory usage by a single qemu (used with KVM) process when using RBD with client caching as a disk backend. We are testing with 3GB-memory qemu virtual machines and a 128MB RBD client cache. When running 'fio' in the virtual machine, you can see that after some time the machine uses a lot more memory (RSS) on the hypervisor than it should. We have seen values of 250% memory overhead (on real production machines, not artificial fio tests). I reproduced this with qemu version 2.9 as well.
+
+Here are the contents of our ceph.conf on the hypervisor:
+"""
+[client]
+rbd cache writethrough until flush = False
+rbd cache max dirty = 100663296
+rbd cache size = 134217728
+rbd cache target dirty = 50331648
+"""
+
+How to reproduce:
+* create a virtual machine with an RBD-backed disk (100GB or so)
+* install a Linux distribution on it (we are using Ubuntu)
+* install fio (apt-get install fio)
+* run fio multiple times with (e.g.) the following test file:
+"""
+# This job file tries to mimic the Intel IOMeter File Server Access Pattern
+[global]
+description=Emulation of Intel IOmeter File Server Access Pattern
+randrepeat=0
+filename=/root/test.dat
+# IOMeter defines the server loads as the following:
+# iodepth=1     Linear
+# iodepth=4     Very Light
+# iodepth=8     Light
+# iodepth=64    Moderate
+# iodepth=256   Heavy
+iodepth=8
+size=80g
+direct=0
+ioengine=libaio
+
+[iometer]
+stonewall
+bs=4M
+rw=randrw
+
+[iometer_just_write]
+stonewall
+bs=4M
+rw=write
+
+[iometer_just_read]
+stonewall
+bs=4M
+rw=read
+"""
+
+You can measure the virtual machine's RSS usage on the hypervisor with:
+    virsh dommemstat <machine name> | grep rss
+or, if you are not using libvirt:
+    grep RSS /proc/<PID of qemu process>/status
+
+When the RBD client cache is switched off, everything is fine again: the process no longer uses that much memory.
+
+There is already a ticket on the Ceph bug tracker for this ([1]). However, I can reproduce this memory behaviour only when using qemu (maybe it uses librbd in a special way?). Running 'fio' directly with the rbd engine does not result in such high memory usage.
+
+[1] http://tracker.ceph.com/issues/20054
\ No newline at end of file
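The measurement described in the report can be left running on the hypervisor while fio works inside the guest, so the RSS growth shows up over time. Below is a minimal sketch of such a sampling loop, assuming libvirt is in use; the domain name `testvm` and the 30-second interval are placeholders, not part of the original report:

```sh
#!/bin/sh
# Sample the qemu process RSS, as reported by libvirt, every 30 seconds.
# DOMAIN is a hypothetical name; replace it with the actual libvirt domain.
DOMAIN=testvm

while true; do
    printf '%s ' "$(date -u '+%Y-%m-%dT%H:%M:%SZ')"
    virsh dommemstat "$DOMAIN" | grep rss
    sleep 30
done
```

Without libvirt, the same loop works with `grep RSS /proc/<PID of qemu process>/status` in place of the `virsh dommemstat` call, as noted in the report.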