"cannot set up guest memory" b/c no automatic clearing of Linux' cache
Version: qemu-2.6.0-1
Kernel: 4.4.13-1-MANJARO
Full script (shouldn't matter though): https://pastebin.com/Hp24PWNE
Problem:
When the host has been up and in use for a while, the page cache fills up to the point that the guest can no longer be started without dropping caches first.
Expected behavior:
QEMU should be able to request as much memory as it needs, causing Linux to drop cache pages if necessary. A user shouldn't have to diagnose this themselves and manually drop caches just to start QEMU with the required amount of memory.
My fix:
The following command (as root) is required before starting QEMU:
# sync && echo 3 > /proc/sys/vm/drop_caches
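For context, per the kernel's Documentation/sysctl/vm.txt: the value written to drop_caches selects what is freed, and only clean pages can be dropped, which is why the sync comes first:
# sync                               # flush dirty pages first; drop_caches only frees clean ones
# echo 1 > /proc/sys/vm/drop_caches  # free page cache only
# echo 2 > /proc/sys/vm/drop_caches  # free reclaimable slab (dentries and inodes)
# echo 3 > /proc/sys/vm/drop_caches  # free both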
Example:
$ sudo qemu.sh -m 10240 && echo success || echo failed
qemu-system-x86_64: cannot set up guest memory 'pc.ram': Cannot allocate memory
failed
$ free
              total        used        free      shared  buff/cache   available
Mem:       16379476     9126884     3462688      148480     3789904     5123572
Swap:             0           0           0
$ sudo sh -c 'sync && echo 3 > /proc/sys/vm/drop_caches'
$ free
              total        used        free      shared  buff/cache   available
Mem:       16379476     1694528    14106552      149772      578396    14256428
Swap:             0           0           0
$ sudo qemu.sh -m 10240 && echo success || echo failed
success
Hi Celmor,
That shouldn't happen! QEMU's allocation of memory is pretty normal, so really the question is for the kernel guys.
Having chatted to a colleague, two questions:
a) Does it still happen if you set /proc/sys/vm/overcommit_memory to 1?
b) What filesystem are you using (some have different behaviour when dealing with the caches).
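For reference, both questions can be answered without changing anything; a quick read-only sketch, assuming procfs and util-linux's lsblk are available:
$ cat /proc/sys/vm/overcommit_memory    # 0 = heuristic (default), 1 = always allow, 2 = strict accounting
$ lsblk -o NAME,MOUNTPOINT,TYPE,FSTYPE  # filesystem type behind each mount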
@dgilbert-h / Dr. David Alan Gilbert
Thanks for your answer.
b)
Mounted/used block devices:
NAME      MOUNTPOINT  TYPE   FSTYPE
sda                   disk   crypto_LUKS
└─Data1               crypt  zfs_member
sdb                   disk
├─sdb5    /           part   ext4
└─sdb6    /boot       part   vfat
sdd                   disk   crypto_LUKS
└─Data2               crypt  zfs_member
ZFS file system for extra space; ZFS is also the reason I'm not running the latest kernel...
a)
$ free
              total        used        free      shared  buff/cache   available
Mem:       16379476     7879216     1867196      188180     6633064     3587620
Swap:             0           0           0
$ qemu.sh -m 10240 && echo success || echo failed
qemu-system-x86_64: cannot set up guest memory 'pc.ram': Cannot allocate memory
failed
$ sudo sh -c 'echo 1 > /proc/sys/vm/overcommit_memory'
$ qemu.sh -m 10240 && echo success || echo failed
success
So setting /proc/sys/vm/overcommit_memory to 1 works; I guess I'll execute
sudo sh -c 'echo 1 > /proc/sys/vm/overcommit_memory'
at the start of my qemu.sh script instead of the 'drop_caches' part.
I still think the kernel should do whatever overcommit_memory does automatically, but it seems to be a fault of my distribution's kernel then. Thanks for your help.
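A minimal sketch of such a wrapper (the real qemu.sh is the reporter's own script behind the pastebin link above, so everything past the sysctl line is illustrative):
#!/bin/sh
# Mode 1 = always allow overcommit, so the kernel's heuristic doesn't
# refuse the large guest-RAM allocation while the page cache is full.
sudo sh -c 'echo 1 > /proc/sys/vm/overcommit_memory'
exec qemu-system-x86_64 "$@"
The setting can also be made persistent with an entry like vm.overcommit_memory = 1 in a file under /etc/sysctl.d/.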
Thanks Celmor; that might be one for the zfs guys then if that's what's holding onto the caches; I suspect their caching is a bit different from the rest of the Linux filesystems.
I'm going to close this as 'invalid' because I'm pretty sure this isn't a qemu bug; the kernel should be giving us the RAM we asked for if it's got it.
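For anyone hitting the same symptom with ZFS: ZFS on Linux caches data in its own ARC rather than the ordinary page cache, and the ARC is typically accounted as 'used' memory. Assuming the standard ZFS-on-Linux module interface, the ARC can be inspected and capped so it leaves room for the guest:
$ awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats          # current ARC size in bytes
$ echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max  # cap the ARC at 4 GiB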