path: root/results/classifier/zero-shot/118/none/2839
blob: b5ba36d487425ea1d8f9635ea64f6dcc39197cef (plain) (blame)
graphic: 0.661
PID: 0.641
performance: 0.635
vnc: 0.624
virtual: 0.598
KVM: 0.597
peripherals: 0.563
device: 0.553
debug: 0.502
register: 0.492
ppc: 0.491
permissions: 0.477
x86: 0.468
arm: 0.466
TCG: 0.465
semantic: 0.464
VMM: 0.458
hypervisor: 0.456
socket: 0.452
assembly: 0.450
network: 0.430
architecture: 0.424
files: 0.422
kernel: 0.414
boot: 0.406
risc-v: 0.386
user-level: 0.367
i386: 0.325
mistranslation: 0.312

Physical memory usage spikes after migration for a VM using memory-backend-memfd memory
Description of problem:
When starting a virtual machine with memory-backend-memfd memory (256GB in this example, though the size does not matter), the QEMU process initially allocates only a little over 4GB of physical memory. After migrating the virtual machine, however, the physical memory occupied by the QEMU process on the destination grows to almost the full 256GB. In a memory-overcommitted environment, this spike in physical memory usage can exhaust host memory and trigger the Out-Of-Memory (OOM) killer.
Steps to reproduce:
1. start vm
./qemu-system-x86_64  -accel kvm -cpu SandyBridge  -object memory-backend-memfd,id=mem1,size=256G -machine memory-backend=mem1  -smp 4  -drive file=/nvme0n1/luzhipeng/fusionos.qcow2,if=none,id=drive0,cache=none  -device virtio-blk,drive=drive0,bootindex=1  -monitor stdio -vnc :0
2. start vm on another host
./qemu-system-x86_64  -accel kvm -cpu SandyBridge  -object memory-backend-memfd,id=mem1,size=256G -machine memory-backend=mem1  -smp 4  -drive file=/nvme0n1/luzhipeng/fusionos.qcow2,if=none,id=drive0,cache=none  -device virtio-blk,drive=drive0,bootindex=1  -monitor stdio -vnc :0 -incoming tcp:0.0.0.0:4444
3. migrate vm
migrate -d tcp:xx.xx.xx.xx:4444
4. Check QEMU process memory usage with the top command:

```
top - 14:01:05 up 35 days, 20:16,  2 users,  load average: 0.22, 0.23, 0.18
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.2 us,  0.1 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem : 514595.3 total,   2642.6 free, 401703.3 used, 506435.3 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used. 112892.0 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
3865345 root      20   0  257.7g 256.1g 256.0g S   1.3  51.0   3:14.44 qemu-system-x86
```
Additional information:
The relevant code on the destination side:
```
void ram_handle_zero(void *host, uint64_t size)
{
    if (!buffer_is_zero(host, size)) {
        memset(host, 0, size);
    }
}
```
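For reference, one hypothetical mitigation for this path (not an actual QEMU patch, just a sketch) is to query page residency with mincore(2) before reading: a never-faulted shmem page already reads as zero, so it can be skipped without touching it. The buffer_is_zero() below is a trivial local stand-in for QEMU's real implementation.

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Trivial stand-in for QEMU's buffer_is_zero(), for illustration only. */
static bool buffer_is_zero(const void *buf, size_t len)
{
    const unsigned char *p = buf;
    for (size_t i = 0; i < len; i++) {
        if (p[i]) {
            return false;
        }
    }
    return true;
}

/* Hypothetical variant of ram_handle_zero(). mincore(2) reports whether a
 * page is resident WITHOUT touching it, so pages the guest never faulted
 * in can be left as holes in the memfd. For simplicity this assumes size
 * is at most one host page and host is page-aligned. */
static void ram_handle_zero_lazy(void *host, uint64_t size)
{
    unsigned char vec = 0;

    if (mincore(host, size, &vec) == 0 && !(vec & 1)) {
        return; /* never faulted in: it already reads back as zero */
    }
    if (!buffer_is_zero(host, size)) {
        memset(host, 0, size);
    }
}
```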

During memory migration, when a zero page is received, the destination calls buffer_is_zero() to check whether the corresponding page already contains only zeros, and clears it with memset() if it does not. For memfd-backed memory, however, even the read performed by buffer_is_zero() counts as a first access and causes the kernel to allocate a physical page, so every zero page of the guest ends up backed by physical memory on the destination.
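The claim that read-only access is enough to allocate physical memory for memfd pages can be demonstrated outside QEMU. The sketch below (assuming Linux with glibc's memfd_create() wrapper) maps an untouched memfd region, reads each page once without writing, and measures the RSS growth via /proc/self/statm:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Resident-set size of this process, in pages (from /proc/self/statm). */
static long rss_pages(void)
{
    long size = 0, resident = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    assert(f != NULL);
    assert(fscanf(f, "%ld %ld", &size, &resident) == 2);
    fclose(f);
    return resident;
}

/* Map an npages-long memfd region, read every page once (never write),
 * and return how many pages the reads alone added to our RSS. */
static long resident_growth_after_read(size_t npages)
{
    long pagesz = sysconf(_SC_PAGESIZE);
    size_t len = npages * (size_t)pagesz;

    int fd = memfd_create("zero-page-demo", 0);
    assert(fd >= 0);
    assert(ftruncate(fd, (off_t)len) == 0);

    volatile unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, 0);
    assert(p != MAP_FAILED);

    long before = rss_pages();
    unsigned char sink = 0;
    for (size_t i = 0; i < npages; i++) {
        sink |= p[i * (size_t)pagesz]; /* read-only touch, like buffer_is_zero() */
    }
    long after = rss_pages();

    munmap((void *)p, len);
    close(fd);
    (void)sink;
    return after - before;
}
```

With anonymous (non-shared) memory the same read pattern would map the kernel's shared zero page instead, which is why the problem is specific to the memfd backend.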