[Feature request] acceptance test class to run user-mode binaries
Currently the acceptance test framework only targets system-mode emulation.
It would be useful to test user-mode too.
Ref: https://<email address hidden>/msg626610.html
What user-mode testing do you think might be improved by using avocado?
IMO at present we have a fairly comprehensive testing infrastructure for user-mode that is simply underused. With docker, we have a set of cross-compilers for most guest architectures, and we are able to build statically linked binaries that are copied out of the container for testing by the just-built qemu binaries on the host. This infrastructure is used by check-tcg. It's fairly easy to add new test cases to be run on one or all guests.
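The check-tcg flow described above boils down to: cross-compile a statically linked guest binary inside a docker container, copy it out, then run it on the host under the just-built qemu-<arch> user-mode binary. A minimal sketch of the run step (the guest architecture and binary path are illustrative assumptions, not actual check-tcg paths):

```python
import shutil
import subprocess

def qemu_user_cmd(arch, guest_binary, *guest_args):
    """Host command line that runs a statically linked guest binary
    under the user-mode emulator for the given architecture."""
    return ["qemu-" + arch, guest_binary] + list(guest_args)

if __name__ == "__main__":
    # Hypothetical example: an arm binary previously copied out of the
    # cross-compiler container.
    cmd = qemu_user_cmd("arm", "./tests/tcg/arm/hello")
    if shutil.which(cmd[0]):
        res = subprocess.run(cmd, capture_output=True, text=True)
        print(res.stdout)
    else:
        print("qemu-arm not installed; skipping run step")
```

Because the guest binary is static, no guest sysroot is needed on the host; the emulator alone is enough.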
On 4/24/20 9:14 PM, Richard Henderson wrote:
> What user-mode testing do you think might be improved by using avocado?
Test unmodified real-world binaries, known to work in the field.
Tests can be added by users without having to be a TCG developer; see
https://<email address hidden>/msg626608.html:
class LoadBFLT(LinuxUserTest):
    def test_stm32(self):
        rootfs_url = ('https://elinux.org/images/5/51/'
                      'Stm32_mini_rootfs.cpio.bz2')
        rootfs_path_bz2 = self.fetch_asset(rootfs_url, ...)
        # (extraction of the rootfs into self.workdir elided)
        busybox_path = self.workdir + "/bin/busybox"
        cmd = ''  # no arguments: busybox prints its banner/usage
        res = self.run("%s %s" % (busybox_path, cmd))
        ver = 'BusyBox v1.24.0.git (2015-02-03 22:17:13 CET) ...'
        self.assertIn(ver, res.stdout_text)
        cmd = 'uname -a'
        res = self.run("%s %s" % (busybox_path, cmd))
        unm = 'armv7l GNU/Linux'
        self.assertIn(unm, res.stdout_text)
This is a fairly trivial and cheap test (no need to cross-build), yet it
still exercises a fair amount of QEMU code.
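The LinuxUserTest base class the example assumes does not exist yet; that is the point of this feature request. A hedged sketch of the two helpers it would need, building the host command line and running it while capturing guest stdout (the class layout and the QEMU_BIN environment variable are assumptions; a real implementation would subclass avocado.Test and use avocado.utils.process):

```python
import os
import subprocess

def guest_cmdline(qemu_bin, guest_binary, cmd=""):
    """Assemble the host command that runs `guest_binary cmd` under the
    user-mode emulator qemu_bin."""
    return ("%s %s %s" % (qemu_bin, guest_binary, cmd)).strip()

class LinuxUserTest:
    """Sketch only: in a real acceptance test this would subclass
    avocado.Test, so fetch_asset() and assertIn() come for free."""
    qemu_bin = os.environ.get("QEMU_BIN", "qemu-arm")  # assumed knob

    def run(self, cmdline):
        # Capture text output so callers can assert on res.stdout,
        # mirroring the stdout_text checks in the example above.
        return subprocess.run(cmdline.split(), capture_output=True,
                              text=True)
```

A test would then call `self.run(guest_cmdline(self.qemu_bin, busybox_path, 'uname -a'))` and assert on the captured output.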
> IMO at present we have a fairly comprehensive testing infrastructure for
> user-mode that is simply underused. With docker, we have a set of
> cross-compilers for most guest architectures, and we are able to build
> statically linked binaries that are copied out of the container for
> testing by the just-built qemu binaries on the host. This
> infrastructure is used by check-tcg. It's fairly easy to add new test
> cases to be run on one or all guests.
What you describe is a different and complementary test set: tests
crafted and built alongside QEMU.
The QEMU project is currently considering moving its bug tracking to another system. For this we need to know which bugs are still valid and which could already be closed. Thus we are now setting older bugs to "Incomplete".
If you still think this bug report is valid, then please switch the state back to "New" within the next 60 days; otherwise this report will be marked as "Expired". Or mark it as "Fix Released" if the problem has already been solved in a newer version of QEMU. Thank you, and sorry for the inconvenience.
This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:
https://gitlab.com/qemu-project/qemu/-/issues/82