[Feature request] acceptance test class to run user-mode binaries

Currently the acceptance test framework only targets system-mode emulation.
It would be useful to test user-mode emulation too.

Ref: https://<email address hidden>/msg626610.html

What user-mode testing do you think might be improved by using avocado?

IMO at present we have a fairly comprehensive testing infrastructure for user-mode that is simply underused.  With docker, we have a set of cross-compilers for most guest architectures, and we are able to build statically linked binaries that are copied out of the container for testing by the just-built qemu binaries on the host.  This infrastructure is used by check-tcg.  It's fairly easy to add new test cases to be run on one or all guests.
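
As a rough illustration of the check-tcg flow described above, running one such statically linked test binary under the just-built user-mode QEMU boils down to something like the following sketch; the binary and qemu paths are assumptions made for illustration, not the actual build-tree layout:

  #!/usr/bin/env python3
  # Rough sketch of what check-tcg does per test: run a statically linked
  # guest binary under the just-built qemu-<arch> user-mode binary on the
  # host and check the exit status.  Both paths below are assumptions for
  # this illustration, not the real build-tree layout.
  import subprocess

  QEMU_USER = "./build/qemu-arm"                 # just-built user-mode binary (assumed path)
  GUEST_BIN = "./build/tests/tcg/arm/hello-arm"  # cross-built static test (assumed path)

  result = subprocess.run([QEMU_USER, GUEST_BIN], capture_output=True, text=True)
  print(result.stdout, end="")
  result.check_returncode()  # a non-zero exit status means the test failed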

On 4/24/20 9:14 PM, Richard Henderson wrote:
> What user-mode testing do you think might be improved by using avocado?

Test unmodified real-world binaries, known to work in the field.

Tests can be added by users without their having to be TCG developers; see
https://<email address hidden>/msg626608.html:

  class LoadBFLT(LinuxUserTest):
      def test_stm32(self):
          rootfs_url = ('https://elinux.org/images/5/51/'
                        'Stm32_mini_rootfs.cpio.bz2')
          rootfs_path_bz2 = self.fetch_asset(rootfs_url, ...)
          # (extraction of the rootfs into self.workdir is elided here)
          busybox_path = self.workdir + "/bin/busybox"

          # With no arguments, BusyBox prints its version banner
          res = self.run(busybox_path)
          ver = 'BusyBox v1.24.0.git (2015-02-03 22:17:13 CET) ...'
          self.assertIn(ver, res.stdout_text)

          cmd = 'uname -a'
          res = self.run("%s %s" % (busybox_path, cmd))
          unm = 'armv7l GNU/Linux'
          self.assertIn(unm, res.stdout_text)

This is a fairly trivial test, and cheap (no need to cross-build), yet it
still exercises a fair amount of QEMU code.
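
For context, here is a minimal sketch of what the requested LinuxUserTest base class might look like, assuming the avocado Test base class and avocado.utils.process. The qemu_bin parameter and the run_cmd helper name are assumptions, not existing QEMU code (the example above uses self.run; run_cmd is used here only to avoid shadowing unittest.TestCase.run):

  # Minimal sketch of a possible LinuxUserTest base class.  The class name
  # comes from the example above; the qemu_bin parameter and the run_cmd
  # helper are assumptions, not existing QEMU code.
  from avocado import Test
  from avocado.utils import process


  class LinuxUserTest(Test):
      """Run guest binaries under a qemu user-mode (linux-user) binary."""

      def setUp(self):
          # Which user-mode binary to use, e.g. a just-built qemu-arm;
          # taken from the test parameters (assumed convention).
          self.qemu_bin = self.params.get('qemu_bin', default='qemu-arm')

      def run_cmd(self, cmd):
          """Run 'cmd' (guest binary plus arguments) under qemu user-mode."""
          return process.run("%s %s" % (self.qemu_bin, cmd))

With a class along these lines, the test above only fetches a prebuilt rootfs, picks a binary out of it and runs it under the configured qemu-* binary, so no cross-toolchain is needed at test time.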

> IMO at present we have a fairly comprehensive testing infrastructure for
> user-mode that is simply underused.  With docker, we have a set of
> cross-compilers for most guest architectures, and we are able to build
> statically linked binaries that are copied out of the container for
> testing by the just-built qemu binaries on the host.  This
> infrastructure is used by check-tcg.  It's fairly easy to add new test
> cases to be run on one or all guests.

What you describe is a different and complementary test set: tests crafted
by developers and built along with QEMU.


The QEMU project is currently considering moving its bug tracking to another system. For this we need to know which bugs are still valid and which can be closed already, so we are now setting older bugs to "Incomplete".
If you still think this bug report is valid, please switch its state back to "New" within the next 60 days; otherwise it will be marked as "Expired". Or mark it as "Fix Released" if the problem has already been solved in a newer version of QEMU. Thank you, and sorry for the inconvenience.



This is an automated cleanup. This bug report has been moved to QEMU's
new bug tracker on gitlab.com and thus gets marked as 'expired' now.
Please continue with the discussion here:

 https://gitlab.com/qemu-project/qemu/-/issues/82