performance regression in qemu-user + proot

To reproduce:
1. Install qemu-user-static and proot
2. Enter an arm chroot using them: proot -0 -q qemu-arm-static -w / -r chroot/ /bin/bash
3. Run a command which normally takes a short but measurable amount of time: cd /usr/share/doc && time grep -R hello

Result: this is over 100 times slower on 18.04 than it was on 16.04.

I am not sure whether proot or qemu is causing the problem, but the proot version has not changed. Also note that on 16.04 I am using the cloud repo version of qemu, which is newer than 16.04 stock but still older than the 18.04 version. Although system 2 is lower spec than system 1, it should not be this much slower, and no other software seems to be affected. If required I can supply a chroot tarball, though it is essentially just a Debian bootstrap.

Logs:

System 1: i7-6700 3.4GHz, 32GB RAM, 512GB Crucial MX100 SSD, Ubuntu 16.04
qemu-arm version 2.10.1 (Debian 1:2.10+dfsg-0ubuntu3.4~cloud0)
proot 5.1.0

al@al-desktop:~/rpi-ramdisk/raspbian$ proot -0 -q qemu-arm-static -w / -r root/ /bin/bash
root@al-desktop:/# cd /usr/share/doc
root@al-desktop:/usr/share/doc# time grep -R hello
dash/copyright:Debian GNU/Linux hello source package as the file COPYING. If not,

real    0m0.066s
user    0m0.040s
sys     0m0.008s

System 2: i5-5300U 2.30GHz, 8GB RAM, 256GB Crucial MX300 SSD, Ubuntu 18.04
qemu-arm version 2.11.1 (Debian 1:2.11+dfsg-1ubuntu4)
proot 5.1.0

al@al-thinkpad:~/rpi-ramdisk/raspbian$ proot -0 -q qemu-arm-static -w / -r root/ /bin/bash
root@al-thinkpad:/# cd /usr/share/doc
root@al-thinkpad:/usr/share/doc# time grep -R hello
dash/copyright:Debian GNU/Linux hello source package as the file COPYING. If not,

real    0m24.176s
user    0m0.366s
sys     0m11.352s

ProblemType: Bug
DistroRelease: Ubuntu 18.04
Package: qemu (not installed)
ProcVersionSignature: Ubuntu 4.15.0-12.13-generic 4.15.7
Uname: Linux 4.15.0-12-generic x86_64
ApportVersion: 2.20.8-0ubuntu10
Architecture: amd64
Date: Mon Mar 19 07:13:25 2018
InstallationDate: Installed on 2018-03-18 (0 days ago)
InstallationMedia: Xubuntu 18.04 LTS "Bionic Beaver" - Alpha amd64 (20180318)
SourcePackage: qemu
UpgradeStatus: No upgrade log present (probably fresh install)

Also, I noticed this while running a script that normally takes 12 minutes. After 12 hours I killed it. It never stopped advancing or threw any errors; it was just excruciatingly slow the whole time. That script builds chroots and can be found here: https://github.com/ali1234/rpi-ramdisk

Hi Alistair, I've seen a few similar reports, but as far as I know it is an upstream issue - we have no custom Ubuntu bits applied in that area. To confirm that and ask upstream about it, I'd like to ask: do you have a way to build qemu from upstream and check which version broke it?
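For reference, here is a rough sketch of how such a check can be done, assuming an in-tree build of upstream qemu and the root/ chroot from the logs above; the clone URL, tag, and configure flags may need adjusting for other setups, so treat this as a starting point rather than a prescribed procedure:

    # build a static linux-user qemu-arm from upstream source
    git clone https://git.qemu.org/git/qemu.git
    cd qemu
    git checkout v2.11.1                       # or any other tag/commit to test
    ./configure --target-list=arm-linux-user --static
    make -j"$(nproc)"

    # re-run the reproducer with the freshly built binary instead of the
    # distro qemu-arm-static (root/ is the Debian-bootstrap chroot from above)
    proot -0 -q "$PWD/arm-linux-user/qemu-arm" -w / -r root/ /bin/bash -c \
        'cd /usr/share/doc && time grep -R hello'

From there, repeating the build between a known-good 2.10.x tag and a known-bad 2.11.x tag with git bisect narrows the regression down to a single commit, which is what the log below shows.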
git bisect start
# good: [ba87166e14ffd7299c35badc4c11f3fa3c129ec6] Update version for 2.10.2 release
git bisect good ba87166e14ffd7299c35badc4c11f3fa3c129ec6
# bad: [7c1beb52ed86191d9e965444d934adaa2531710f] Update version for 2.11.1 release
git bisect bad 7c1beb52ed86191d9e965444d934adaa2531710f
# good: [1ab5eb4efb91a3d4569b0df6e824cc08ab4bd8ec] Update version for v2.10.0 release
git bisect good 1ab5eb4efb91a3d4569b0df6e824cc08ab4bd8ec
# good: [23ca459a455c83ffadb03ab1eedf0b6cff62bfeb] mirror: Switch mirror_dirty_init() to byte-based iteration
git bisect good 23ca459a455c83ffadb03ab1eedf0b6cff62bfeb
# bad: [8cbf74b23cafd1bcee5fdef769f8e94ace43935f] qcow2: Reduce is_zero() rounding
git bisect bad 8cbf74b23cafd1bcee5fdef769f8e94ace43935f
# good: [861cd431c99e56ddb5953ca1da164a9c32b477ca] Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.11-20171017' into staging
git bisect good 861cd431c99e56ddb5953ca1da164a9c32b477ca
# bad: [2bcf018340cbf233f7145e643fc1bb367f23fd90] s390x/tcg: low-address protection support
git bisect bad 2bcf018340cbf233f7145e643fc1bb367f23fd90
# bad: [840e0691303f84f7837bc75b37595e9b4419f35d] Merge remote-tracking branch 'remotes/mcayland/tags/qemu-openbios-signed' into staging
git bisect bad 840e0691303f84f7837bc75b37595e9b4419f35d
# good: [3da023b5827543ee4c022986ea2ad9d1274410b2] scsi: reject configurations with logical block size > physical block size
git bisect good 3da023b5827543ee4c022986ea2ad9d1274410b2
# good: [ba6f0fc25e3c14fbb36f4b5a616a89cd3f1de6d0] Merge remote-tracking branch 'remotes/kraxel/tags/opengl-20171017-pull-request' into staging
git bisect good ba6f0fc25e3c14fbb36f4b5a616a89cd3f1de6d0
# bad: [f443e3960d9d3340dd286e5fc0b661bb165a8b22] linux-user: Fix TARGET_MTIOCTOP/MTIOCGET/MTIOCPOS values
git bisect bad f443e3960d9d3340dd286e5fc0b661bb165a8b22
# good: [de258eb07db6cf893ef1bfad8c0cedc0b983db55] tcg: Fix off-by-one in assert in page_set_flags
git bisect good de258eb07db6cf893ef1bfad8c0cedc0b983db55
# bad: [cc1b3960a1a54125d2c87719fa945179bffbae68] linux-user/sh4: Reduce TARGET_VIRT_ADDR_SPACE_BITS to 31
git bisect bad cc1b3960a1a54125d2c87719fa945179bffbae68
# bad: [18e80c55bb6ec17c05ec0ba717ec83933c2bfc07] linux-user: Tidy and enforce reserved_va initialization
git bisect bad 18e80c55bb6ec17c05ec0ba717ec83933c2bfc07
# first bad commit: [18e80c55bb6ec17c05ec0ba717ec83933c2bfc07] linux-user: Tidy and enforce reserved_va initialization

origin/master is also affected. This upstream bug may be related, and it has a patch: https://bugs.launchpad.net/qemu/+bug/1740219

Thanks for the check, Alistair. Let's add a QEMU (upstream) bug task so this one is mirrored to the upstream mailing list. I'm not familiar with that area, but on the list someone can decide whether or not it is a duplicate of https://bugs.launchpad.net/qemu/+bug/1740219.

I just tested the patch from https://bugs.launchpad.net/qemu/+bug/1740219 and it fixes the problem for me. Specifically, I only tried the final patch of the series.

Then let's join the discussion there and let your update give that thread some new life.
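For anyone else who wants to retest, roughly how a candidate fix can be tried on top of a tree built as sketched above; the patch filename here is only a placeholder for whichever attachment from https://bugs.launchpad.net/qemu/+bug/1740219 is actually being tested, and the root/ chroot path is taken from the logs in this report:

    cd qemu
    git checkout master
    # apply the attachment downloaded from the upstream bug
    # (use "patch -p1 < file.diff" instead if it is a plain diff rather than
    # a git-formatted patch)
    git am /path/to/linux-user-reserved_va-fix.patch   # placeholder name
    make -j"$(nproc)"

    # re-run the timing reproducer against the patched build
    proot -0 -q "$PWD/arm-linux-user/qemu-arm" -w / -r root/ /bin/bash -c \
        'cd /usr/share/doc && time grep -R hello'

If the patched build brings the grep back down to well under a second, that matches the result reported above.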