author    Christian Krinitsin <mail@krinitsin.com>  2025-06-03 12:04:13 +0000
committer Christian Krinitsin <mail@krinitsin.com>  2025-06-03 12:04:13 +0000
commit    256709d2eb3fd80d768a99964be5caa61effa2a0 (patch)
tree      05b2352fba70923126836a64b6a0de43902e976a /results/classifier/105/other/1494350
parent    2ab14fa96a6c5484b5e4ba8337551bb8dcc79cc5 (diff)
add new classifier result
Diffstat (limited to 'results/classifier/105/other/1494350')
-rw-r--r--  results/classifier/105/other/1494350  1047
1 file changed, 1047 insertions, 0 deletions
diff --git a/results/classifier/105/other/1494350 b/results/classifier/105/other/1494350
new file mode 100644
index 000000000..3f795c107
--- /dev/null
+++ b/results/classifier/105/other/1494350
@@ -0,0 +1,1047 @@
+other: 0.970
+graphic: 0.960
+mistranslation: 0.955
+semantic: 0.940
+instruction: 0.918
+device: 0.917
+assembly: 0.900
+vnc: 0.888
+network: 0.873
+socket: 0.862
+boot: 0.835
+KVM: 0.798
+
+QEMU: causes vCPU steal time overflow on live migration
+
+I'm pasting in text from Debian Bug 785557
+https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785557
+because I couldn't find this issue reported here.
+
+It is present in QEMU 2.3, but I haven't tested later versions.  Perhaps someone else will find this bug and confirm for later versions.  (Or I will when I have time!)
+
+--------------------------------------------------------------------------------------------
+
+Hi,
+
+I'm trying to debug an issue we're having with some debian.org machines 
+running in QEMU 2.1.2 instances (see [1] for more background). In short, 
+after a live migration guests running Debian Jessie (linux 3.16) stop 
+accounting CPU time properly. /proc/stat in the guest shows no increase 
+in user and system time anymore (regardless of workload) and what stands 
+out are extremely large values for steal time:
+
+ % cat /proc/stat
+ cpu  2400 0 1842 650879168 2579640 0 25 136562317270 0 0
+ cpu0 1366 0 1028 161392988 1238598 0 11 383803090749 0 0
+ cpu1 294 0 240 162582008 639105 0 8 39686436048 0 0
+ cpu2 406 0 338 163331066 383867 0 4 333994238765 0 0
+ cpu3 332 0 235 163573105 318069 0 1 1223752959076 0 0
+ intr 355773871 33 10 0 0 0 0 3 0 1 0 0 36 144 0 0 1638612 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 5001741 41 0 8516993 0 3669582 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
+0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
+ ctxt 837862829
+ btime 1431642967
+ processes 8529939
+ procs_running 1
+ procs_blocked 0
+ softirq 225193331 2 77532878 172 7250024 819289 0 54 33739135 176552 105675225
+ 
+Reading the memory pointed to by the steal time MSRs pre- and 
+post-migration, I can see that post-migration the high bytes are set to 
+0xff:
+
+(qemu) xp /8b 0x1fc0cfc0
+000000001fc0cfc0: 0x94 0x57 0x77 0xf5 0xff 0xff 0xff 0xff
+
+The "jump" in steal time happens when the guest is resumed on the 
+receiving side.
+
+I've also been able to consistently reproduce this on a Ganeti cluster 
+at work, using QEMU 2.1.3 and kernels 3.16 and 4.0 in the guests. The 
+issue goes away if I disable the steal time MSR using `-cpu 
+qemu64,-kvm_steal_time`.
+
+So, it looks to me as if the steal time MSR is not set/copied properly 
+during live migration, although AFAICT this should be the case after 
+917367aa968fd4fef29d340e0c7ec8c608dffaab.
+
+After investigating a bit more, it looks like the issue comes from an overflow
+in the kernel's accumulate_steal_time() (arch/x86/kvm/x86.c:2023):
+
+  static void accumulate_steal_time(struct kvm_vcpu *vcpu)
+  {
+          u64 delta;
+  
+          if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
+                  return;
+  
+          delta = current->sched_info.run_delay - vcpu->arch.st.last_steal;
+  
+Using systemtap with the attached script to trace KVM execution on the 
+receiving host kernel, we can see that shortly before marking the vCPUs 
+as runnable on a migrated KVM instance with 2 vCPUs, the following 
+happens (** marks lines of interest):
+
+ **  0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns
+     0 qemu-system-x86(18446): -> kvm_arch_vcpu_load
+     0 vhost-18446(18447): -> kvm_arch_vcpu_should_kick
+     5 vhost-18446(18447): <- kvm_arch_vcpu_should_kick
+    23 qemu-system-x86(18446): <- kvm_arch_vcpu_load
+     0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl
+     2 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl
+     0 qemu-system-x86(18446): -> kvm_arch_vcpu_put
+     2 qemu-system-x86(18446):  -> kvm_put_guest_fpu
+     3 qemu-system-x86(18446):  <- kvm_put_guest_fpu
+     4 qemu-system-x86(18446): <- kvm_arch_vcpu_put
+ **  0 qemu-system-x86(18446): kvm_arch_vcpu_load: run_delay=7856949 ns steal=7856949 ns
+     0 qemu-system-x86(18446): -> kvm_arch_vcpu_load
+     1 qemu-system-x86(18446): <- kvm_arch_vcpu_load
+     0 qemu-system-x86(18446): -> kvm_arch_vcpu_ioctl
+     1 qemu-system-x86(18446): <- kvm_arch_vcpu_ioctl
+     0 qemu-system-x86(18446): -> kvm_arch_vcpu_put
+     1 qemu-system-x86(18446):  -> kvm_put_guest_fpu
+     2 qemu-system-x86(18446):  <- kvm_put_guest_fpu
+     3 qemu-system-x86(18446): <- kvm_arch_vcpu_put
+ **  0 qemu-system-x86(18449): kvm_arch_vcpu_load: run_delay=40304 ns steal=7856949 ns
+     0 qemu-system-x86(18449): -> kvm_arch_vcpu_load
+ **  7 qemu-system-x86(18449): delta: 18446744073701734971 ns, steal=7856949 ns, run_delay=40304 ns
+    10 qemu-system-x86(18449): <- kvm_arch_vcpu_load
+ **  0 qemu-system-x86(18449): -> kvm_arch_vcpu_ioctl_run
+     4 qemu-system-x86(18449):  -> kvm_arch_vcpu_runnable
+     6 qemu-system-x86(18449):  <- kvm_arch_vcpu_runnable
+     ...
+     0 qemu-system-x86(18448): kvm_arch_vcpu_load: run_delay=0 ns steal=7856949 ns
+     0 qemu-system-x86(18448): -> kvm_arch_vcpu_load
+ ** 34 qemu-system-x86(18448): delta: 18446744073701694667 ns, steal=7856949 ns, run_delay=0 ns
+    40 qemu-system-x86(18448): <- kvm_arch_vcpu_load
+ **  0 qemu-system-x86(18448): -> kvm_arch_vcpu_ioctl_run
+     5 qemu-system-x86(18448):  -> kvm_arch_vcpu_runnable
+
+Now, what's really interesting is that current->sched_info.run_delay 
+gets reset because the tasks (threads) using the vCPUs change, and thus 
+have a different current->sched_info: it looks like task 18446 created 
+the two vCPUs, and then they were handed over to 18448 and 18449 
+respectively. This is also verified by the fact that during the 
+overflow, both vCPUs have the old steal time of the last vcpu_load of 
+task 18446. However, according to Documentation/virtual/kvm/api.txt:
+
+ - vcpu ioctls: These query and set attributes that control the operation
+   of a single virtual cpu.
+
+   Only run vcpu ioctls from the same thread that was used to create the vcpu.
+
+
+ 
+So it seems qemu is doing something that it shouldn't: calling vCPU 
+ioctls from a thread that didn't create the vCPU. Note that this 
+probably happens on every QEMU startup, but is not visible because the 
+guest kernel zeroes out the steal time on boot.
+
+There are at least two ways to mitigate the issue without a kernel
+recompilation:
+
+ - The first one is to disable the steal time propagation from host to 
+   guest by invoking qemu with `-cpu qemu64,-kvm_steal_time`. This will 
+   short-circuit accumulate_steal_time() due to (vcpu->arch.st.msr_val & 
+   KVM_MSR_ENABLED) and will completely disable steal time reporting in 
+   the guest, which may not be desired if people rely on it to detect 
+   CPU congestion.
+
+ - The other one is using the following systemtap script to prevent the 
+   steal time counter from overflowing by dropping the problematic 
+   samples (WARNING: systemtap guru mode required, use at your own 
+   risk):
+
+      probe module("kvm").statement("*@arch/x86/kvm/x86.c:2024") {
+        if (@defined($delta) && $delta < 0) {
+          printk(4, "kvm: steal time delta < 0, dropping")
+          $delta = 0
+        }
+      }
+
+Note that not all *guests* handle this condition in the same way: 3.2 
+guests still get the overflow in /proc/stat, but their scheduler 
+continues to work as expected. 3.16 guests, on the other hand, misbehave
+once steal time overflows: they stop accumulating system & user time and
+enter an erratic state where steal time in /proc/stat is *decreasing* on
+every clock tick.
+-------------------------------------------- Revised statement:
+> Now, what's really interesting is that current->sched_info.run_delay 
+> gets reset because the tasks (threads) using the vCPUs change, and 
+> thus have a different current->sched_info: it looks like task 18446 
+> created the two vCPUs, and then they were handed over to 18448 and 
+> 18449 respectively. This is also verified by the fact that during the 
+> overflow, both vCPUs have the old steal time of the last vcpu_load of 
+> task 18446. However, according to Documentation/virtual/kvm/api.txt:
+
+The above is not entirely accurate: the vCPUs were created by the 
+threads that are used to run them (18448 and 18449 respectively), it's 
+just that the main thread is issuing ioctls during initialization, as 
+illustrated by the strace output on a different process:
+
+ [ vCPU #0 thread creating vCPU #0 (fd 20) ]
+ [pid  1861] ioctl(14, KVM_CREATE_VCPU, 0) = 20
+ [pid  1861] ioctl(20, KVM_X86_SETUP_MCE, 0x7fbd3ca40cd8) = 0
+ [pid  1861] ioctl(20, KVM_SET_CPUID2, 0x7fbd3ca40ce0) = 0
+ [pid  1861] ioctl(20, KVM_SET_SIGNAL_MASK, 0x7fbd380008f0) = 0
+ 
+ [ vCPU #1 thread creating vCPU #1 (fd 21) ]
+ [pid  1862] ioctl(14, KVM_CREATE_VCPU, 0x1) = 21
+ [pid  1862] ioctl(21, KVM_X86_SETUP_MCE, 0x7fbd37ffdcd8) = 0
+ [pid  1862] ioctl(21, KVM_SET_CPUID2, 0x7fbd37ffdce0) = 0
+ [pid  1862] ioctl(21, KVM_SET_SIGNAL_MASK, 0x7fbd300008f0) = 0
+ 
+ [ Main thread calling kvm_arch_put_registers() on vCPU #0 ]
+ [pid  1859] ioctl(20, KVM_SET_REGS, 0x7ffc98aac230) = 0
+ [pid  1859] ioctl(20, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd38001000) = 0
+ [pid  1859] ioctl(20, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0
+ [pid  1859] ioctl(20, KVM_SET_SREGS, 0x7ffc98aac050) = 0
+ [pid  1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aab820) = 87
+ [pid  1859] ioctl(20, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0
+ [pid  1859] ioctl(20, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0
+ [pid  1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1
+ [pid  1859] ioctl(20, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0
+ [pid  1859] ioctl(20, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0
+ 
+ [ Main thread calling kvm_arch_put_registers() on vCPU #1 ]
+ [pid  1859] ioctl(21, KVM_SET_REGS, 0x7ffc98aac230) = 0
+ [pid  1859] ioctl(21, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd30001000) = 0
+ [pid  1859] ioctl(21, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0
+ [pid  1859] ioctl(21, KVM_SET_SREGS, 0x7ffc98aac050) = 0
+ [pid  1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aab820) = 87
+ [pid  1859] ioctl(21, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0
+ [pid  1859] ioctl(21, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0
+ [pid  1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1
+ [pid  1859] ioctl(21, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0
+ [pid  1859] ioctl(21, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0
+ 
+Using systemtap again, I noticed that the main thread's run_delay is copied
+to last_steal when the main thread issues a KVM_SET_MSRS ioctl that enables
+the steal time MSR (see linux
+3.16.7-ckt11-1/arch/x86/kvm/x86.c:2162). Taking an educated guess, I
+reverted the following qemu commits:
+
+ commit 0e5035776df31380a44a1a851850d110b551ecb6
+ Author: Marcelo Tosatti <email address hidden>
+ Date:   Tue Sep 3 18:55:16 2013 -0300
+ 
+     fix steal time MSR vmsd callback to proper opaque type
+     
+     Convert steal time MSR vmsd callback pointer to proper X86CPU type.
+     
+     Signed-off-by: Marcelo Tosatti <email address hidden>
+     Signed-off-by: Paolo Bonzini <email address hidden>
+ 
+ commit 917367aa968fd4fef29d340e0c7ec8c608dffaab
+ Author: Marcelo Tosatti <email address hidden>
+ Date:   Tue Feb 19 23:27:20 2013 -0300
+ 
+     target-i386: kvm: save/restore steal time MSR
+     
+     Read and write steal time MSR, so that reporting is functional across
+     migration.
+     
+     Signed-off-by: Marcelo Tosatti <email address hidden>
+     Signed-off-by: Gleb Natapov <email address hidden>
+ 
+and the steal time jump on migration went away. However, steal time was 
+not reported at all after migration, which is expected after reverting 
+917367aa.
+
+So it seems that after 917367aa, the steal time MSR is correctly saved 
+and copied to the receiving side, but then it is restored by the main 
+thread (probably during cpu_synchronize_all_post_init()), causing the 
+overflow when the vCPU threads are unpaused.
+
+Hi,
+
+I can confirm this bug.
+
+I have seen this many times with Debian Jessie (kernel 3.16) and Ubuntu (kernel 4.x) with qemu 2.2 and qemu 2.3
+
+Hi,
+
+Same issue here: gentoo kernel 3.18 and 4.0, qemu 2.2, 2.3 and 2.4
+
+Hi,
+
+I've seen the same issue with debian jessie.
+
+Compiled 4.2.3 from kernel.org with "make localyesconfig",
+no problem any more
+
+
+>>Hi,
+Hi
+
+>>
+>>I've seen the same issue with debian jessie.
+>>
+>>Compiled 4.2.3 from kernel.org with "make localyesconfig",
+>>no problem any more
+
+Host kernel or guest kernel?
+
+
+
+Sorry, I think I pasted my answer into the wrong bug report. Sorry about
+that! I'm not using live migration.
+https://lists.debian.org/debian-kernel/2014/11/msg00093.html
+
+It seems to be something Debian-related.
+
+Following scenario:
+
+Host Ubuntu 14.04
+Guest Debian Jessie
+
+Debian with the 3.16.0-4-amd64 kernel has high CPU load on one of our webservers:
+
+output from top
+
+  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
+   18 root     20   0                 S 11,0      50:10.35 ksoftirqd/2
+   28 root     20   0                 S 11,0      49:45.90 ksoftirqd/4
+   13 root     20   0                 S 10,1      51:25.18 ksoftirqd/1
+   23 root     20   0                 S 10,1      55:42.26 ksoftirqd/3
+   33 root     20   0                 S  8,3      43:12.53 ksoftirqd/5
+    3 root     20   0                 S  7,4      43:19.93 ksoftirqd/0
+
+With the backports kernel 4.2.0-0.bpo.1-amd64 or 4.2.3 from kernel.org,
+CPU usage is back to normal.
+
+I ran into this problem on multiple Debian Jessie KVM guests. I don't think
+this is QEMU-related. Sorry.
+
+See the kvm list post here; Marcelo has a fix:
+
+http://www.spinics.net/lists/kvm/msg122175.html
+
+
+
+Hello,
+   I read in the thread
+"
+Applied to kvm/queue.  Thanks Marcelo, and thanks David for the review.
+
+Paolo
+"
+But I cannot find where the patch entered the qemu git repo:
+http://git.qemu.org/?p=qemu.git&a=search&h=HEAD&st=author&s=Tosatti
+
+Is it not there yet?
+Thanks!
+C.
+
+Hi lickdragon,
+  That's because the fix turned out to be in the kernel's KVM code; I can see it in the 4.4-rc1 upstream kernel.
+
+
+> Hi lickdragon,
+>   That's because the fix turned out to be in the kernel's KVM code; I can
+> see it in the 4.4-rc1 upstream kernel.
+
+Thanks!  I found it.
+https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7cae2bedcbd4680b155999655e49c27b9cf020fa
+
+>   > Now, what's really interesting is that current->sched_info.run_delay
+>   > gets reset because the tasks (threads) using the vCPUs change, and
+>   > thus have a different current->sched_info: it looks like task 18446
+>   > created the two vCPUs, and then they were handed over to 18448 and
+>   > 18449 respectively. This is also verified by the fact that during the
+>   > overflow, both vCPUs have the old steal time of the last vcpu_load of
+> 
+>   > task 18446. However, according to Documentation/virtual/kvm/api.txt:
+>   The above is not entirely accurate: the vCPUs were created by the
+>   threads that are used to run them (18448 and 18449 respectively), it's
+>   just that the main thread is issuing ioctls during initialization, as
+>   illustrated by the strace output on a different process:
+> 
+>    [ vCPU #0 thread creating vCPU #0 (fd 20) ]
+>    [pid  1861] ioctl(14, KVM_CREATE_VCPU, 0) = 20
+>    [pid  1861] ioctl(20, KVM_X86_SETUP_MCE, 0x7fbd3ca40cd8) = 0
+>    [pid  1861] ioctl(20, KVM_SET_CPUID2, 0x7fbd3ca40ce0) = 0
+>    [pid  1861] ioctl(20, KVM_SET_SIGNAL_MASK, 0x7fbd380008f0) = 0
+> 
+>    [ vCPU #1 thread creating vCPU #1 (fd 21) ]
+>    [pid  1862] ioctl(14, KVM_CREATE_VCPU, 0x1) = 21
+>    [pid  1862] ioctl(21, KVM_X86_SETUP_MCE, 0x7fbd37ffdcd8) = 0
+>    [pid  1862] ioctl(21, KVM_SET_CPUID2, 0x7fbd37ffdce0) = 0
+>    [pid  1862] ioctl(21, KVM_SET_SIGNAL_MASK, 0x7fbd300008f0) = 0
+> 
+>    [ Main thread calling kvm_arch_put_registers() on vCPU #0 ]
+>    [pid  1859] ioctl(20, KVM_SET_REGS, 0x7ffc98aac230) = 0
+>    [pid  1859] ioctl(20, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd38001000) = 0
+>    [pid  1859] ioctl(20, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0
+>    [pid  1859] ioctl(20, KVM_SET_SREGS, 0x7ffc98aac050) = 0
+>    [pid  1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aab820) = 87
+>    [pid  1859] ioctl(20, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0
+>    [pid  1859] ioctl(20, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0
+>    [pid  1859] ioctl(20, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1
+>    [pid  1859] ioctl(20, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0
+>    [pid  1859] ioctl(20, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0
+> 
+>    [ Main thread calling kvm_arch_put_registers() on vCPU #1 ]
+>    [pid  1859] ioctl(21, KVM_SET_REGS, 0x7ffc98aac230) = 0
+>    [pid  1859] ioctl(21, KVM_SET_XSAVE or KVM_SIGNAL_MSI, 0x7fbd30001000) = 0
+>    [pid  1859] ioctl(21, KVM_PPC_ALLOCATE_HTAB or KVM_SET_XCRS, 0x7ffc98aac010) = 0
+>    [pid  1859] ioctl(21, KVM_SET_SREGS, 0x7ffc98aac050) = 0
+>    [pid  1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aab820) = 87
+>    [pid  1859] ioctl(21, KVM_SET_MP_STATE, 0x7ffc98aac230) = 0
+>    [pid  1859] ioctl(21, KVM_SET_LAPIC, 0x7ffc98aabd80) = 0
+>    [pid  1859] ioctl(21, KVM_SET_MSRS, 0x7ffc98aac1b0) = 1
+>    [pid  1859] ioctl(21, KVM_SET_PIT2 or KVM_SET_VCPU_EVENTS, 0x7ffc98aac1b0) = 0
+>    [pid  1859] ioctl(21, KVM_SET_DEBUGREGS or KVM_SET_TSC_KHZ, 0x7ffc98aac1b0) = 0
+> 
+>   Using systemtap again, I noticed that the main thread's run_delay is
+> copied to last_steal only after a KVM_SET_MSRS ioctl which enables the
+> steal time MSR is issued by the main thread (see linux
+>   3.16.7-ckt11-1/arch/x86/kvm/x86.c:2162). Taking an educated guess, I
+>   reverted the following qemu commits:
+> 
+>    commit 0e5035776df31380a44a1a851850d110b551ecb6
+>    Author: Marcelo Tosatti <email address hidden>
+>    Date:   Tue Sep 3 18:55:16 2013 -0300
+> 
+>        fix steal time MSR vmsd callback to proper opaque type
+> 
+>        Convert steal time MSR vmsd callback pointer to proper X86CPU type.
+> 
+>        Signed-off-by: Marcelo Tosatti <email address hidden>
+>        Signed-off-by: Paolo Bonzini <email address hidden>
+> 
+>    commit 917367aa968fd4fef29d340e0c7ec8c608dffaab
+>    Author: Marcelo Tosatti <email address hidden>
+>    Date:   Tue Feb 19 23:27:20 2013 -0300
+> 
+>        target-i386: kvm: save/restore steal time MSR
+> 
+>        Read and write steal time MSR, so that reporting is functional across migration.
+> 
+>        Signed-off-by: Marcelo Tosatti <email address hidden>
+>        Signed-off-by: Gleb Natapov <email address hidden>
+> 
+>   and the steal time jump on migration went away. However, steal time was
+>   not reported at all after migration, which is expected after reverting
+>   917367aa.
+> 
+>   So it seems that after 917367aa, the steal time MSR is correctly saved
+>   and copied to the receiving side, but then it is restored by the main
+>   thread (probably during cpu_synchronize_all_post_init()), causing the
+>   overflow when the vCPU threads are unpaused.
+> 
+> To manage notifications about this bug go to:
+> https://bugs.launchpad.net/qemu/+bug/1494350/+subscriptions
+
+
+
+To clarify, the 4.4 kernel needs to be running on the VM host, not the guests?
+
+Thanks again!
+
+
+I think that's the host.
+
+
+The upstream fix for the kernel should be backported to trusty.
+
+Just hit the same bug (decreasing steal time counters after live migration of KVM) on a Trusty guest. It would be nice to get that fix into the LTS kernels.
+
+The backport has been acked upstream for the 3.4, 3.10, 3.14, and 3.16 stable kernels.
+
+Fix released for Wily (4.2.0-36.41)
+
+Fix released for Vivid (3.19.0-59.65)
+
+This bug is awaiting verification that the kernel in -proposed solves the problem. Please test the kernel and update this bug with the results. If the problem is solved, change the tag 'verification-needed-trusty' to 'verification-done-trusty'.
+
+If verification is not done by 5 working days from today, this fix will be dropped from the source code, and this bug will be closed.
+
+See https://wiki.ubuntu.com/Testing/EnableProposed for documentation how to enable and use -proposed. Thank you!
+
+
+The proposed package is tested and it passed the test case described in the bug description.
+
+This bug was fixed in the package linux - 3.13.0-91.138
+
+---------------
+linux (3.13.0-91.138) trusty; urgency=medium
+
+  [ Luis Henriques ]
+
+  * Release Tracking Bug
+    - LP: #1595991
+
+  [ Upstream Kernel Changes ]
+
+  * netfilter: x_tables: validate e->target_offset early
+    - LP: #1555338
+    - CVE-2016-3134
+  * netfilter: x_tables: make sure e->next_offset covers remaining blob
+    size
+    - LP: #1555338
+    - CVE-2016-3134
+  * netfilter: x_tables: fix unconditional helper
+    - LP: #1555338
+    - CVE-2016-3134
+  * netfilter: x_tables: don't move to non-existent next rule
+    - LP: #1595350
+  * netfilter: x_tables: validate targets of jumps
+    - LP: #1595350
+  * netfilter: x_tables: add and use xt_check_entry_offsets
+    - LP: #1595350
+  * netfilter: x_tables: kill check_entry helper
+    - LP: #1595350
+  * netfilter: x_tables: assert minimum target size
+    - LP: #1595350
+  * netfilter: x_tables: add compat version of xt_check_entry_offsets
+    - LP: #1595350
+  * netfilter: x_tables: check standard target size too
+    - LP: #1595350
+  * netfilter: x_tables: check for bogus target offset
+    - LP: #1595350
+  * netfilter: x_tables: validate all offsets and sizes in a rule
+    - LP: #1595350
+  * netfilter: x_tables: don't reject valid target size on some
+    architectures
+    - LP: #1595350
+  * netfilter: arp_tables: simplify translate_compat_table args
+    - LP: #1595350
+  * netfilter: ip_tables: simplify translate_compat_table args
+    - LP: #1595350
+  * netfilter: ip6_tables: simplify translate_compat_table args
+    - LP: #1595350
+  * netfilter: x_tables: xt_compat_match_from_user doesn't need a retval
+    - LP: #1595350
+  * netfilter: x_tables: do compat validation via translate_table
+    - LP: #1595350
+  * netfilter: x_tables: introduce and use xt_copy_counters_from_user
+    - LP: #1595350
+
+linux (3.13.0-90.137) trusty; urgency=low
+
+  [ Kamal Mostafa ]
+
+  * Release Tracking Bug
+    - LP: #1595693
+
+  [ Serge Hallyn ]
+
+  * SAUCE: add a sysctl to disable unprivileged user namespace unsharing
+    - LP: #1555338, #1595350
+
+linux (3.13.0-89.136) trusty; urgency=low
+
+  [ Kamal Mostafa ]
+
+  * Release Tracking Bug
+    - LP: #1591315
+
+  [ Kamal Mostafa ]
+
+  * [debian] getabis: Only git add $abidir if running in local repo
+    - LP: #1584890
+  * [debian] getabis: Fix inconsistent compiler versions check
+    - LP: #1584890
+
+  [ Stefan Bader ]
+
+  * SAUCE: powerpc/powernv: Fix incomplete backport of 8117ac6
+    - LP: #1589910
+
+  [ Tim Gardner ]
+
+  * [Config] Remove arc4 from nic-modules
+    - LP: #1582991
+
+  [ Upstream Kernel Changes ]
+
+  * KVM: x86: move steal time initialization to vcpu entry time
+    - LP: #1494350
+  * lpfc: Fix premature release of rpi bit in bitmask
+    - LP: #1580560
+  * lpfc: Correct loss of target discovery after cable swap.
+    - LP: #1580560
+  * mm/balloon_compaction: redesign ballooned pages management
+    - LP: #1572562
+  * mm/balloon_compaction: fix deflation when compaction is disabled
+    - LP: #1572562
+  * bridge: Fix the way to find old local fdb entries in br_fdb_changeaddr
+    - LP: #1581585
+  * bridge: notify user space after fdb update
+    - LP: #1581585
+  * ALSA: timer: Fix leak in SNDRV_TIMER_IOCTL_PARAMS
+    - LP: #1580379
+    - CVE-2016-4569
+  * ALSA: timer: Fix leak in events via snd_timer_user_ccallback
+    - LP: #1581866
+    - CVE-2016-4578
+  * ALSA: timer: Fix leak in events via snd_timer_user_tinterrupt
+    - LP: #1581866
+    - CVE-2016-4578
+  * net: fix a kernel infoleak in x25 module
+    - LP: #1585366
+    - CVE-2016-4580
+  * get_rock_ridge_filename(): handle malformed NM entries
+    - LP: #1583962
+    - CVE-2016-4913
+  * netfilter: Set /proc/net entries owner to root in namespace
+    - LP: #1584953
+  * USB: usbfs: fix potential infoleak in devio
+    - LP: #1578493
+    - CVE-2016-4482
+  * IB/security: Restrict use of the write() interface
+    - LP: #1580372
+    - CVE-2016-4565
+  * netlink: autosize skb lengthes
+    - LP: #1568969
+  * xfs: allow inode allocations in post-growfs disk space
+    - LP: #1560142
+
+ -- Luis Henriques <email address hidden>  Fri, 24 Jun 2016 16:19:03 +0100
+