author     Paolo Bonzini <pbonzini@redhat.com>   2018-02-06 18:37:39 +0100
committer  Paolo Bonzini <pbonzini@redhat.com>   2019-08-20 17:26:20 +0200
commit     9458a9a1df1a4c719e24512394d548c1fc7abd22 (patch)
tree       2e7dd0685486a403fda9fb52c70406a04637ef8b /memory.c
parent     1e8a98b53867f61da9ca09f411288e2085d323c4 (diff)
memory: fix race between TCG and accesses to dirty bitmap
There is a race between TCG and accesses to the dirty log:

      vCPU thread                  reader thread
      -----------------------      -----------------------
      TLB check -> slow path
        notdirty_mem_write
          write to RAM
          set dirty flag
                                   clear dirty flag
      TLB check -> fast path
                                   read memory
        write to RAM

Fortunately, in order to fix it, no change is required to the
vCPU thread.  However, the reader thread must delay the read until
the vCPU thread has finished the write.  This can be approximated
conservatively by run_on_cpu, which waits for the end of the current
translation block.
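
This page shows only the memory.c half of the change; as a rough sketch
(the helper names below are illustrative, not taken from this diff), the
TCG side's log_global_after_sync callback could drain each vCPU with a
no-op run_on_cpu request:

    static void do_nothing(CPUState *cpu, run_on_cpu_data d)
    {
        /* Nothing to do here: run_on_cpu() itself provides the
         * synchronization, because the request cannot run before the
         * vCPU has left the translation block it is executing.  */
    }

    static void tcg_log_global_after_sync(MemoryListener *listener)
    {
        CPUState *cpu;

        /* Once every request has completed, any fast-path write that
         * started before the dirty bit was cleared has finished too.  */
        CPU_FOREACH(cpu) {
            run_on_cpu(cpu, do_nothing, RUN_ON_CPU_NULL);
        }
    }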

A similar technique is used by KVM, which has to do a synchronous TLB
flush after doing a test-and-clear of the dirty-page flags.

Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Diffstat (limited to 'memory.c')
-rw-r--r--  memory.c | 10
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/memory.c b/memory.c
index c90a2cf629..4aa38eb5b1 100644
--- a/memory.c
+++ b/memory.c
@@ -2127,9 +2127,12 @@ DirtyBitmapSnapshot *memory_region_snapshot_and_clear_dirty(MemoryRegion *mr,
                                                             hwaddr size,
                                                             unsigned client)
 {
+    DirtyBitmapSnapshot *snapshot;
     assert(mr->ram_block);
     memory_region_sync_dirty_bitmap(mr);
-    return cpu_physical_memory_snapshot_and_clear_dirty(mr, addr, size, client);
+    snapshot = cpu_physical_memory_snapshot_and_clear_dirty(mr, addr, size, client);
+    memory_global_after_dirty_log_sync();
+    return snapshot;
 }
 
 bool memory_region_snapshot_get_dirty(MemoryRegion *mr, DirtyBitmapSnapshot *snap,
@@ -2620,6 +2623,11 @@ void memory_global_dirty_log_sync(void)
     memory_region_sync_dirty_bitmap(NULL);
 }
 
+void memory_global_after_dirty_log_sync(void)
+{
+    MEMORY_LISTENER_CALL_GLOBAL(log_global_after_sync, Forward);
+}
+
 static VMChangeStateEntry *vmstate_change;
 
 void memory_global_dirty_log_start(void)
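
For context, the reader side goes through the function patched above.  A
hedged usage sketch (mr, addr, size and the DIRTY_MEMORY_VGA client are
assumed for illustration; the snapshot is documented to be freed with
g_free):

    DirtyBitmapSnapshot *snap;
    hwaddr offset;

    /* Snapshots and clears the dirty bits; as of this commit it also
     * waits, via the log_global_after_sync listeners, until the vCPUs
     * have finished any write that raced with the clear.  */
    snap = memory_region_snapshot_and_clear_dirty(mr, addr, size,
                                                  DIRTY_MEMORY_VGA);

    for (offset = 0; offset < size; offset += TARGET_PAGE_SIZE) {
        if (memory_region_snapshot_get_dirty(mr, snap, addr + offset,
                                             TARGET_PAGE_SIZE)) {
            /* ... read or copy the newly dirtied page ... */
        }
    }
    g_free(snap);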