| author | Paolo Bonzini <pbonzini@redhat.com> | 2021-04-13 10:20:32 +0200 |
|---|---|---|
| committer | Paolo Bonzini <pbonzini@redhat.com> | 2021-05-04 14:15:35 +0200 |
| commit | 4951967d84a0acbf47895add9158e2d4c6056ea0 (patch) | |
| tree | d458223a160c77b2ff7a2defca4efe6d08d9358f /include/hw/misc/stm32f4xx_exti.h | |
| parent | 39becfce13e7de74045002950d899d91190a224b (diff) | |
ratelimit: protect with a mutex
Right now, rate limiting is protected by the AioContext mutex, which is taken, for example, both by the block jobs and by qmp_block_job_set_speed (via find_block_job).

We would like to remove the dependency of block layer code on the AioContext mutex, since most drivers and the core I/O code already do not rely on it. However, there is no existing lock that can easily be taken by both ratelimit_set_speed and ratelimit_calculate_delay, especially because the latter might run in coroutine context (and therefore under a CoMutex) while the former will not.

Since concurrent calls to ratelimit_calculate_delay are not possible, one idea would be to use a seqlock to get a snapshot of slice_ns and slice_quota. But for now keep it simple, and just add a mutex to the RateLimit struct; block jobs are generally not performance-critical to the point of optimizing the clock cycles spent in synchronization.

This also requires the introduction of init/destroy functions, so add them to the two users of ratelimit.h.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
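For reference, below is a minimal sketch of what a mutex-protected RateLimit and its init/destroy helpers could look like, assuming the QemuMutex API from qemu/thread.h. The field names and the exact locking granularity are illustrative and may not match the actual patch to include/qemu/ratelimit.h; the two callers would invoke ratelimit_init when the job is set up and ratelimit_destroy when it is torn down.

```c
/*
 * Illustrative sketch only: a RateLimit struct guarded by its own
 * QemuMutex instead of the AioContext mutex.  Field names follow the
 * commit message but are not guaranteed to match the merged patch.
 */
#include "qemu/osdep.h"
#include "qemu/thread.h"

typedef struct {
    QemuMutex lock;           /* protects slice_ns and slice_quota */
    int64_t slice_start_time;
    int64_t slice_end_time;
    uint64_t slice_quota;
    uint64_t slice_ns;
    uint64_t dispatched;
} RateLimit;

static inline void ratelimit_init(RateLimit *limit)
{
    qemu_mutex_init(&limit->lock);
}

static inline void ratelimit_destroy(RateLimit *limit)
{
    qemu_mutex_destroy(&limit->lock);
}

static inline void ratelimit_set_speed(RateLimit *limit, uint64_t speed,
                                       uint64_t slice_ns)
{
    /*
     * Take the per-RateLimit lock so a concurrent
     * ratelimit_calculate_delay() (possibly running in coroutine
     * context) sees a consistent slice_ns/slice_quota pair.
     */
    qemu_mutex_lock(&limit->lock);
    limit->slice_ns = slice_ns;
    limit->slice_quota = MAX(((double)speed * slice_ns) / 1000000000ULL, 1);
    qemu_mutex_unlock(&limit->lock);
}
```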
Diffstat (limited to 'include/hw/misc/stm32f4xx_exti.h')
0 files changed, 0 insertions, 0 deletions