| | | |
|---|---|---|
| author | Eric Blake <eblake@redhat.com> | 2022-05-11 19:49:24 -0500 |
| committer | Kevin Wolf <kwolf@redhat.com> | 2022-05-12 13:10:52 +0200 |
| commit | 58a6fdcc9efb2a7c1ef4893dca4aa5e8020ca3dc (patch) | |
| tree | 2dba1567609a2df70bbb9e01e2fb89b18b2238ee /blockdev-nbd.c | |
| parent | a5fced40212ed73c715ca298a2929dd4d99c9999 (diff) | |
| download | focaccia-qemu-58a6fdcc9efb2a7c1ef4893dca4aa5e8020ca3dc.tar.gz focaccia-qemu-58a6fdcc9efb2a7c1ef4893dca4aa5e8020ca3dc.zip | |
nbd/server: Allow MULTI_CONN for shared writable exports
According to the NBD spec, a server that advertises NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will not see any cache inconsistencies: when properly separated by a single flush, actions performed by one client will be visible to another client, regardless of which client did the flush.

We always satisfy these conditions in qemu - even when we support multiple clients, ALL clients go through a single point of reference into the block layer, with no local caching. The effect of one client is instantly visible to the next client. Even if our backend were a network device, we argue that any multi-path caching effects that would cause inconsistencies in back-to-back actions not seeing the effect of previous actions would be a bug in that backend, and not the fault of caching in qemu. As such, it is safe to unconditionally advertise CAN_MULTI_CONN for any qemu NBD server situation that supports parallel clients.

Note, however, that we don't want to advertise CAN_MULTI_CONN when we know that a second client cannot connect (for historical reasons, qemu-nbd defaults to a single connection while nbd-server-add and QMP commands default to unlimited connections; but we already have existing means to let either style of NBD server creation alter those defaults). This is visible by no longer advertising MULTI_CONN for 'qemu-nbd -r' without -e, as in the iotest nbd-qemu-allocation.

The harder part of this patch is setting up an iotest to demonstrate behavior of multiple NBD clients to a single server. It might be possible with parallel qemu-io processes, but I found it easier to do in python with the help of libnbd, and help from Nir and Vladimir in writing the test.

Signed-off-by: Eric Blake <eblake@redhat.com>
Suggested-by: Nir Soffer <nsoffer@redhat.com>
Suggested-by: Vladimir Sementsov-Ogievskiy <v.sementsov-og@mail.ru>
Message-Id: <20220512004924.417153-3-eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
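The iotest itself is outside this diffstat (which is limited to blockdev-nbd.c), but the flush-visibility guarantee described above can be sketched with libnbd's Python bindings roughly as follows. This is a simplified illustration, not the actual test; the NBD URI is a placeholder for a writable qemu export that allows more than one connection.

```python
#!/usr/bin/env python3
# Simplified sketch of the multi-client check described in the commit
# message, using the Python bindings of libnbd.  The URI below is a
# placeholder, not part of the patch.
import nbd

URI = "nbd+unix:///?socket=/tmp/nbd.sock"  # hypothetical shared writable export

a = nbd.NBD()
b = nbd.NBD()
a.connect_uri(URI)
b.connect_uri(URI)

# A shared writable qemu export should now advertise MULTI_CONN.
assert a.can_multi_conn() and b.can_multi_conn()

# A write by one client, separated by a single flush, must be visible
# to the other client, regardless of which client issued the flush.
a.pwrite(b"new data", 0)
a.flush()
assert b.pread(8, 0) == b"new data"

a.shutdown()
b.shutdown()
```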
Diffstat (limited to 'blockdev-nbd.c')
| mode | file | changes |
|---|---|---|
| -rw-r--r-- | blockdev-nbd.c | 5 |

1 file changed, 5 insertions, 0 deletions
```diff
diff --git a/blockdev-nbd.c b/blockdev-nbd.c
index 711e0e72bd..012256bb02 100644
--- a/blockdev-nbd.c
+++ b/blockdev-nbd.c
@@ -44,6 +44,11 @@ bool nbd_server_is_running(void)
     return nbd_server || qemu_nbd_connections >= 0;
 }
 
+int nbd_server_max_connections(void)
+{
+    return nbd_server ? nbd_server->max_connections : qemu_nbd_connections;
+}
+
 static void nbd_blockdev_client_closed(NBDClient *client, bool ignored)
 {
     nbd_client_put(client);
```
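The new nbd_server_max_connections() helper simply reports the configured connection limit for either style of server (the built-in server's max_connections, or the value recorded by qemu-nbd), which is what lets the export code stop advertising MULTI_CONN when only a single client can ever connect. A rough client-side way to observe that effect, assuming qemu-nbd and the libnbd Python bindings are installed (paths, image size, and the helper function below are arbitrary, not part of the patch or its iotest):

```python
#!/usr/bin/env python3
# Sketch of the externally visible change noted in the commit message:
# 'qemu-nbd -r' without -e should no longer advertise MULTI_CONN, while
# an export shared between several clients should.
import os
import subprocess
import tempfile
import time

import nbd


def advertises_multi_conn(extra_args):
    with tempfile.TemporaryDirectory() as tmp:
        sock = os.path.join(tmp, "nbd.sock")
        img = os.path.join(tmp, "disk.img")
        with open(img, "wb") as f:
            f.truncate(1024 * 1024)          # 1 MiB sparse raw image
        srv = subprocess.Popen(["qemu-nbd", "-f", "raw", "-k", sock,
                                *extra_args, img])
        try:
            while not os.path.exists(sock):  # wait for the listening socket
                time.sleep(0.1)
            h = nbd.NBD()
            h.connect_unix(sock)
            advertised = bool(h.can_multi_conn())
            h.shutdown()
            return advertised
        finally:
            srv.terminate()
            srv.wait()


print(advertises_multi_conn(["-r"]))             # expected: False (1 client)
print(advertises_multi_conn(["-r", "-e", "5"]))  # expected: True (shared)
```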