| author | Christian Schoenebeck <qemu_oss@crudebyte.com> | 2020-07-29 10:13:05 +0200 |
|---|---|---|
| committer | Christian Schoenebeck <qemu_oss@crudebyte.com> | 2020-08-12 09:17:32 +0200 |
| commit | 0c4356ba7dafc8ecb5877a42fc0d68d45ccf5951 (patch) | |
| tree | b0438d3aa8d13d4a86eb66456fcc06d1fc0301b2 /hw/hyperv/hyperv_testdev.c | |
| parent | 2149675b195f2d9a1a4e3b966d45aba234def69b (diff) | |
9pfs: T_readdir latency optimization
Make the top half really the top half and the bottom half really the bottom half:

Handling a T_readdir request currently hops between threads (the main I/O thread and background I/O driver threads) several times for every individual directory entry, which adds up to huge latencies for handling even a single T_readdir request.

Instead, collect all required directory entries (including any stat buffers potentially required for each entry) in one pass on a background I/O thread in the fs driver by calling the previously added v9fs_co_readdir_many() instead of v9fs_co_readdir(), then assemble the entire resulting network response message for the readdir request on the main I/O thread.

The fs driver still aborts the directory entry retrieval loop (on the background I/O thread, inside v9fs_co_readdir_many()) as soon as it would exceed the client's requested maximum R_readdir response size, so this does not introduce a performance penalty on the other end.

Also: no longer seek the initial directory position in v9fs_readdir(), as this is now handled (more consistently) by v9fs_co_readdir_many() instead.

Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
Message-Id: <c7c3d1cf4e86611538cef44897842819d9359d7a.1596012787.git.qemu_oss@crudebyte.com>
Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
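To make the pattern concrete, below is a minimal, self-contained C sketch of the batching idea the commit describes. It is not the QEMU implementation: readdir_many, readdir_job, and the 64 KiB response cap are hypothetical stand-ins for v9fs_co_readdir_many() and the client's maximum R_readdir response size. What it illustrates is one thread handoff per request rather than one per directory entry, with the worker aborting once the collected entries would exceed the cap.

```c
/*
 * Hypothetical sketch, not QEMU code: batch the whole readdir pass into
 * one background-thread job ("bottom half") and assemble the response on
 * the calling thread ("top half"), instead of hopping between threads
 * once per directory entry.
 */
#include <dirent.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct dirent_node {
    struct dirent entry;
    struct dirent_node *next;
};

struct readdir_job {
    const char *path;
    size_t max_response;      /* stand-in for the client's max R_readdir size */
    struct dirent_node *head; /* entries collected by the worker */
    int error;
};

/* Bottom half: runs once on a background thread and collects every entry
 * that still fits into the maximum response size, then stops. */
static void *readdir_many(void *opaque)
{
    struct readdir_job *job = opaque;
    struct dirent_node **tail = &job->head;
    size_t used = 0;
    DIR *dir = opendir(job->path);

    if (!dir) {
        job->error = 1;
        return NULL;
    }
    for (struct dirent *ent; (ent = readdir(dir)) != NULL; ) {
        if (used + sizeof(*ent) > job->max_response) {
            break;            /* would exceed the response size limit */
        }
        struct dirent_node *node = malloc(sizeof(*node));
        if (!node) {
            job->error = 1;
            break;
        }
        node->entry = *ent;
        node->next = NULL;
        *tail = node;
        tail = &node->next;
        used += sizeof(*ent);
    }
    closedir(dir);
    return NULL;
}

int main(void)
{
    struct readdir_job job = { .path = ".", .max_response = 64 * 1024 };
    pthread_t worker;

    /* One thread handoff for the whole request, not one per entry. */
    pthread_create(&worker, NULL, readdir_many, &job);
    pthread_join(&worker, NULL);

    /* Top half: assemble the "response" from the collected entries. */
    for (struct dirent_node *n = job.head, *next; n != NULL; n = next) {
        next = n->next;
        printf("%s\n", n->entry.d_name);
        free(n);
    }
    return job.error;
}
```

Built with `cc -pthread sketch.c`, this performs a single create/join handoff for the entire directory. The early break mirrors the commit's guarantee that collection stops as soon as the response size limit would be exceeded, so the batching stays bounded by the client's requested maximum rather than trading latency for unbounded memory.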
Diffstat (limited to 'hw/hyperv/hyperv_testdev.c')
0 files changed, 0 insertions, 0 deletions