Diffstat (limited to 'docs')
-rw-r--r--  docs/devel/decodetree.rst    | 221
-rw-r--r--  docs/devel/index.rst         |   2
-rw-r--r--  docs/devel/s390-dasd-ipl.txt | 133
-rw-r--r--  docs/interop/qcow2.txt       |   9
-rw-r--r--  docs/interop/vhost-user.json | 232
-rw-r--r--  docs/interop/vhost-user.txt  | 386
-rw-r--r--  docs/nvdimm.txt              |  22
-rw-r--r--  docs/qemu-cpu-models.texi    |  28
8 files changed, 1020 insertions(+), 13 deletions(-)
diff --git a/docs/devel/decodetree.rst b/docs/devel/decodetree.rst new file mode 100644 index 0000000000..44ac621ea8 --- /dev/null +++ b/docs/devel/decodetree.rst @@ -0,0 +1,221 @@ +======================== +Decodetree Specification +======================== + +A *decodetree* is built from instruction *patterns*. A pattern may +represent a single architectural instruction or a group of same, depending +on what is convenient for further processing. + +Each pattern has both *fixedbits* and *fixedmask*, the combination of which +describes the condition under which the pattern is matched:: + + (insn & fixedmask) == fixedbits + +Each pattern may have *fields*, which are extracted from the insn and +passed along to the translator. Examples of such are registers, +immediates, and sub-opcodes. + +In support of patterns, one may declare *fields*, *argument sets*, and +*formats*, each of which may be re-used to simplify further definitions. + +Fields +====== + +Syntax:: + + field_def := '%' identifier ( unnamed_field )+ ( !function=identifier )? + unnamed_field := number ':' ( 's' ) number + +For *unnamed_field*, the first number is the least-significant bit position +of the field and the second number is the length of the field. If the 's' is +present, the field is considered signed. If multiple ``unnamed_fields`` are +present, they are concatenated. In this way one can define disjoint fields. + +If ``!function`` is specified, the concatenated result is passed through the +named function, taking and returning an integral value. + +FIXME: the fields of the structure into which this result will be stored +is restricted to ``int``. Which means that we cannot expand 64-bit items. + +Field examples: + ++---------------------------+---------------------------------------------+ +| Input | Generated code | ++===========================+=============================================+ +| %disp 0:s16 | sextract(i, 0, 16) | ++---------------------------+---------------------------------------------+ +| %imm9 16:6 10:3 | extract(i, 16, 6) << 3 | extract(i, 10, 3) | ++---------------------------+---------------------------------------------+ +| %disp12 0:s1 1:1 2:10 | sextract(i, 0, 1) << 11 | | +| | extract(i, 1, 1) << 10 | | +| | extract(i, 2, 10) | ++---------------------------+---------------------------------------------+ +| %shimm8 5:s8 13:1 | expand_shimm8(sextract(i, 5, 8) << 1 | | +| !function=expand_shimm8 | extract(i, 13, 1)) | ++---------------------------+---------------------------------------------+ + +Argument Sets +============= + +Syntax:: + + args_def := '&' identifier ( args_elt )+ ( !extern )? + args_elt := identifier + +Each *args_elt* defines an argument within the argument set. +Each argument set will be rendered as a C structure "arg_$name" +with each of the fields being one of the member arguments. + +If ``!extern`` is specified, the backing structure is assumed +to have been already declared, typically via a second decoder. + +Argument sets are useful when one wants to define helper functions +for the translator functions that can perform operations on a common +set of arguments. This can ensure, for instance, that the ``AND`` +pattern and the ``OR`` pattern put their operands into the same named +structure, so that a common ``gen_logic_insn`` may be able to handle +the operations common between the two. 
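As a concrete (though hypothetical) sketch of how this plays out: for an argument set such as ``&reg3 ra rb rc`` (see the examples below), decodetree renders a struct per the ``arg_$name`` rule above, and a target might then share a single helper across several patterns. Here ``DisasContext``, ``TCGv``, ``cpu_regs`` and the helper itself are illustrative assumptions, not decodetree output::

  /* Hypothetical rendering of "&reg3 ra rb rc": one int member per argument. */
  typedef struct {
      int ra;
      int rb;
      int rc;
  } arg_reg3;

  /* Because the AND and OR patterns share the argument set, one helper can
   * carry the work common to both; only the TCG operation to emit differs. */
  static bool gen_logic_insn(DisasContext *ctx, arg_reg3 *a,
                             void (*fn)(TCGv, TCGv, TCGv))
  {
      fn(cpu_regs[a->rc], cpu_regs[a->ra], cpu_regs[a->rb]);
      return true;
  }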
+ +Argument set examples:: + + &reg3 ra rb rc + &loadstore reg base offset + + +Formats +======= + +Syntax:: + + fmt_def := '@' identifier ( fmt_elt )+ + fmt_elt := fixedbit_elt | field_elt | field_ref | args_ref + fixedbit_elt := [01.-]+ + field_elt := identifier ':' 's'? number + field_ref := '%' identifier | identifier '=' '%' identifier + args_ref := '&' identifier + +Defining a format is a handy way to avoid replicating groups of fields +across many instruction patterns. + +A *fixedbit_elt* describes a contiguous sequence of bits that must +be 1, 0, or don't care. The difference between '.' and '-' +is that '.' means that the bit will be covered with a field or a +final 0 or 1 from the pattern, and '-' means that the bit is really +ignored by the cpu and will not be specified. + +A *field_elt* describes a simple field only given a width; the position of +the field is implied by its position with respect to other *fixedbit_elt* +and *field_elt*. + +If any *fixedbit_elt* or *field_elt* appear, then all bits must be defined. +Padding with a *fixedbit_elt* of all '.' is an easy way to accomplish that. + +A *field_ref* incorporates a field by reference. This is the only way to +add a complex field to a format. A field may be renamed in the process +via assignment to another identifier. This is intended to allow the +same argument set to be used with disjoint named fields. + +A single *args_ref* may specify an argument set to use for the format. +The set of fields in the format must be a subset of the arguments in +the argument set. If an argument set is not specified, one will be +inferred from the set of fields. + +It is recommended, but not required, that all *field_ref* and *args_ref* +appear at the end of the line, not interleaving with *fixedbit_elt* or +*field_elt*. + +Format examples:: + + @opr ...... ra:5 rb:5 ... 0 ....... rc:5 + @opi ...... ra:5 lit:8 1 ....... rc:5 + +Patterns +======== + +Syntax:: + + pat_def := identifier ( pat_elt )+ + pat_elt := fixedbit_elt | field_elt | field_ref | args_ref | fmt_ref | const_elt + fmt_ref := '@' identifier + const_elt := identifier '=' number + +The *fixedbit_elt* and *field_elt* specifiers are unchanged from formats. +A pattern that does not specify a named format will have one inferred +from a referenced argument set (if present) and the set of fields. + +A *const_elt* allows an argument to be set to a constant value. This may +come in handy when fields overlap between patterns and one has to +include the values in the *fixedbit_elt* instead. + +The decoder will call a translator function for each pattern matched. + +Pattern examples:: + + addl_r 010000 ..... ..... .... 0000000 ..... @opr + addl_i 010000 ..... ..... .... 0000000 ..... @opi + +which will, in part, invoke:: + + trans_addl_r(ctx, &arg_opr, insn) + +and:: + + trans_addl_i(ctx, &arg_opi, insn) + +Pattern Groups +============== + +Syntax:: + + group := '{' ( pat_def | group )+ '}' + +A *group* begins with a lone open-brace, with all subsequent lines +indented two spaces, and ends with a lone close-brace. Groups +may be nested, increasing the required indentation of the lines +within the nested group to two spaces per nesting level. + +Unlike ungrouped patterns, grouped patterns are allowed to overlap. +Conflicts are resolved by selecting the patterns in order. If all +of the fixedbits for a pattern match, its translate function will +be called. If the translate function returns false, then subsequent +patterns within the group will be matched.
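A hedged sketch of that fall-through mechanism follows; the target, the ``rt == 0`` condition and the surrounding types are invented for illustration, and only the "return false so a later pattern in the group is tried" behaviour is the point::

  static bool trans_copy(DisasContext *ctx, arg_copy *a)
  {
      if (a->rt == 0) {
          /* Decline the match: let a more general pattern later in the
           * enclosing group (e.g. a plain "or") handle this encoding. */
          return false;
      }
      /* ... emit the TCG ops for a plain register copy ... */
      return true;
  }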
+ +The following example from PA-RISC shows specialization of the *or* +instruction:: + + { + { + nop 000010 ----- ----- 0000 001001 0 00000 + copy 000010 00000 r1:5 0000 001001 0 rt:5 + } + or 000010 rt2:5 r1:5 cf:4 001001 0 rt:5 + } + +When the *cf* field is zero, the instruction has no side effects, +and may be specialized. When the *rt* field is zero, the output +is discarded and so the instruction has no effect. When the *rt2* +field is zero, the operation is ``reg[rt] | 0`` and so encodes +the canonical register copy operation. + +The output from the generator might look like:: + + switch (insn & 0xfc000fe0) { + case 0x08000240: + /* 000010.. ........ ....0010 010..... */ + if ((insn & 0x0000f000) == 0x00000000) { + /* 000010.. ........ 00000010 010..... */ + if ((insn & 0x0000001f) == 0x00000000) { + /* 000010.. ........ 00000010 01000000 */ + extract_decode_Fmt_0(&u.f_decode0, insn); + if (trans_nop(ctx, &u.f_decode0)) return true; + } + if ((insn & 0x03e00000) == 0x00000000) { + /* 00001000 000..... 00000010 010..... */ + extract_decode_Fmt_1(&u.f_decode1, insn); + if (trans_copy(ctx, &u.f_decode1)) return true; + } + } + extract_decode_Fmt_2(&u.f_decode2, insn); + if (trans_or(ctx, &u.f_decode2)) return true; + return false; + } diff --git a/docs/devel/index.rst b/docs/devel/index.rst index 6b11e49caa..ebbab636ce 100644 --- a/docs/devel/index.rst +++ b/docs/devel/index.rst @@ -19,4 +19,4 @@ Contents: migration stable-process testing - + decodetree diff --git a/docs/devel/s390-dasd-ipl.txt b/docs/devel/s390-dasd-ipl.txt new file mode 100644 index 0000000000..9107e048e4 --- /dev/null +++ b/docs/devel/s390-dasd-ipl.txt @@ -0,0 +1,133 @@ +***************************** +***** s390 hardware IPL ***** +***************************** + +The s390 hardware IPL process consists of the following steps. + +1. A READ IPL ccw is constructed in memory location 0x0. + This ccw, by definition, reads the IPL1 record which is located on the disk + at cylinder 0 track 0 record 1. Note that the chain flag is on in this ccw + so when it is complete another ccw will be fetched and executed from memory + location 0x08. + +2. Execute the Read IPL ccw at 0x00, thereby reading IPL1 data into 0x00. + IPL1 data is 24 bytes in length and consists of the following pieces of + information: [psw][read ccw][tic ccw]. When the machine executes the Read + IPL ccw it read the 24-bytes of IPL1 to be read into memory starting at + location 0x0. Then the ccw program at 0x08 which consists of a read + ccw and a tic ccw is automatically executed because of the chain flag from + the original READ IPL ccw. The read ccw will read the IPL2 data into memory + and the TIC (Transfer In Channel) will transfer control to the channel + program contained in the IPL2 data. The TIC channel command is the + equivalent of a branch/jump/goto instruction for channel programs. + NOTE: The ccws in IPL1 are defined by the architecture to be format 0. + +3. Execute IPL2. + The TIC ccw instruction at the end of the IPL1 channel program will begin + the execution of the IPL2 channel program. IPL2 is stage-2 of the boot + process and will contain a larger channel program than IPL1. The point of + IPL2 is to find and load either the operating system or a small program that + loads the operating system from disk. At the end of this step all or some of + the real operating system is loaded into memory and we are ready to hand + control over to the guest operating system. 
At this point the guest + operating system is entirely responsible for loading any more data it might + need to function. NOTE: The IPL2 channel program might read data into memory + location 0 thereby overwriting the IPL1 psw and channel program. This is ok + as long as the data placed in location 0 contains a psw whose instruction + address points to the guest operating system code to execute at the end of + the IPL/boot process. + NOTE: The ccws in IPL2 are defined by the architecture to be format 0. + +4. Start executing the guest operating system. + The psw that was loaded into memory location 0 as part of the ipl process + should contain the needed flags for the operating system we have loaded. The + psw's instruction address will point to the location in memory where we want + to start executing the operating system. This psw is loaded (via LPSW + instruction) causing control to be passed to the operating system code. + +In a non-virtualized environment this process, handled entirely by the hardware, +is kicked off by the user initiating a "Load" procedure from the hardware +management console. This "Load" procedure crafts a special "Read IPL" ccw in +memory location 0x0 that reads IPL1. It then executes this ccw thereby kicking +off the reading of IPL1 data. Since the channel program from IPL1 will be +written immediately after the special "Read IPL" ccw, the IPL1 channel program +will be executed immediately (the special read ccw has the chaining bit turned +on). The TIC at the end of the IPL1 channel program will cause the IPL2 channel +program to be executed automatically. After this sequence completes the "Load" +procedure then loads the psw from 0x0. + +********************************************************** +***** How this all pertains to QEMU (and the kernel) ***** +********************************************************** + +In theory we should merely have to do the following to IPL/boot a guest +operating system from a DASD device: + +1. Place a "Read IPL" ccw into memory location 0x0 with chaining bit on. +2. Execute channel program at 0x0. +3. LPSW 0x0. + +However, our emulation of the machine's channel program logic within the kernel +is missing one key feature that is required for this process to work: +non-prefetch of ccw data. + +When we start a channel program we pass the channel subsystem parameters via an +ORB (Operation Request Block). One of those parameters is a prefetch bit. If the +bit is on then the vfio-ccw kernel driver is allowed to read the entire channel +program from guest memory before it starts executing it. This means that any +channel commands that read additional channel commands will not work as expected +because the newly read commands will only exist in guest memory and NOT within +the kernel's channel subsystem memory. The kernel vfio-ccw driver currently +requires this bit to be on for all channel programs. This is a problem because +the IPL process consists of transferring control from the "Read IPL" ccw +immediately to the IPL1 channel program that was read by "Read IPL". + +Not being able to turn off prefetch will also prevent the TIC at the end of the +IPL1 channel program from transferring control to the IPL2 channel program. + +Lastly, in some cases (the zipl bootloader for example) the IPL2 program also +transfers control to another channel program segment immediately after reading +it from the disk. So we need to be able to handle this case. 
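For orientation, the format-0 CCWs being chained, prefetched and re-driven in the discussion above are 8-byte entries that could be sketched in C roughly as below. The struct and field names are illustrative (not taken from the QEMU or kernel sources); the command codes shown are the classic Read IPL and TIC values.

#include <stdint.h>

/* Rough sketch of a format-0 CCW as used by the IPL1/IPL2 channel programs. */
typedef struct CCW0 {
    uint8_t  cmd_code;   /* e.g. 0x02 Read IPL, 0x08 TIC                 */
    uint8_t  cda[3];     /* 24-bit data address, big endian              */
    uint8_t  flags;      /* 0x40 = CC, "command chain" to the next CCW   */
    uint8_t  reserved;
    uint16_t count;      /* byte count                                   */
} __attribute__((packed)) CCW0;

/* The special "Read IPL" CCW built at address 0x0: read the 24 bytes of
 * IPL1 to address 0x0, with chaining on so the CCWs that land at 0x08 and
 * 0x10 are executed next. (count shown in host order for readability)    */
static const CCW0 read_ipl = {
    .cmd_code = 0x02,
    .cda      = { 0x00, 0x00, 0x00 },
    .flags    = 0x40,
    .count    = 24,
};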
+ +************************** +***** What QEMU does ***** +************************** + +Since we are forced to live with prefetch, we cannot use the very simple IPL +procedure we defined in the preceding section. So we compensate by doing the +following. + +1. Place "Read IPL" ccw into memory location 0x0, but turn off the chaining bit. +2. Execute "Read IPL" at 0x0. + + So now IPL1's psw is at 0x0 and IPL1's channel program is at 0x08. + +3. Write a custom channel program that will seek to the IPL2 record and then + execute the READ and TIC ccws from IPL1. Normally the seek is not required + because after reading the IPL1 record the disk is automatically positioned + to read the very next record which will be IPL2. But since we are not reading + both IPL1 and IPL2 as part of the same channel program we must manually set + the position. + +4. Grab the target address of the TIC instruction from the IPL1 channel program. + This address is where the IPL2 channel program starts. + + Now IPL2 is loaded into memory somewhere, and we know the address. + +5. Execute the IPL2 channel program at the address obtained in step #4. + + Because this channel program can be dynamic, we must use a special algorithm + that detects a READ immediately followed by a TIC and breaks the ccw chain + by turning off the chain bit in the READ ccw. When control is returned from + the kernel/hardware to the QEMU bios code we immediately issue another start + subchannel to execute the remaining TIC instruction. This causes the entire + channel program (starting from the TIC) and all needed data to be refetched + thereby stepping around the limitation that would otherwise prevent this + channel program from executing properly. + + Now the operating system code is loaded somewhere in guest memory and the psw + in memory location 0x0 will point to entry code for the guest operating + system. + +6. LPSW 0x0. + LPSW transfers control to the guest operating system and we're done. diff --git a/docs/interop/qcow2.txt b/docs/interop/qcow2.txt index 8c3098d8d9..af5711e533 100644 --- a/docs/interop/qcow2.txt +++ b/docs/interop/qcow2.txt @@ -633,7 +633,10 @@ Structure of a bitmap directory entry: Bit 0: in_use The bitmap was not saved correctly and may be - inconsistent. + inconsistent. Although the bitmap metadata is still + well-formed from a qcow2 perspective, the metadata + (such as the auto flag or bitmap size) or data + contents may be outdated. 1: auto The bitmap must reflect all changes of the virtual @@ -761,8 +764,8 @@ corresponding range of the virtual disk (see above) was written to while the bitmap was 'enabled'. An unset bit means that this range was not written to. The software doesn't have to sync the bitmap in the image file with its -representation in RAM after each write. Flag 'in_use' should be set while the -bitmap is not synced. +representation in RAM after each write or metadata change. Flag 'in_use' +should be set while the bitmap is not synced. In the image file the 'enabled' state is reflected by the 'auto' flag. If this flag is set, the software must consider the bitmap as 'enabled' and start diff --git a/docs/interop/vhost-user.json b/docs/interop/vhost-user.json new file mode 100644 index 0000000000..ae88c03117 --- /dev/null +++ b/docs/interop/vhost-user.json @@ -0,0 +1,232 @@ +# -*- Mode: Python -*- +# +# Copyright (C) 2018 Red Hat, Inc. +# +# Authors: +# Marc-André Lureau <marcandre.lureau@redhat.com> +# +# This work is licensed under the terms of the GNU GPL, version 2 or +# later.
See the COPYING file in the top-level directory. + +## +# = vhost user backend discovery & capabilities +## + +## +# @VHostUserBackendType: +# +# List the various vhost user backend types. +# +# @9p: 9p virtio console +# @balloon: virtio balloon +# @block: virtio block +# @caif: virtio caif +# @console: virtio console +# @crypto: virtio crypto +# @gpu: virtio gpu +# @input: virtio input +# @net: virtio net +# @rng: virtio rng +# @rpmsg: virtio remote processor messaging +# @rproc-serial: virtio remoteproc serial link +# @scsi: virtio scsi +# @vsock: virtio vsock transport +# +# Since: 4.0 +## +{ + 'enum': 'VHostUserBackendType', + 'data': [ + '9p', + 'balloon', + 'block', + 'caif', + 'console', + 'crypto', + 'gpu', + 'input', + 'net', + 'rng', + 'rpmsg', + 'rproc-serial', + 'scsi', + 'vsock' + ] +} + +## +# @VHostUserBackendInputFeature: +# +# List of vhost user "input" features. +# +# @evdev-path: The --evdev-path command line option is supported. +# @no-grab: The --no-grab command line option is supported. +# +# Since: 4.0 +## +{ + 'enum': 'VHostUserBackendInputFeature', + 'data': [ 'evdev-path', 'no-grab' ] +} + +## +# @VHostUserBackendCapabilitiesInput: +# +# Capabilities reported by vhost user "input" backends +# +# @features: list of supported features. +# +# Since: 4.0 +## +{ + 'struct': 'VHostUserBackendCapabilitiesInput', + 'data': { + 'features': [ 'VHostUserBackendInputFeature' ] + } +} + +## +# @VHostUserBackendGPUFeature: +# +# List of vhost user "gpu" features. +# +# @render-node: The --render-node command line option is supported. +# @virgl: The --virgl command line option is supported. +# +# Since: 4.0 +## +{ + 'enum': 'VHostUserBackendGPUFeature', + 'data': [ 'render-node', 'virgl' ] +} + +## +# @VHostUserBackendCapabilitiesGPU: +# +# Capabilities reported by vhost user "gpu" backends. +# +# @features: list of supported features. +# +# Since: 4.0 +## +{ + 'struct': 'VHostUserBackendCapabilitiesGPU', + 'data': { + 'features': [ 'VHostUserBackendGPUFeature' ] + } +} + +## +# @VHostUserBackendCapabilities: +# +# Capabilities reported by vhost user backends. +# +# @type: The vhost user backend type. +# +# Since: 4.0 +## +{ + 'union': 'VHostUserBackendCapabilities', + 'base': { 'type': 'VHostUserBackendType' }, + 'discriminator': 'type', + 'data': { + 'input': 'VHostUserBackendCapabilitiesInput', + 'gpu': 'VHostUserBackendCapabilitiesGPU' + } +} + +## +# @VhostUserBackend: +# +# Describes a vhost user backend to management software. +# +# It is possible for multiple @VhostUserBackend elements to match the +# search criteria of management software. Applications thus need rules +# to pick one of the many matches, and users need the ability to +# override distro defaults. +# +# It is recommended to create vhost user backend JSON files (each +# containing a single @VhostUserBackend root element) with a +# double-digit prefix, for example "50-qemu-gpu.json", +# "50-crosvm-gpu.json", etc, so they can be sorted in predictable +# order. The backend JSON files should be searched for in three +# directories: +# +# - /usr/share/qemu/vhost-user -- populated by distro-provided +# packages (XDG_DATA_DIRS covers +# /usr/share by default), +# +# - /etc/qemu/vhost-user -- exclusively for sysadmins' local additions, +# +# - $XDG_CONFIG_HOME/qemu/vhost-user -- exclusively for per-user local +# additions (XDG_CONFIG_HOME +# defaults to $HOME/.config). +# +# Top-down, the list of directories goes from general to specific. 
+# +# Management software should build a list of files from all three +# locations, then sort the list by filename (i.e., basename +# component). Management software should choose the first JSON file on +# the sorted list that matches the search criteria. If a more specific +# directory has a file with same name as a less specific directory, +# then the file in the more specific directory takes effect. If the +# more specific file is zero length, it hides the less specific one. +# +# For example, if a distro ships +# +# - /usr/share/qemu/vhost-user/50-qemu-gpu.json +# +# - /usr/share/qemu/vhost-user/50-crosvm-gpu.json +# +# then the sysadmin can prevent the default QEMU being used at all with +# +# $ touch /etc/qemu/vhost-user/50-qemu-gpu.json +# +# The sysadmin can replace/alter the distro default OVMF with +# +# $ vim /etc/qemu/vhost-user/50-qemu-gpu.json +# +# or they can provide a parallel QEMU GPU with higher priority +# +# $ vim /etc/qemu/vhost-user/10-qemu-gpu.json +# +# or they can provide a parallel OVMF with lower priority +# +# $ vim /etc/qemu/vhost-user/99-qemu-gpu.json +# +# @type: The vhost user backend type. +# +# @description: Provides a human-readable description of the backend. +# Management software may or may not display @description. +# +# @binary: Absolute path to the backend binary. +# +# @tags: An optional list of auxiliary strings associated with the +# backend for which @description is not appropriate, due to the +# latter's possible exposure to the end-user. @tags serves +# development and debugging purposes only, and management +# software shall explicitly ignore it. +# +# Since: 4.0 +# +# Example: +# +# { +# "description": "QEMU vhost-user-gpu", +# "type": "gpu", +# "binary": "/usr/libexec/qemu/vhost-user-gpu", +# "tags": [ +# "CONFIG_OPENGL_DMABUF=y" +# ] +# } +# +## +{ + 'struct' : 'VhostUserBackend', + 'data' : { + 'description': 'str', + 'type': 'VHostUserBackendType', + 'binary': 'str', + '*tags': [ 'str' ] + } +} diff --git a/docs/interop/vhost-user.txt b/docs/interop/vhost-user.txt index c2194711d9..4dbd530cb9 100644 --- a/docs/interop/vhost-user.txt +++ b/docs/interop/vhost-user.txt @@ -17,8 +17,13 @@ The protocol defines 2 sides of the communication, master and slave. Master is the application that shares its virtqueues, in our case QEMU. Slave is the consumer of the virtqueues. -In the current implementation QEMU is the Master, and the Slave is intended to -be a software Ethernet switch running in user space, such as Snabbswitch. +In the current implementation QEMU is the Master, and the Slave is the +external process consuming the virtio queues, for example a software +Ethernet switch running in user space, such as Snabbswitch, or a block +device backend processing read & write to a virtual disk. In order to +facilitate interoperability between various backend implementations, +it is recommended to follow the "Backend program conventions" +described in this document. Master and slave can be either a client (i.e. connecting) or server (listening) in the socket communication. 
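For reference, the "inflight description" payload documented in the hunks below maps onto a C structure of roughly the following shape. This is a sketch reconstructed from the field layout given in the text (two 64-bit fields followed by two 16-bit fields); the field names are descriptive rather than quoted from the QEMU headers.

#include <stdint.h>

typedef struct VhostUserInflight {
    uint64_t mmap_size;    /* size of the area used to track inflight I/O */
    uint64_t mmap_offset;  /* offset of that area within the passed fd    */
    uint16_t num_queues;   /* number of virtqueues                        */
    uint16_t queue_size;   /* size of each virtqueue                      */
} VhostUserInflight;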
@@ -142,6 +147,17 @@ Depending on the request type, payload can be: Offset: a 64-bit offset of this area from the start of the supplied file descriptor + * Inflight description + ----------------------------------------------------- + | mmap size | mmap offset | num queues | queue size | + ----------------------------------------------------- + + mmap size: a 64-bit size of area to track inflight I/O + mmap offset: a 64-bit offset of this area from the start + of the supplied file descriptor + num queues: a 16-bit number of virtqueues + queue size: a 16-bit size of virtqueues + In QEMU the vhost-user message is implemented with the following struct: typedef struct VhostUserMsg { @@ -157,6 +173,7 @@ typedef struct VhostUserMsg { struct vhost_iotlb_msg iotlb; VhostUserConfig config; VhostUserVringArea area; + VhostUserInflight inflight; }; } QEMU_PACKED VhostUserMsg; @@ -175,6 +192,7 @@ the ones that do: * VHOST_USER_GET_PROTOCOL_FEATURES * VHOST_USER_GET_VRING_BASE * VHOST_USER_SET_LOG_BASE (if VHOST_USER_PROTOCOL_F_LOG_SHMFD) + * VHOST_USER_GET_INFLIGHT_FD (if VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD) [ Also see the section on REPLY_ACK protocol extension. ] @@ -188,6 +206,7 @@ in the ancillary data: * VHOST_USER_SET_VRING_CALL * VHOST_USER_SET_VRING_ERR * VHOST_USER_SET_SLAVE_REQ_FD + * VHOST_USER_SET_INFLIGHT_FD (if VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD) If Master is unable to send the full message or receives a wrong reply it will close the connection. An optional reconnection mechanism can be implemented. @@ -382,6 +401,256 @@ If VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD protocol feature is negotiated, slave can send file descriptors (at most 8 descriptors in each message) to master via ancillary data using this fd communication channel. +Inflight I/O tracking +--------------------- + +To support reconnecting after restart or crash, slave may need to resubmit +inflight I/Os. If virtqueue is processed in order, we can easily achieve +that by getting the inflight descriptors from descriptor table (split virtqueue) +or descriptor ring (packed virtqueue). However, this can't work when we process +descriptors out-of-order, because some entries which store the information of +inflight descriptors in available ring (split virtqueue) or descriptor +ring (packed virtqueue) might be overwritten by new entries. To solve this +problem, slave needs to allocate an extra buffer to store the information of inflight +descriptors and share it with master for persistence. VHOST_USER_GET_INFLIGHT_FD and +VHOST_USER_SET_INFLIGHT_FD are used to transfer this buffer between master +and slave. The format of this buffer is described below: + +------------------------------------------------------- +| queue0 region | queue1 region | ... | queueN region | +------------------------------------------------------- + +N is the number of available virtqueues. Slave could get it from num queues +field of VhostUserInflight. + +For split virtqueue, queue region can be implemented as: + +typedef struct DescStateSplit { + /* Indicate whether this descriptor is inflight or not. + * Only available for head-descriptor. */ + uint8_t inflight; + + /* Padding */ + uint8_t padding[5]; + + /* Maintain a list for the last batch of used descriptors. + * Only available when batching is used for submitting */ + uint16_t next; + + /* Used to preserve the order of fetching available descriptors. + * Only available for head-descriptor.
*/ + uint64_t counter; +} DescStateSplit; + +typedef struct QueueRegionSplit { + /* The feature flags of this region. Now it's initialized to 0. */ + uint64_t features; + + /* The version of this region. It's 1 currently. + * Zero value indicates an uninitialized buffer */ + uint16_t version; + + /* The size of DescStateSplit array. It's equal to the virtqueue + * size. Slave could get it from queue size field of VhostUserInflight. */ + uint16_t desc_num; + + /* The head of list that track the last batch of used descriptors. */ + uint16_t last_batch_head; + + /* Store the idx value of used ring */ + uint16_t used_idx; + + /* Used to track the state of each descriptor in descriptor table */ + DescStateSplit desc[0]; +} QueueRegionSplit; + +To track inflight I/O, the queue region should be processed as follows: + +When receiving available buffers from the driver: + + 1. Get the next available head-descriptor index from available ring, i + + 2. Set desc[i].counter to the value of global counter + + 3. Increase global counter by 1 + + 4. Set desc[i].inflight to 1 + +When supplying used buffers to the driver: + + 1. Get corresponding used head-descriptor index, i + + 2. Set desc[i].next to last_batch_head + + 3. Set last_batch_head to i + + 4. Steps 1,2,3 may be performed repeatedly if batching is possible + + 5. Increase the idx value of used ring by the size of the batch + + 6. Set the inflight field of each DescStateSplit entry in the batch to 0 + + 7. Set used_idx to the idx value of used ring + +When reconnecting: + + 1. If the value of used_idx does not match the idx value of used ring (means + the inflight field of DescStateSplit entries in last batch may be incorrect), + + (a) Subtract the value of used_idx from the idx value of used ring to get + last batch size of DescStateSplit entries + + (b) Set the inflight field of each DescStateSplit entry to 0 in last batch + list which starts from last_batch_head + + (c) Set used_idx to the idx value of used ring + + 2. Resubmit inflight DescStateSplit entries in order of their counter value + +For packed virtqueue, queue region can be implemented as: + +typedef struct DescStatePacked { + /* Indicate whether this descriptor is inflight or not. + * Only available for head-descriptor. */ + uint8_t inflight; + + /* Padding */ + uint8_t padding; + + /* Link to the next free entry */ + uint16_t next; + + /* Link to the last entry of descriptor list. + * Only available for head-descriptor. */ + uint16_t last; + + /* The length of descriptor list. + * Only available for head-descriptor. */ + uint16_t num; + + /* Used to preserve the order of fetching available descriptors. + * Only available for head-descriptor. */ + uint64_t counter; + + /* The buffer id */ + uint16_t id; + + /* The descriptor flags */ + uint16_t flags; + + /* The buffer length */ + uint32_t len; + + /* The buffer address */ + uint64_t addr; +} DescStatePacked; + +typedef struct QueueRegionPacked { + /* The feature flags of this region. Now it's initialized to 0. */ + uint64_t features; + + /* The version of this region. It's 1 currently. + * Zero value indicates an uninitialized buffer */ + uint16_t version; + + /* The size of DescStatePacked array. It's equal to the virtqueue + * size. Slave could get it from queue size field of VhostUserInflight. 
*/ + uint16_t desc_num; + + /* The head of free DescStatePacked entry list */ + uint16_t free_head; + + /* The old head of free DescStatePacked entry list */ + uint16_t old_free_head; + + /* The used index of descriptor ring */ + uint16_t used_idx; + + /* The old used index of descriptor ring */ + uint16_t old_used_idx; + + /* Device ring wrap counter */ + uint8_t used_wrap_counter; + + /* The old device ring wrap counter */ + uint8_t old_used_wrap_counter; + + /* Padding */ + uint8_t padding[7]; + + /* Used to track the state of each descriptor fetched from descriptor ring */ + DescStatePacked desc[0]; +} QueueRegionPacked; + +To track inflight I/O, the queue region should be processed as follows: + +When receiving available buffers from the driver: + + 1. Get the next available descriptor entry from descriptor ring, d + + 2. If d is head descriptor, + + (a) Set desc[old_free_head].num to 0 + + (b) Set desc[old_free_head].counter to the value of global counter + + (c) Increase global counter by 1 + + (d) Set desc[old_free_head].inflight to 1 + + 3. If d is last descriptor, set desc[old_free_head].last to free_head + + 4. Increase desc[old_free_head].num by 1 + + 5. Set desc[free_head].addr, desc[free_head].len, desc[free_head].flags, + desc[free_head].id to d.addr, d.len, d.flags, d.id + + 6. Set free_head to desc[free_head].next + + 7. If d is last descriptor, set old_free_head to free_head + +When supplying used buffers to the driver: + + 1. Get corresponding used head-descriptor entry from descriptor ring, d + + 2. Get corresponding DescStatePacked entry, e + + 3. Set desc[e.last].next to free_head + + 4. Set free_head to the index of e + + 5. Steps 1,2,3,4 may be performed repeatedly if batching is possible + + 6. Increase used_idx by the size of the batch and update used_wrap_counter if needed + + 7. Update d.flags + + 8. Set the inflight field of each head DescStatePacked entry in the batch to 0 + + 9. Set old_free_head, old_used_idx, old_used_wrap_counter to free_head, used_idx, + used_wrap_counter + +When reconnecting: + + 1. If used_idx does not match old_used_idx (means the inflight field of DescStatePacked + entries in last batch may be incorrect), + + (a) Get the next descriptor ring entry through old_used_idx, d + + (b) Use old_used_wrap_counter to calculate the available flags + + (c) If d.flags is not equal to the calculated flags value (means slave has + submitted the buffer to guest driver before crash, so it has to commit the + in-progres update), set old_free_head, old_used_idx, old_used_wrap_counter + to free_head, used_idx, used_wrap_counter + + 2. Set free_head, used_idx, used_wrap_counter to old_free_head, old_used_idx, + old_used_wrap_counter (roll back any in-progress update) + + 3. Set the inflight field of each DescStatePacked entry in free list to 0 + + 4. Resubmit inflight DescStatePacked entries in order of their counter value + Protocol features ----------------- @@ -397,6 +666,7 @@ Protocol features #define VHOST_USER_PROTOCOL_F_CONFIG 9 #define VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD 10 #define VHOST_USER_PROTOCOL_F_HOST_NOTIFIER 11 +#define VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD 12 Master message types -------------------- @@ -761,6 +1031,26 @@ Master message types was previously sent. The value returned is an error indication; 0 is success. 
+ * VHOST_USER_GET_INFLIGHT_FD + Id: 31 + Equivalent ioctl: N/A + Master payload: inflight description + + When VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD protocol feature has been + successfully negotiated, this message is submitted by master to get + a shared buffer from slave. The shared buffer will be used to track + inflight I/O by slave. QEMU should retrieve a new one when the VM is reset. + + * VHOST_USER_SET_INFLIGHT_FD + Id: 32 + Equivalent ioctl: N/A + Master payload: inflight description + + When VHOST_USER_PROTOCOL_F_INFLIGHT_SHMFD protocol feature has been + successfully negotiated, this message is submitted by master to send + the shared inflight buffer back to slave so that slave can recover + inflight I/O after a crash or restart. + Slave message types ------------------- @@ -835,3 +1125,95 @@ resilient for selective requests. For the message types that already solicit a reply from the client, the presence of VHOST_USER_PROTOCOL_F_REPLY_ACK or need_reply bit being set brings no behavioural change. (See the 'Communication' section for details.) + +Backend program conventions +--------------------------- + +vhost-user backends can provide various devices & services and may +need to be configured manually depending on the use case. However, it +is a good idea to follow the conventions listed here when +possible. Users of the backends, such as QEMU or libvirt, can then rely on some common +behaviour to avoid heterogeneous configuration and management of the +backend programs and facilitate interoperability. + +Each backend installed on a host system should come with at least one +JSON file that conforms to the vhost-user.json schema. Each file +informs the management applications about the backend type and binary +location. In addition, it defines rules for management apps for +picking the highest priority backend when multiple match the search +criteria (see @VhostUserBackend documentation in the schema file). + +If the backend is not capable of enabling a requested feature on the +host (such as 3D acceleration with virgl), or if its initialization +fails, the backend should fail to start early and exit with a status +!= 0. It may also print a message to stderr for further details. + +The backend program must not daemonize itself, but it may be +daemonized by the management layer. It may also have restricted +access to the system. + +File descriptors 0, 1 and 2 will exist, and have regular +stdin/stdout/stderr usage (they may have been redirected to /dev/null +by the management layer, or to a log handler). + +The backend program must end (as quickly and cleanly as possible) when +the SIGTERM signal is received. It may then be sent SIGKILL by +the management layer after a few seconds. + +The following command line options have an expected behaviour. They +are mandatory, unless stated otherwise: + +* --socket-path=PATH + +This option specifies the location of the vhost-user Unix domain socket. +It is incompatible with --fd. + +* --fd=FDNUM + +When this argument is given, the backend program is started with the +vhost-user socket as file descriptor FDNUM. It is incompatible with +--socket-path. + +* --print-capabilities + +Output to stdout the backend capabilities in JSON format, and then +exit successfully. Other options and arguments should be ignored, and +the backend program should not perform its normal function. The +capabilities can be reported dynamically depending on the host +capabilities. + +The JSON output is described in the vhost-user.json schema, by +@VHostUserBackendCapabilities.
Example: +{ + "type": "foo", + "features": [ + "feature-a", + "feature-b" + ] +} + +vhost-user-input +---------------- + +Command line options: + +* --evdev-path=PATH (optional) + +Specify the Linux input device. + +* --no-grab (optional) + +Do not request exclusive access to the input device. + +vhost-user-gpu +-------------- + +Command line options: + +* --render-node=PATH (optional) + +Specify the GPU DRM render node. + +* --virgl (optional) + +Enable virgl rendering support. diff --git a/docs/nvdimm.txt b/docs/nvdimm.txt index 7231c2d78f..b531cacd35 100644 --- a/docs/nvdimm.txt +++ b/docs/nvdimm.txt @@ -144,9 +144,25 @@ Guest Data Persistence ---------------------- Though QEMU supports multiple types of vNVDIMM backends on Linux, -currently the only one that can guarantee the guest write persistence -is the device DAX on the real NVDIMM device (e.g., /dev/dax0.0), to -which all guest access do not involve any host-side kernel cache. +the only backends that can guarantee the guest write persistence are: + +A. DAX device (e.g., /dev/dax0.0), or +B. DAX file (mounted with the dax option) + +When using B (a file supporting direct mapping of persistent memory) +as a backend, write persistence is guaranteed if the host kernel has +support for the MAP_SYNC flag in the mmap system call (available +since Linux 4.15 and on certain distro kernels) and additionally +both 'pmem' and 'share' flags are set to 'on' on the backend. + +If these conditions are not satisfied, i.e. if either 'pmem' or 'share' +are not set, if the backend file does not support DAX or if MAP_SYNC +is not supported by the host kernel, write persistence is not +guaranteed after a system crash. For compatibility reasons, these +conditions are ignored if not satisfied. Currently, no way is +provided to test for them. +For more details, please refer to the mmap(2) man page: +http://man7.org/linux/man-pages/man2/mmap.2.html. When using other types of backends, it's suggested to set 'unarmed' option of '-device nvdimm' to 'on', which sets the unarmed flag of the diff --git a/docs/qemu-cpu-models.texi b/docs/qemu-cpu-models.texi index 1b72584161..23c11dc86f 100644 --- a/docs/qemu-cpu-models.texi +++ b/docs/qemu-cpu-models.texi @@ -158,8 +158,7 @@ support this feature. @item @code{spec-ctrl} -Required to enable the Spectre (CVE-2017-5753 and CVE-2017-5715) fix, -in cases where retpolines are not sufficient. +Required to enable the Spectre v2 (CVE-2017-5715) fix. Included by default in Intel CPU models with -IBRS suffix. @@ -169,6 +168,17 @@ Requires the host CPU microcode to support this feature before it can be used for guest CPUs. +@item @code{stibp} + +Required to enable stronger Spectre v2 (CVE-2017-5715) fixes in some +operating systems. + +Must be explicitly turned on for all Intel CPU models. + +Requires the host CPU microcode to support this feature before it +can be used for guest CPUs. + + @item @code{ssbd} Required to enable the CVE-2018-3639 fix @@ -249,8 +259,7 @@ included if using "Host passthrough" or "Host model". @item @code{ibpb} -Required to enable the Spectre (CVE-2017-5753 and CVE-2017-5715) fix, -in cases where retpolines are not sufficient. +Required to enable the Spectre v2 (CVE-2017-5715) fix. Included by default in AMD CPU models with -IBPB suffix. @@ -260,6 +269,17 @@ Requires the host CPU microcode to support this feature before it can be used for guest CPUs. +@item @code{stibp} + +Required to enable stronger Spectre v2 (CVE-2017-5715) fixes in some +operating systems.
+ +Must be explicitly turned on for all AMD CPU models. + +Requires the host CPU microcode to support this feature before it +can be used for guest CPUs. + + @item @code{virt-ssbd} Required to enable the CVE-2018-3639 fix