* reloc - telemetries contained in the payload to construct proper trampoline.
* hook - an auxiliary function being called before, during or after payload
application or revert.
- * quiescing zone - period when all CPUs are lock-step with each other.
+ * quiescent zone - period when all CPUs are in lock-step with each other.
## History
* Relocations for each of these sections.
The Xen Live Patch core code loads the payload as a standard ELF binary, relocates it
-and handles the architecture-specifc sections as needed. This process is much
+and handles the architecture-specific sections as needed. This process is much
like what the Linux kernel module loader does.
The payload contains at least three sections:
to `old_addr`.
It optionally may contain the address of hooks to be called right before
-being applied and after being reverted (while all CPUs are still in quiescing
+being applied and after being reverted (while all CPUs are still in quiescent
zone). These hooks do not have access to payload structure.
* `.livepatch.hooks.load` - an array of function pointers.
It optionally may also contain the address of pre- and post- vetoing hooks to
be called before (pre) or after (post) apply and revert payload actions (while
-all CPUs are already released from quiescing zone). These hooks do have
+all CPUs are already released from quiescent zone). These hooks do have
-access to payload structure. The pre-apply hook can prevent from loading the
-payload if encoded in it condition is not met. Accordingly, the pre-revert
-hook can prevent from unloading the livepatch if encoded in it condition is not
+access to payload structure. The pre-apply hook can prevent loading the
+payload if the condition encoded in it is not met. Accordingly, the pre-revert
+hook can prevent unloading the livepatch if the condition encoded in it is not
Finally, it optionally may also contain the address of apply or revert action
hooks to be called instead of the default apply and revert payload actions
-(while all CPUs are kept in quiescing zone). These hooks do have access to
+(while all CPUs are kept in quiescent zone). These hooks do have access to
payload structure.
* `.livepatch.hooks.{apply,revert}`
This section contains a pointer to a single function pointer to be executed
before apply action is scheduled (and thereby before CPUs are put into
-quiescing zone). This is useful to prevent from applying a payload when
+quiescent zone). This is useful to prevent applying a payload when
certain expected conditions aren't met or when mutating actions implemented
in the hook fail or cannot be executed.
-This type of hooks do have access to payload structure.
+These hooks do have access to payload structure.
#### .livepatch.hooks.postapply
This section contains a pointer to a single function pointer to be executed
-after apply action has finished and after all CPUs left the quiescing zone.
+after apply action has finished and after all CPUs left the quiescent zone.
This is useful to provide an ability to follow up on actions performed by
the preapply hook. Especially, when module application was successful or to
be able to undo certain preparation steps of the preapply hook in case of a
This section contains a pointer to a single function pointer to be executed
before revert action is scheduled (and thereby before CPUs are put into
-quiescing zone). This is useful to prevent from reverting a payload when
+quiescent zone). This is useful to prevent reverting a payload when
certain expected conditions aren't met or when mutating actions implemented
in the hook fail or cannot be executed.
-This type of hooks do have access to payload structure.
+These hooks do have access to payload structure.
#### .livepatch.hooks.postrevert
This section contains a pointer to a single function pointer to be executed
-after revert action has finished and after all CPUs left the quiescing zone.
+after revert action has finished and after all CPUs left the quiescent zone.
This is useful to provide an ability to perform cleanup of all previously
executed mutating actions in order to restore the original system state from
before the current payload application. The success/failure error code is
This section contains a pointer to a single function pointer to be executed
instead of a default apply (or revert) action function. This is useful to
replace or augment default behavior of the apply (or revert) action that
-requires all CPUs to be in the quiescing zone.
+requires all CPUs to be in the quiescent zone.
-This type of hooks do have access to payload structure.
+These hooks do have access to payload structure.
Each entry in this array is eight bytes.
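
For orientation, a minimal sketch of how a payload could populate these hook
sections is shown below. It is illustrative only: the function names are
invented here, in-tree payloads normally use helper macros rather than
open-coded section attributes, and the exact hook prototypes depend on the
hypervisor version.

```c
/* Illustrative only: hook and variable names below are invented. */
struct livepatch_payload;                      /* opaque in this sketch */

/* Load hook: runs inside the quiescent zone, no access to the payload. */
static void my_load_hook(void)
{
    /* e.g. prime data that the patched functions rely on */
}

/* Placing the pointer in .livepatch.hooks.load makes it one entry
 * (eight bytes on 64-bit) of the array described above. */
static void (*const my_load_hook_ptr)(void)
    __attribute__((__used__, __section__(".livepatch.hooks.load"))) = my_load_hook;

/* Pre-apply hook: runs before the quiescent zone, receives the payload,
 * and returning an error is assumed here to veto the apply. */
static int my_preapply_hook(struct livepatch_payload *payload)
{
    (void)payload;
    return 0;                                  /* 0: conditions met, go ahead */
}

static int (*const my_preapply_hook_ptr)(struct livepatch_payload *)
    __attribute__((__used__, __section__(".livepatch.hooks.preapply"))) = my_preapply_hook;
```
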
### .livepatch.xen_depends, .livepatch.depends and .note.gnu.build-id
To support dependencies checking and safe loading (to load the
-appropiate payload against the right hypervisor) there is a need
-to embbed an build-id dependency.
+appropriate payload against the right hypervisor) there is a need
+to embed a build-id dependency.
This is done by the payload containing sections `.livepatch.xen_depends`
and `.livepatch.depends` which follow the format of an ELF Note.
The contents of these (name, and description) are specific to the linker
-utilized to build the hypevisor and payload.
+utilized to build the hypervisor and payload.
If GNU linker is used then the name is `GNU` and the description
is a NT_GNU_BUILD_ID type ID. The description can be an SHA1
payload. It can be embedded into the ELF payload at creation time
and extracted by tools.
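
As a rough illustration (assuming a GNU toolchain, so the name is `GNU`, the
type is NT_GNU_BUILD_ID and the description is a 20-byte SHA1), the note
carried by these sections has the layout below. Real payloads get this section
from the build tooling rather than from hand-written code.

```c
#include <stdint.h>

/* Sketch of the ELF note carried by .livepatch.depends and
 * .livepatch.xen_depends when produced by the GNU toolchain. */
struct livepatch_build_id_note {
    uint32_t namesz;     /* 4: length of "GNU" including the NUL       */
    uint32_t descsz;     /* 20 for a SHA1 build-id                     */
    uint32_t type;       /* NT_GNU_BUILD_ID == 3                       */
    char     name[4];    /* "GNU", padded to a 4-byte boundary         */
    uint8_t  desc[20];   /* build-id of the binary this depends on     */
};
```
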
-The return value is zero if the payload was succesfully uploaded.
+The return value is zero if the payload was successfully uploaded.
Otherwise an -XEN_EXX return value is provided. Duplicate `name` are not supported.
The `payload` is the ELF payload as mentioned in the `Payload format` section.
* `cmd` The command requested:
* *LIVEPATCH_ACTION_UNLOAD* (1) Unload the payload.
Any further hypercalls against the `name` will result in failure unless
- **XEN_SYSCTL_LIVEPATCH_UPLOAD** hypercall is perfomed with same `name`.
+ **XEN_SYSCTL_LIVEPATCH_UPLOAD** hypercall is performed with the same `name`.
* *LIVEPATCH_ACTION_REVERT* (2) Revert the payload. If the operation takes
more time than the upper bound of time the `rc` in `xen_livepatch_status`
retrieved via **XEN_SYSCTL_LIVEPATCH_GET** will be -XEN_EBUSY.
This is a good spot because all Xen stacks are effectively empty at
that point.
-To randezvous all the CPUs an barrier with an maximum timeout (which
+To rendezvous all the CPUs, a barrier with a maximum timeout (which
could be adjusted), combined with forcing all other CPUs through the
hypervisor with IPIs, can be utilized to execute lockstep instructions
on all CPUs.
The design of that is not discussed in this design.
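
Purely to illustrate the idea of a bounded rendezvous (this is not the Xen
implementation; it counts spin iterations instead of reading a clock, and the
IPI side is omitted):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Each CPU calls this; returns true only if every CPU arrived before the
 * spin budget ran out, in which case they can proceed in lockstep. */
static bool rendezvous(atomic_int *pending, uint64_t max_spins)
{
    atomic_fetch_sub_explicit(pending, 1, memory_order_acq_rel); /* announce arrival */

    for ( uint64_t spins = 0;
          atomic_load_explicit(pending, memory_order_acquire) > 0;
          spins++ )
    {
        if ( spins > max_spins )
            return false;          /* timed out: abort and release everyone */
        /* a real implementation would issue a pause/relax hint here */
    }

    return true;
}
```
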
-This is implemented in a seperate tool which lives in a seperate
+This is implemented in a separate tool which lives in a separate
GIT repo.
Currently it resides at https://xenbits.xen.org/git-http/livepatch-build-tools.git
grows to accumulate all the code changes.
-* Hotpatch stack - where an mechanism exists that loads the hotpatches
-in the same order they were built in. We would need an build-id
+* Hotpatch stack - where a mechanism exists that loads the hotpatches
+in the same order they were built in. We would need a build-id
- of the hypevisor to make sure the hot-patches are build against the
+ of the hypervisor to make sure the hot-patches are built against the
correct build.
* Payload containing the old code to check against that. That allows
- the hotpatches to be loaded indepedently (if they don't overlap) - or
- if the old code also containst previously patched code - even if they
+ the hotpatches to be loaded independently (if they don't overlap) - or
+ if the old code also contains previously patched code - even if they
overlap.
The disadvantage of the first large patch is that it can grow over
-time and not provide an bisection mechanism to identify faulty patches.
+time and not provide a bisection mechanism to identify faulty patches.
-The hot-patch stack puts stricts requirements on the order of the patches
+The hot-patch stack puts strict requirements on the order of the patches
-being loaded and requires an hypervisor build-id to match against.
+being loaded and requires a hypervisor build-id to match against.
The old code allows much more flexibility and an additional guard,
Please note there is a small limitation for trampolines in
function entries: The target function (+ trailing padding) must be able
-to accomodate the trampoline. On x86 with +-2 GB relative jumps,
+to accommodate the trampoline. On x86 with +-2 GB relative jumps,
this means 5 bytes are required which means that `old_size` **MUST** be
at least five bytes if patching in trampoline.
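
The five-byte figure comes from the size of a near jump on x86: one opcode
byte plus a 32-bit displacement. A sketch of how such a trampoline could be
encoded, with illustrative naming:

```c
#include <stdint.h>
#include <string.h>

#define JMP_REL32_LEN 5                 /* 0xE9 opcode + 32-bit displacement */

/* Encode "jmp new_addr" to be written at old_addr; fails if the target is
 * outside the +-2 GB range reachable by a rel32 displacement. */
static int make_trampoline(uint8_t buf[JMP_REL32_LEN],
                           uint64_t old_addr, uint64_t new_addr)
{
    int64_t disp = (int64_t)(new_addr - (old_addr + JMP_REL32_LEN));
    int32_t rel32 = (int32_t)disp;

    if ( disp != rel32 )
        return -1;                      /* farther than +-2 GB away */

    buf[0] = 0xe9;                      /* JMP rel32 */
    memcpy(&buf[1], &rel32, sizeof(rel32));
    return 0;
}
```
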
# Background and Motivation
-At the Xen hackaton '16 networking session, we spoke about having a permanently
+At the Xen hackathon '16 networking session, we spoke about having a permanently
mapped region to describe header/linear region of packet buffers. This document
outlines the proposal covering motivation of this and applicability for other
use-cases alongside the necessary changes.
17) Allocate packet metadata
-[ *Linux specific*: This structure emcompasses a linear data region which
-generally accomodates the protocol header and such. Netback allocates up to 128
+[ *Linux specific*: This structure encompasses a linear data region which
+generally accommodates the protocol header and such. Netback allocates up to 128
bytes for that. ]
18) *Linux specific*: Setup up a `GNTTABOP_copy` to copy up to 128 bytes to this small
process the actual like the steps below. This thread has the purpose of
-aggregating as much copies as possible.]
+aggregating as many copies as possible.]
-2) Checks if there are enough rx ring slots that can accomodate the packet.
+2) Checks if there are enough rx ring slots that can accommodate the packet.
3) Gets a request from the ring for the first data slot and fetches the `gref`
from it.
24) Call packet into the network stack.
-25) Allocate new pages and any necessary packet metadata strutures to new
+25) Allocate new pages and any necessary packet metadata structures to new
requests. These requests will then be used in step 1) and so forth.
26) Update the request producer index (`req_prod`)
This proposal aims at replacing step 4), 12) and 22) with memcpy if the
grefs on the Rx ring were requested to be mapped by the guest. Frontend may use
-strategies to allow fast recycling of grants for replinishing the ring,
+strategies to allow fast recycling of grants for replenishing the ring,
hence letting Domain-0 replace the grant copies with memcpy instead, which is
faster.
transmit the packet on the transmit function (e.g. Linux ```ndo_start_xmit```)
as previously proposed
here\[[0](http://lists.xenproject.org/archives/html/xen-devel/2015-05/msg01504.html)\].
-This would heavily improve efficiency specifially for smaller packets. Which in
-return would decrease RTT, having data being acknoledged much quicker.
+This would heavily improve efficiency, specifically for smaller packets, which in
+turn would decrease RTT by having data acknowledged much quicker.
\clearpage
Control ring is only available after backend state is `XenbusConnected`
therefore only on this state change can the frontend query the total amount of
maps it can keep. It then grants N entries per queue on both TX and RX ring
-which will create the underying backend gref -> page association (e.g. stored
+which will create the underlying backend gref -> page association (e.g. stored
in hash table). Frontend may wish to recycle these pregranted buffers or choose
a copy approach to replace granting.
On steps 19) of Guest Transmit and 3) of Guest Receive, data gref is first
-looked up in this table and uses the underlying page if it already exists a
-mapping. On the successfull cases, steps 20) 21) and 27) of Guest Transmit are
+looked up in this table, and the underlying page is used if a mapping already
+exists. In the successful cases, steps 20) 21) and 27) of Guest Transmit are
skipped, with 19) being replaced with a memcpy of up to 128 bytes. On Guest
Receive, 4) 12) and 22) are replaced with memcpy instead of a grant copy.
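
A backend-side sketch of that decision follows, with `lookup_pregranted()` and
`queue_grant_copy()` as hypothetical helpers standing in for the hash-table
lookup and the existing grant-copy path; only the fast-path/slow-path choice is
the point of the example.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t grant_ref_t;

/* Hypothetical helpers: gref -> already-mapped page (or NULL), and the
 * existing GNTTABOP_copy slow path, respectively. */
void *lookup_pregranted(grant_ref_t gref);
void queue_grant_copy(grant_ref_t gref, size_t offset,
                      const void *src, size_t len);

/* Fill one receive slot: plain memcpy when the gref is pregranted and
 * already mapped, otherwise fall back to a grant copy as before. */
static void fill_rx_slot(grant_ref_t gref, size_t offset,
                         const void *src, size_t len)
{
    uint8_t *page = lookup_pregranted(gref);

    if ( page != NULL )
        memcpy(page + offset, src, len);
    else
        queue_grant_copy(gref, offset, src, len);
}
```
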