xen/arm: Enforce alignment check in debug build for {read, write}_atomic
author    Ayan Kumar Halder <ayankuma@amd.com>
Tue, 8 Nov 2022 09:45:03 +0000 (09:45 +0000)
committer Julien Grall <jgrall@amazon.com>
Tue, 6 Dec 2022 18:19:50 +0000 (18:19 +0000)
Xen provides helpers to atomically read/write memory (see {read,
write}_atomic()). Those helpers can only work if the address is aligned
to the size of the access (see B2.2.1 in ARM DDI 0487I.a).
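
For illustration only (not part of the commit): a minimal standalone
sketch of such a size-dispatched atomic read with a natural-alignment
check, using GCC's __atomic builtins rather than Xen's actual per-size
helpers:

    #include <assert.h>
    #include <stdint.h>

    /*
     * Sketch of a size-dispatched atomic read with a natural-alignment
     * check. GCC __atomic builtins stand in for Xen's per-size helpers.
     */
    static inline void read_atomic_size_sketch(const volatile void *p,
                                                void *res,
                                                unsigned int size)
    {
        /* The access is only single-copy atomic if p is aligned to its size. */
        assert(((uintptr_t)p & (size - 1)) == 0);

        switch ( size )
        {
        case 1:
            *(uint8_t *)res = __atomic_load_n((const volatile uint8_t *)p,
                                              __ATOMIC_RELAXED);
            break;
        case 2:
            *(uint16_t *)res = __atomic_load_n((const volatile uint16_t *)p,
                                               __ATOMIC_RELAXED);
            break;
        case 4:
            *(uint32_t *)res = __atomic_load_n((const volatile uint32_t *)p,
                                               __ATOMIC_RELAXED);
            break;
        case 8:
            *(uint64_t *)res = __atomic_load_n((const volatile uint64_t *)p,
                                               __ATOMIC_RELAXED);
            break;
        default:
            assert(0);
        }
    }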

On Arm32, the alignment is already enforced by the processor because the
HSCTLR.A bit is set (it enforces alignment for every access). On Arm64,
this bit is not set because memcpy()/memset() can use unaligned accesses
for performance reasons (the implementation is taken from the Cortex
library).

To avoid any overhead in production builds, the alignment will only be
checked using an ASSERT. Note that it might be possible to do it in
production builds using the acquire/exclusive versions of load/store,
but this is left to a follow-up (if wanted).
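
As a rough illustration of the debug-only nature of the check (this
sketch mirrors standard C's NDEBUG convention, not Xen's actual ASSERT()
definition), an assertion of this kind compiles to nothing in a release
build, so the fast path pays no cost:

    #include <stdint.h>

    /* Debug-only alignment check: active in debug builds, a no-op otherwise. */
    #ifndef NDEBUG
    #define CHECK_ALIGNED(p, size) \
        do { if ( ((uintptr_t)(p)) & ((size) - 1) ) __builtin_trap(); } while ( 0 )
    #else
    #define CHECK_ALIGNED(p, size) do { } while ( 0 )
    #endif

    static inline uint32_t read_u32_checked(const volatile uint32_t *p)
    {
        CHECK_ALIGNED(p, sizeof(*p));
        return __atomic_load_n(p, __ATOMIC_RELAXED);
    }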

Signed-off-by: Ayan Kumar Halder <ayan.kumar.halder@amd.com>
Signed-off-by: Julien Grall <julien@xen.org>
Reviewed-by: Michal Orzel <michal.orzel@amd.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
xen/arch/arm/include/asm/atomic.h

index 1f60c28b1bdfd3d4052bef8f10fb089ff1f6b559..64314d59b3b0d841408dec07df51340ad4bbee24 100644 (file)
@@ -78,6 +78,7 @@ static always_inline void read_atomic_size(const volatile void *p,
                                            void *res,
                                            unsigned int size)
 {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
     switch ( size )
     {
     case 1:
@@ -102,6 +103,7 @@ static always_inline void write_atomic_size(volatile void *p,
                                             void *val,
                                             unsigned int size)
 {
+    ASSERT(IS_ALIGNED((vaddr_t)p, size));
     switch ( size )
     {
     case 1: