Reviewed-by: Harsh Prateek Bora <harshpb@linux.ibm.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515195709.63843-6-quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
migration: process_incoming_migration_co(): move colo part to colo
Let's make a better public interface for COLO: instead of exposing
colo_process_incoming_thread and the non-trivial logic around creating
the thread, provide a simple colo_incoming_co() that hides the
implementation from generic code.
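A rough sketch of the intended shape (the signature and call site here
are illustrative, not a verbatim copy of the patch):

    /* colo.c (sketch): one entry point hides thread creation and the
     * COLO-specific yield from generic migration code */
    int coroutine_fn colo_incoming_co(void);

    /* process_incoming_migration_co() (sketch of the call site): */
    if (migration_incoming_colo_enabled()) {
        if (colo_incoming_co() < 0) {
            goto fail;
        }
    }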
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515130640.46035-4-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
Originally, migration_incoming_co was introduced by 25d0c16f625feb3b6
"migration: Switch to COLO process after finishing loadvm" so that COLO
code could enter the incoming coroutine at one specific yield point
added by that same commit.
Later in 923709896b1b0
"migration: poll the cm event for destination qemu"
we reused this variable to wake the migration incoming coroutine from
RDMA code.
That was a doubtful idea. Entering coroutines is a very fragile thing:
you must be absolutely sure which yield point you are going to enter.
It is unclear how safe it is to enter during qemu_loadvm_state(), which
I think is what RDMA wants to do, but RDMA certainly should not enter
the special COLO-related yield point. Likewise, COLO code does not want
to enter during qemu_loadvm_state(); it wants to enter its own specific
yield point.
Also, when commit 8e48ac95865ac97d "COLO: Add block replication into
colo process" added the bdrv_invalidate_cache_all() call (now called
bdrv_activate_all()), it became possible to enter the migration
incoming coroutine during that call, which is wrong too.
So, let's make these things separate and disjoint: loadvm_co for RDMA,
non-NULL during qemu_loadvm_state(), and colo_incoming_co for COLO,
non-NULL only around its specific yield point.
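In data-structure terms the split looks roughly like this (a sketch of
MigrationIncomingState; field placement as described above, not a
verbatim copy of the patch):

    struct MigrationIncomingState {
        /* ... */
        /* Only non-NULL while qemu_loadvm_state() runs; RDMA may wake it. */
        Coroutine *loadvm_co;
        /* Only non-NULL around the COLO-specific yield; COLO wakes it. */
        Coroutine *colo_incoming_co;
        /* ... */
    };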
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Juan Quintela <quintela@redhat.com>
Message-Id: <20230515130640.46035-3-vsementsov@yandex-team.ru> Signed-off-by: Juan Quintela <quintela@redhat.com>
Merge tag 'pull-target-arm-20230518' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* Fix vd == vm overlap in sve_ldff1_z
* Add support for MTE with KVM guests
* Add RAZ/WI handling for DBGDTR[TX|RX]
* Start of conversion of A64 decoder to decodetree
* Saturate L2CTLR_EL1 core count field rather than overflowing
* vexpress: Avoid trivial memory leak of 'flashalias'
* sbsa-ref: switch default cpu core to Neoverse-N1
* sbsa-ref: use Bochs graphics card instead of VGA
* MAINTAINERS: Add Marcin Juszkiewicz to sbsa-ref reviewer list
* docs: Convert u2f.txt to rST
Peter Maydell [Fri, 21 Apr 2023 16:37:34 +0000 (17:37 +0100)]
docs: Convert u2f.txt to rST
Convert the u2f.txt file to rST, and place it in the right place
in our manual layout. The old text didn't fit very well into our
manual style, so the new version ends up looking like a rewrite,
although some of the original text is preserved:
* the 'building' section of the old file is removed, since we
generally assume that users have already built QEMU
* some rather verbose text has been cut back
* document the passthrough device first, on the assumption
that's most likely to be of interest to users
* cut back on the duplication of text between sections
* format example command lines etc with rST
As it's a short document it seemed simplest to do this all
in one go rather than try to do a minimal syntactic conversion
and then clean up the wording and layout.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20230421163734.1152076-1-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 17:02:23 +0000 (18:02 +0100)]
hw/arm/vexpress: Avoid trivial memory leak of 'flashalias'
In the vexpress board code, we allocate a new MemoryRegion at the top
of vexpress_common_init() but only set it up and use it inside the
"if (map[VE_NORFLASHALIAS] != -1)" conditional, so we leak it when that
condition is false.
This isn't a very interesting leak as it's a tiny amount of memory
once at startup, but it's easy to fix.
We could silence Coverity simply by moving the g_new() into the
if() block, but this use of g_new(MemoryRegion, 1) is a legacy from
when this board model was originally written; we wouldn't do that
if we wrote it today. The MemoryRegions are conceptually a part of
the board and must not go away until the whole board is done with
(at the end of the simulation), so they belong in its state struct.
This machine already has a VexpressMachineState struct that extends
MachineState, so statically put the MemoryRegions in there instead of
dynamically allocating them separately at runtime.
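The shape of the change, as a sketch (field names here are
illustrative; the real struct lives in hw/arm/vexpress.c):

    struct VexpressMachineState {
        MachineState parent_obj;
        /* ... */
        /* previously allocated with g_new(MemoryRegion, 1): */
        MemoryRegion flashalias;
    };

    /* vexpress_common_init() then uses &vms->flashalias, still only
     * initializing it inside the map[VE_NORFLASHALIAS] != -1 branch. */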
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230512170223.3801643-3-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 17:02:22 +0000 (18:02 +0100)]
target/arm: Saturate L2CTLR_EL1 core count field rather than overflowing
The IMPDEF sysreg L2CTLR_EL1 found on the Cortex-A35, A53, A57, A72
and which we (arguably dubiously) also provide in '-cpu max' has a
2-bit field for the number of processors in the cluster. On real
hardware this must be sufficient because it can only be configured
with up to 4 CPUs in the cluster. However, on QEMU, if the board code
does not explicitly configure the CPUs into clusters with the right
CPU count, we default to "give the value assuming that all CPUs in
the system are in a single cluster", which might be too big to fit
in the field.
Instead of just overflowing this 2-bit field, saturate to 3 (meaning
"4 CPUs"), so at least we don't overwrite other fields in the register.
It's unlikely that any guest code really cares about the value in
this field; at least, if it does it probably also wants the system
to be more closely matching real hardware, i.e. not to have more
than 4 CPUs.
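The fix amounts to clamping the value written into that field, roughly
(a sketch; 'num_cpus' and 'value' are illustrative names, and bits
[25:24] is the L2CTLR_EL1 processor-count field):

    /* saturate rather than overflow the 2-bit "CPUs in cluster" field */
    unsigned field = MIN(num_cpus - 1, 3);   /* 3 encodes "4 CPUs" */
    value |= (uint64_t)field << 24;          /* bits [25:24] of L2CTLR_EL1 */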
This issue has been present since the L2CTLR was first added in
commit 377a44ec8f2fac5b back in 2014. It was only noticed because
Coverity complains (CID 1509227) that the shift might overflow 32 bits
and inadvertently sign extend into the top half of the 64 bit value.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512170223.3801643-2-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 14:41:06 +0000 (15:41 +0100)]
target/arm: Convert ERET, ERETAA, ERETAB to decodetree
Convert the exception-return insns ERET, ERETAA and ERETAB to
decodetree. These were the last insns left in the legacy
decoder function disas_uncond_b_reg(), which allows us to
remove it.
The old decoder explicitly decoded the DRPS instruction,
only in order to call unallocated_encoding() on it, exactly
as would have happened if it hadn't decoded it. This is
because this insn always UNDEFs unless the CPU is in
halting-debug state, which we don't emulate. So we list
the pattern in a comment in a64.decode, but don't actively
decode it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-21-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 14:41:05 +0000 (15:41 +0100)]
target/arm: Convert BRAA, BRAB, BLRAA, BLRAB to decodetree
Convert the last four BR-with-pointer-auth insns to decodetree.
The remaining cases in the outer switch in disas_uncond_b_reg()
all return early rather than leaving the case statement, so we
can delete the now-unused code at the end of that function.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-20-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 14:41:04 +0000 (15:41 +0100)]
target/arm: Convert BRA[AB]Z, BLR[AB]Z, RETA[AB] to decodetree
Convert the single-register pointer-authentication variants of BR,
BLR, RET to decodetree. (BRAA/BLRAA are in a different branch of
the legacy decoder and will be dealt with in the next commit.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-19-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 14:41:03 +0000 (15:41 +0100)]
target/arm: Convert BR, BLR, RET to decodetree
Convert the simple (non-pointer-auth) BR, BLR and RET insns
to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-18-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 14:41:02 +0000 (15:41 +0100)]
target/arm: Convert conditional branch insns to decodetree
Convert the immediate conditional branch insn B.cond to
decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-17-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 14:41:01 +0000 (15:41 +0100)]
target/arm: Convert TBZ, TBNZ to decodetree
Convert the test-and-branch-immediate insns TBZ and TBNZ
to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-16-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 14:41:00 +0000 (15:41 +0100)]
target/arm: Convert CBZ, CBNZ to decodetree
Convert the compare-and-branch-immediate insns CBZ and CBNZ
to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-15-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 14:40:59 +0000 (15:40 +0100)]
target/arm: Convert unconditional branch immediate to decodetree
Convert the unconditional branch immediate insns B and BL to
decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-14-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 14:40:58 +0000 (15:40 +0100)]
target/arm: Convert Extract instructions to decodetree
Convert the EXTR instruction to decodetree (this is the
only one in the 'Extract' class). This is the last of
the dp-immediate insns in the legacy decoder, so we
can now remove disas_data_proc_imm().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-13-peter.maydell@linaro.org
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230512144106.3608981-12-peter.maydell@linaro.org
[PMM: Rebased] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
target/arm: Convert Move wide (immediate) to decodetree
Convert the MOVN, MOVZ, MOVK instructions.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230512144106.3608981-11-peter.maydell@linaro.org
[PMM: Rebased] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
target/arm: Convert Logical (immediate) to decodetree
Convert the AND, ORR, EOR, ANDS (immediate) instructions.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230512144106.3608981-10-peter.maydell@linaro.org
[PMM: rebased] Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
target/arm: Replace bitmask64 with MAKE_64BIT_MASK
Use the bitops.h macro rather than rolling our own here.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230512144106.3608981-9-peter.maydell@linaro.org
target/arm: Convert Add/subtract (immediate with tags) to decodetree
Convert the ADDG and SUBG (immediate) instructions.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230512144106.3608981-8-peter.maydell@linaro.org
[PMM: Rebased; use TRANS_FEAT()] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
target/arm: Convert Add/subtract (immediate) to decodetree
Convert the ADD and SUB (immediate) instructions.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230512144106.3608981-7-peter.maydell@linaro.org
[PMM: Rebased; adjusted to use translate.h's TRANS macro] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Split out specific 32-bit and 64-bit functions.
These carry the same signature as tcg_gen_add_i64,
and so will be easier to pass as callbacks.
Retain gen_add_CC and gen_sub_CC during conversion.
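Roughly, the idea (a sketch only; names approximate translate-a64.c,
and the 32-bit variants additionally keep their result zero-extended):

    /* per-width helpers matching tcg_gen_add_i64's signature, so a
     * decodetree trans_* function can take one as a callback */
    typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64);

    static void gen_add64(TCGv_i64 dst, TCGv_i64 n, TCGv_i64 m)
    {
        tcg_gen_add_i64(dst, n, m);
    }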
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230512144106.3608981-6-peter.maydell@linaro.org
[PMM: rebased] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
target/arm: Convert PC-rel addressing to decodetree
Convert the ADR and ADRP instructions.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230512144106.3608981-5-peter.maydell@linaro.org
[PMM: Rebased] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter Maydell [Fri, 12 May 2023 14:40:49 +0000 (15:40 +0100)]
target/arm: Pull calls to disas_sve() and disas_sme() out of legacy decoder
The SVE and SME decode is already done by decodetree. Pull the calls
to these decoders out of the legacy decoder. This doesn't change
behaviour because all the patterns in sve.decode and sme.decode
already require the bits that the legacy decoder is decoding to have
the correct values.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-4-peter.maydell@linaro.org
Peter Maydell [Fri, 12 May 2023 14:40:48 +0000 (15:40 +0100)]
target/arm: Create decodetree skeleton for A64
The A64 translator uses a hand-written decoder for everything except
SVE or SME. It's fairly well structured, but it's becoming obvious
that it's still more painful to add instructions to than the A32
translator, because putting a new instruction into the right place in
a hand-written decoder is much harder than adding new instruction
patterns to a decodetree file.
As the first step in conversion to decodetree, create the skeleton of
the decodetree decoder; where it does not handle instructions we will
fall back to the legacy decoder (which will be for everything at the
moment, since there are no patterns in a64.decode).
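The resulting dispatch in the translator is roughly (a sketch; the
generated decoder entry point follows decodetree naming conventions):

    /* try the (initially empty) decodetree decoder first, then fall
     * back to the hand-written legacy decoder */
    if (!disas_a64(s, insn)) {
        disas_a64_legacy(s, insn);
    }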
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230512144106.3608981-3-peter.maydell@linaro.org
Split out all of the decode stuff from aarch64_tr_translate_insn.
Call it disas_a64_legacy to indicate it will be replaced.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20230512144106.3608981-2-peter.maydell@linaro.org
[PMM: Rebased] Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Alex Bennée [Tue, 16 May 2023 10:44:20 +0000 (11:44 +0100)]
target/arm: add RAZ/WI handling for DBGDTR[TX|RX]
Commit b3aa2f2128 (target/arm: provide stubs for more external
debug registers) was added to handle Hyper-V's unconditional usage of
the Debug Communications Channel. It turns out that Linux will
similarly break if you enable CONFIG_HVC_DCC "ARM JTAG DCC console".
Extend the set of registers we RAZ/WI to avoid this.
Cc: Anders Roxell <anders.roxell@linaro.org> Cc: Evgeny Iakovlev <eiakovlev@linux.microsoft.com> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230516104420.407912-1-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Extend the 'mte' property for the virt machine to cover KVM as
well. For KVM, we don't allocate tag memory, but instead enable the
capability.
If MTE has been enabled, we need to disable migration, as we do not
yet have a way to migrate the tags as well. Therefore, MTE will stay
off with KVM unless requested explicitly.
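For example, assuming a host kernel that exposes the MTE capability to
KVM, MTE can be requested explicitly with something along the lines of:

    qemu-system-aarch64 -enable-kvm -machine virt,mte=on -cpu host ...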
Signed-off-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20230428095533.21747-2-cohuck@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
At Linaro I work on sbsa-ref and know the direction it is going,
though I may not get into the code details each time.
Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20230515143753.365591-1-marcin.juszkiewicz@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Two type hints fail centos-stream-8-x86_64 CI. They are actually
broken. Changing them to Optional[re.Match[str]] fixes them locally
for me, but then CI fails differently. Drop them for now.
Fixes: 3e32dca3f0d1 (qapi: Rewrite parsing of doc comment section symbols and tags) Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20230517061600.1782455-1-armbru@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Igor Mammedov <imammedo@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Michael Tokarev [Sun, 9 Apr 2023 10:53:27 +0000 (13:53 +0300)]
linux-user: fix getgroups/setgroups allocations
linux-user getgroups(), setgroups(), getgroups32() and setgroups32()
used alloca() to allocate grouplist arrays, with an unchecked
gidsetsize coming from the "guest". With NGROUPS_MAX being 65536 on
Linux (and it is common for an application to allocate NGROUPS_MAX
entries for getgroups()), a typical allocation is half a megabyte on
the stack, which simply overflows the stack and leads to an immediate
SIGSEGV in the actual system getgroups() implementation.
An example of such an issue is aptitude, e.g.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=811087#72
Cap gidsetsize at NGROUPS_MAX (return EINVAL if it is larger than
that), and use heap allocation for grouplist instead of alloca(). While
at it, fix coding style and make all four implementations identical.
Try not to impose arbitrary limits - for example, allow gidsetsize to
be negative for getgroups() - just do not allocate a negative-sized
grouplist in that case but still do the actual getgroups() call. But do
not allow a negative gidsetsize for setgroups(), since its argument is
unsigned.
Capping at NGROUPS_MAX seems a bit arbitrary - we could allow more,
since it is not an error if the set size is NGROUPS_MAX+1 - but we
should not allow integer overflow for the array being allocated. Maybe
it is enough to just call g_try_new() and return ENOMEM if it fails.
Maybe there is also no need to convert setgroups(), since its set is
usually smaller and known beforehand (KERN_NGROUPS_MAX is actually 63 -
apparently a kernel-imposed limit for the runtime group set).
The patch fixes the aptitude segfault mentioned above.
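A minimal sketch of the getgroups() side of the change (variable names
follow linux-user/syscall.c conventions; the guest copy-out step is
elided and this is not the verbatim merged code):

    case TARGET_NR_getgroups:
    {
        int gidsetsize = arg1;
        g_autofree gid_t *grouplist = NULL;

        if (gidsetsize > NGROUPS_MAX) {
            return -TARGET_EINVAL;       /* cap instead of a huge alloca() */
        }
        if (gidsetsize > 0) {
            grouplist = g_try_new(gid_t, gidsetsize);
            if (!grouplist) {
                return -TARGET_ENOMEM;
            }
        }
        ret = get_errno(getgroups(gidsetsize, grouplist));
        /* on success, convert and copy grouplist out to the guest buffer */
        return ret;
    }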
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Message-Id: <20230409105327.1273372-1-mjt@msgid.tls.msk.ru> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
If a program requires fr1, we should set the FR bit of the CP0 Status
control register and add the F64 hardware flag. The corresponding
`else if` branch is copied from the Linux kernel sources (see the
`arch_check_elf` function in linux/arch/mips/kernel/elf.c).
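A sketch of the intent (the condition and exact field names are
illustrative; see arch_check_elf() in the kernel for the real logic):

    } else if (prog_requires_fr1) {
        env->CP0_Status |= (1 << CP0St_FR);   /* run with FR=1 */
        env->hflags |= MIPS_HFLAG_F64;        /* 64-bit FPU register mode */
    }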
Thomas Weißschuh [Sat, 22 Apr 2023 10:03:14 +0000 (12:03 +0200)]
linux-user: Don't require PROT_READ for mincore
The kernel does not require PROT_READ for addresses passed to mincore.
For example the fincore(1) tool from util-linux uses PROT_NONE and
currently does not work under qemu-user.
Thomas Weißschuh [Mon, 24 Apr 2023 15:34:29 +0000 (17:34 +0200)]
linux-user: Add open_tree() syscall
Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de> Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-Id: <20230424153429.276788-2-thomas@t-8ch.de>
[lv: move declaration at the beginning of the block,
define syscall] Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Thomas Weißschuh [Wed, 26 Apr 2023 07:06:59 +0000 (09:06 +0200)]
linux-user: report ENOTTY for unknown ioctls
The correct error number for unknown ioctls is ENOTTY.
ENOSYS would mean that the ioctl() syscall itself is not implemented,
which is very improbable and unexpected for userspace.
ENOTTY means "Inappropriate ioctl for device". This is what the kernel
returns on unknown ioctls, what qemu is trying to express and what
userspace is prepared to handle.
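Conceptually the change in linux-user's ioctl dispatcher is just
(sketch, diff-style):

    /* do_ioctl(), when no handler matches the request */
    -    return -TARGET_ENOSYS;
    +    return -TARGET_ENOTTY;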
Signed-off-by: Thomas Weißschuh <thomas@t-8ch.de> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-Id: <20230426070659.80649-1-thomas@t-8ch.de> Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Afonso Bordado [Sun, 5 Mar 2023 14:34:37 +0000 (14:34 +0000)]
linux-user: Emulate /proc/cpuinfo output for riscv
RISC-V does not expose all extensions via hwcaps, so some userspace
applications may want to query them via /proc/cpuinfo.
Currently, when querying this file, the host's file is shown instead,
which is slightly confusing. Emulate a basic /proc/cpuinfo file
with MMU info and an ISA string.
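The emulated file looks roughly like this (illustrative only; the ISA
string and MMU mode depend on the configured CPU):

    processor       : 0
    hart            : 0
    isa             : rv64imafdc
    mmu             : sv48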
Signed-off-by: Afonso Bordado <afonsobordado@gmail.com> Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com> Acked-by: Palmer Dabbelt <palmer@rivosinc.com> Reviewed-by: Laurent Vivier <laurent@vivier.eu> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com>
Message-Id: <167873059442.9885.15152085316575248452-0@git.sr.ht>
[lv: removed the test that fails in CI for unknown reason] Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Disconnect guest page size from TCG compilation.
While this could be done via exec/target_page.h, we want to cache
the value across multiple memory access operations, so we might
as well initialize this early.
The changes within tcg/ are entirely mechanical:
sed -i s/TARGET_PAGE_BITS/s->page_bits/g
sed -i s/TARGET_PAGE_MASK/s->page_mask/g
Reviewed-by: Anton Johansson <anjo@rev.ng> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Because of its use in tgen_arithi, this value must be a signed
32-bit quantity, as that is what may be encoded in the insn.
The truncation of the value to unsigned for 32-bit guests is
done via the REX bit via 'trexw'.
Removes the only uses of target_ulong from this tcg backend.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Since TCG_TYPE_I32 values are kept zero-extended in registers, via
omission of the REXW bit, we need not extend if the register matches.
This is already relied upon by qemu_{ld,st}.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We now have the address size as part of the opcode, so
we no longer need to test TARGET_LONG_BITS. We can use
uint64_t for target_ulong, as passed into load/store helpers.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
tcg: Split INDEX_op_qemu_{ld,st}* for guest address size
For 32-bit hosts, we cannot simply rely on TCGContext.addr_bits,
as we need one or two host registers to represent the guest address.
Create the new opcodes and update all users. Since we have not
yet eliminated TARGET_LONG_BITS, only one of the two opcodes will
ever be used, so we can get away with treating them the same in
the backends.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Expand from TCGv to TCGTemp inline in the translators,
and validate that the size matches tcg_ctx->addr_type.
These inlines will eventually be seen only by target-specific code.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
We already pass uint64_t to restore_state_to_opc; this changes all
of the other uses from insn_start through the encoding to decoding.
Reviewed-by: Anton Johansson <anjo@rev.ng> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
No change to the ultimate load/store routines yet, so some atomicity
conditions not yet honored, but plumbs the change to alignment through
the relevant functions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
No change to the ultimate load/store routines yet, so some atomicity
conditions not yet honored, but plumbs the change to alignment through
the relevant functions.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
With x86_64 as host, we do not have any temporaries with which to
resolve cycles, but we do have xchg. As a side bonus, the set of
graphs that can be made with 3 nodes and all nodes conflicting is
small: two. We can solve the cycle with a single temp.
This is required for x86_64 to handle stores of i128: 1 address
register and 2 data registers.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Add opcodes for backend support for 128-bit memory operations.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
The system is required to emulate unaligned accesses, even if the
hardware does not support it. The resulting trap may or may not
be more efficient than the qemu slow path. There are linux kernel
patches in flight to allow userspace to query hardware support;
we can re-evaluate whether to enable this by default after that.
In the meantime, softmmu now matches useronly, where we already
assumed that unaligned accesses are supported.
Reviewed-by: LIU Zhiwei <zhiwei_liu@linux.alibaba.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Drop the target-specific trampolines for the standard slow path.
This lets us use tcg_out_helper_{ld,st}_args, and handles the new
atomicity bits within MemOp.
At the same time, use the full load/store helpers for user-only mode.
Drop inline unaligned access support for user-only mode, as it does
not handle atomicity.
Use TCG_REG_T[1-3] in the tlb lookup, instead of TCG_REG_O[0-2].
This allows the constraints to be simplified.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>