qemu makes it possible to disable the link at the tap level, which is not
communicated to the guest but causes all packets to be dropped.
When vhost-net is enabled, vhost needs to be aware of both the virtio
link_down and the peer link_down. We switch to userspace emulation when
either is down.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reported-by: pradeep <psuriset@linux.vnet.ibm.com> Acked-by: Alex Williamson <alex.williamson@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
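The decision described above boils down to a simple condition. The following standalone sketch (not the actual patch) illustrates it; the structure and the vhost_start/vhost_stop helpers are hypothetical stand-ins:

    /* Minimal sketch: use vhost-net only while both the virtio link and the
     * peer (tap) link are up; otherwise fall back to userspace emulation. */
    #include <stdbool.h>

    struct nic_state {
        bool virtio_link_up;   /* guest-visible virtio link status */
        bool peer_link_up;     /* backend (tap) link status */
        bool vhost_running;
    };

    /* hypothetical helpers standing in for the real start/stop paths */
    static void vhost_start(struct nic_state *s) { s->vhost_running = true; }
    static void vhost_stop(struct nic_state *s)  { s->vhost_running = false; }

    static void update_datapath(struct nic_state *s)
    {
        bool want_vhost = s->virtio_link_up && s->peer_link_up;

        if (want_vhost && !s->vhost_running) {
            vhost_start(s);        /* both links up: hand the queues to vhost */
        } else if (!want_vhost && s->vhost_running) {
            vhost_stop(s);         /* either link down: userspace emulation */
        }
    }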
qemu makes it possible to disable the link at the tap level, which is not
communicated to the guest but causes all packets to be dropped.
This works for virtio userspace, as qemu stops giving it packets, but
not for virtio-net connected to vhost-net as that does not get notified
about this change.
Notify peer when this happens, which will then be used by the follow-up
patch to stop/start vhost-net.
Note: it might be a good idea to make the peer link status match the tap's
in this case, so that the guest gets an event and updates its carrier state.
For now, stay bug-for-bug compatible with what we used to have in userspace.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reported-by: pradeep <psuriset@linux.vnet.ibm.com> Acked-by: Alex Williamson <alex.williamson@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
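A sketch of the notification flow described above, with illustrative types; the link_status_changed callback is a hypothetical stand-in for the real peer hook:

    #include <stdbool.h>
    #include <stddef.h>

    struct netdev {
        bool link_down;
        struct netdev *peer;
        void (*link_status_changed)(struct netdev *nd);  /* optional hook */
    };

    static void set_link(struct netdev *nd, bool up)
    {
        nd->link_down = !up;
        /* New behaviour: also tell the peer, so a vhost-backed virtio-net
         * device learns that the tap will now drop everything. */
        if (nd->peer != NULL && nd->peer->link_status_changed != NULL) {
            nd->peer->link_status_changed(nd->peer);
        }
    }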
Peter Maydell [Tue, 15 Feb 2011 13:44:49 +0000 (13:44 +0000)]
target-arm: Fix unsigned VQRSHL by large shift counts
Correctly handle VQRSHL of unsigned values by a shift count of the
width of the data type or larger, which must be special-cased in the
qrshl_u* helper functions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
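To illustrate the special-casing, here is a hedged 8-bit sketch of an unsigned saturating rounding shift; it follows the behaviour described above but is not the actual qrshl_u* helper:

    #include <stdint.h>
    #include <stdbool.h>

    static uint8_t qrshl_u8_sketch(uint8_t val, int8_t shift, bool *sat)
    {
        if (shift >= 8) {                  /* left shift by >= element width */
            if (val) {
                *sat = true;               /* any set bit overflows */
                return UINT8_MAX;
            }
            return 0;
        }
        if (shift <= -8) {                 /* right shift by >= element width */
            /* a rounding right shift by exactly the width keeps only the
             * carry out of the top bit; anything larger yields zero */
            return (shift == -8) ? (uint8_t)(val >> 7) : 0;
        }
        if (shift < 0) {                   /* rounding right shift */
            return (uint8_t)((val + (1u << (-shift - 1))) >> -shift);
        }
        uint16_t wide = (uint16_t)(val << shift);   /* saturating left shift */
        if (wide > UINT8_MAX) {
            *sat = true;
            return UINT8_MAX;
        }
        return (uint8_t)wide;
    }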
Peter Maydell [Tue, 15 Feb 2011 13:44:48 +0000 (13:44 +0000)]
target-arm: Fix signed VQRSHL by large shift counts
Handle the case of signed VQRSHL by a shift count of the width of the
data type or larger, which must be special-cased in the qrshl_s*
helper functions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Peter Maydell [Tue, 15 Feb 2011 13:44:42 +0000 (13:44 +0000)]
target-arm: Fix signed VRSHL by large shift counts
Correctly handle VRSHL of signed values by a shift count of the
width of the data type or larger, which must be special-cased in the
rshl_s* helper functions.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Christophe Lyon [Tue, 15 Feb 2011 13:44:41 +0000 (13:44 +0000)]
target-arm: Fix rounding constant addition for Neon shifts
Handle cases where adding the rounding constant could overflow in Neon
shift instructions: VRSHR, VRSRA, VQRSHRN, VQRSHRUN, VRSHRN.
Signed-off-by: Christophe Lyon <christophe.lyon@st.com>
[peter.maydell@linaro.org: fix handling of large shifts in rshl_s32,
calculate signed saturated value as other functions do.] Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
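The overflow issue is easiest to see on a concrete element size. A hedged sketch for a 32-bit rounding right shift (a hypothetical helper, not the patched QEMU code):

    #include <stdint.h>

    /* Rounding right shift: add 1 << (shift - 1) before shifting.  When
     * shift equals the element width, both the addition and the shift can
     * overflow the element type, so either widen or special-case it. */
    static uint32_t rshr_round_u32(uint32_t val, unsigned shift)   /* 1..32 */
    {
        if (shift == 32) {
            /* (val + (1u << 31)) >> 32 is undefined in 32 bits; the result
             * is just the rounding carry out of bit 31. */
            return val >> 31;
        }
        uint64_t wide = (uint64_t)val + (1ull << (shift - 1));
        return (uint32_t)(wide >> shift);
    }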
Peter Maydell [Mon, 14 Feb 2011 10:22:49 +0000 (10:22 +0000)]
target-arm: Move Neon VZIP to helper functions
Move the implementation of the Neon VZIP zip instruction from inline
code to helper functions. (At 50+ TCG ops it was well over the
recommended limit for coding inline.) The helper implementations also
give the correct answers where the inline implementation did not.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Peter Maydell [Mon, 14 Feb 2011 10:22:48 +0000 (10:22 +0000)]
target-arm: Move Neon VUZP to helper functions
Move the implementation of the Neon VUZP unzip instruction from inline
code to helper functions. (At 50+ TCG ops it was well over the
recommended limit for coding inline.) The helper implementations also
fix the handling of the quadword version of the instruction.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Juha Riihimäki [Fri, 11 Feb 2011 13:35:25 +0000 (13:35 +0000)]
target-arm: Correct conversion of Thumb Neon dp encodings into ARM
We handle Thumb Neon data processing instructions by converting them
into the equivalent ARM encoding, as the two are very close. However
the ARM encoding should have bit 28 set, not clear. This wasn't causing
any problems because we don't actually look at that bit during decode;
however it is better to do the conversion correctly to avoid problems
later if we add checks to UNDEF on SBZ/SBO bits.
Signed-off-by: Juha Riihimäki <juha.riihimaki@nokia.com> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Since configure guarantees us that we have pthreads on all hosts
except mingw (which doesn't support a USER_ONLY config), we can
and should use the pthread_mutex-based implementation of spin_lock()
and spin_unlock() in all USER_ONLY cases. This means that all the
inline-native-assembly code supporting the "USER_ONLY but not USE_NPTL"
case can go away.
The not-USER_ONLY case remains as empty implementations; there is
no change in behaviour here.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
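The resulting USER_ONLY implementation has roughly this shape (a sketch following the description above; the type and macro names mirror QEMU's old spinlock interface but are shown here in isolation):

    #include <pthread.h>

    typedef pthread_mutex_t spinlock_t;
    #define SPIN_LOCK_UNLOCKED PTHREAD_MUTEX_INITIALIZER

    static inline void spin_lock(spinlock_t *lock)
    {
        pthread_mutex_lock(lock);     /* replaces per-host inline assembly */
    }

    static inline void spin_unlock(spinlock_t *lock)
    {
        pthread_mutex_unlock(lock);
    }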
The spec says: Any descriptor with a non-zero status byte has been
processed by the hardware, and is ready to be handled by the software.
Thus, once we change a descriptor status to non-zero, we should
never move the head backwards and try to reuse that
descriptor on the hardware side.
This actually happened with a multi-buffer packet
that arrived when we didn't have enough buffers.
Fix by checking that we have enough buffers upfront
so we never need to discard the packet midway through.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Alex Williamson <alex.williamson@redhat.com> Acked-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
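A sketch of the up-front check (illustrative structures and names, not the e1000 code): count how many descriptors a multi-buffer packet will need and refuse it before any status byte is written:

    #include <stdbool.h>
    #include <stddef.h>

    struct rx_ring {
        size_t free_descriptors;   /* descriptors the guest has made available */
        size_t buf_size;           /* per-descriptor buffer size */
    };

    static bool has_enough_rxbufs(const struct rx_ring *r, size_t packet_len)
    {
        /* number of descriptors a multi-buffer packet will consume */
        size_t needed = (packet_len + r->buf_size - 1) / r->buf_size;

        return needed <= r->free_descriptors;
    }

    /* Caller: if this returns false, drop the packet before touching any
     * descriptor, so the head never has to move backwards. */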
The e1000 spec says: if software statically allocates
buffers, and uses memory read to check for completed descriptors, it
simply has to zero the status byte in the descriptor to make it ready
for reuse by hardware. This is not a hardware requirement (moving the
hardware tail pointer is), but is necessary for performing an in-memory
scan.
Thus the guest does not have to clear the status byte. In case it
doesn't, we need to clear EOP for all descriptors
except the last. While I don't know of any such guests,
it's probably a good idea to stick to the spec.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reported-by: Juan Quintela <quintela@redhat.com> Acked-by: Alex Williamson <alex.williamson@redhat.com> Acked-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
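The EOP handling can be sketched like this (simplified descriptor layout; the status bit values mirror the e1000 ones, the rest is illustrative):

    #include <stdint.h>
    #include <stddef.h>

    #define RXD_STAT_DD  0x01   /* descriptor done */
    #define RXD_STAT_EOP 0x02   /* end of packet */

    struct rx_desc {
        uint8_t status;
    };

    static void finish_packet(struct rx_desc *desc, size_t n_used)
    {
        for (size_t i = 0; i < n_used; i++) {
            uint8_t status = RXD_STAT_DD;
            if (i == n_used - 1) {
                status |= RXD_STAT_EOP;  /* only the last buffer ends the packet */
            }
            /* overwrite whatever stale value the guest left in there */
            desc[i].status = status;
        }
    }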
e1000 supports multi-buffer packets larger than rxbuf_size.
This fixes the following (on Linux):
- in guest: ifconfig eth1 mtu 16110
- in host: ifconfig tap0 mtu 16110
ping -s 16082 <guest-ip>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com> Acked-by: Alex Williamson <alex.williamson@redhat.com> Acked-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Aurelien Jarno [Wed, 9 Feb 2011 18:35:50 +0000 (19:35 +0100)]
target-i386: set target_phys_bits to 64
qemu i386 used to support more than 4GB of RAM through PAE, but this
support has been disabled for an unknown reason. Reenable it.
Note that simply running qemu x86_64 and emulating a 32-bit CPU is not
a solution to this problem, as it is about 15% slower (it needs to
emulate 64-bit registers even though half of each register is unused). On the
other hand, I haven't seen any measurable impact from switching
target_phys_bits to 64.
Aurelien Jarno [Sun, 20 Feb 2011 13:47:48 +0000 (14:47 +0100)]
Merge branch 'linux-user-for-upstream' of git://gitorious.org/qemu-maemo/qemu
* 'linux-user-for-upstream' of git://gitorious.org/qemu-maemo/qemu:
linux-user: correct core dump format
linux-user: Define target alignment size
linux-user: Support the epoll syscalls
linux-user: in linux-user/strace.c, tswap() is useless
linux-user: add rmdir() strace
- it doesn't pad the note size to sizeof(int32_t).
On m68k the NT_PRSTATUS note size is 154 and
must not be rounded up to 156, because this value is checked by
objdump and gdb.
"gdb" symptoms:
"warning: Couldn't find general-purpose registers in core file."
Peter Maydell [Tue, 15 Feb 2011 18:35:05 +0000 (18:35 +0000)]
linux-user: Support the epoll syscalls
Support the epoll family of syscalls: epoll_create(), epoll_create1(),
epoll_ctl(), epoll_wait() and epoll_pwait(). Note that epoll_create1()
and epoll_pwait() are later additions, so we have to test separately
in configure for their presence.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Riku Voipio <riku.voipio@nokia.com>
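The configure probe for the newer calls is essentially a compile test along these lines (a sketch of the idea, not the exact configure fragment):

    #include <sys/epoll.h>
    #include <signal.h>

    int main(void)
    {
        sigset_t mask;
        struct epoll_event ev;

        sigemptyset(&mask);
        int fd = epoll_create1(0);              /* later addition */
        epoll_pwait(fd, &ev, 1, 0, &mask);      /* later addition */
        return fd == -1;
    }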
Bruce Rogers [Sat, 5 Feb 2011 21:47:56 +0000 (14:47 -0700)]
slirp: fix buffer overrun
Since the addition of the slirp member to struct mbuf, the value of
SLIRP_MSIZE and the initialization of m_size have not been correct,
resulting in overrunning the end of the malloc'd buffer in some cases.
Signed-off-by: Bruce Rogers <brogers@novell.com> Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Jan Kiszka [Mon, 7 Feb 2011 11:19:26 +0000 (12:19 +0100)]
kvm: x86: Introduce kvmclock device to save/restore its state
If kvmclock is used, which implies the kernel supports it, register a
kvmclock device with the sysbus. Its main purpose is to save and restore
the kernel state on migration, but this will also allow us to visualize it
one day.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> CC: Glauber Costa <glommer@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Mon, 7 Feb 2011 11:19:25 +0000 (12:19 +0100)]
kvm: Make kvm_state globally available
KVM-assisted devices need access to it but we have no clean channel to
distribute a reference. As a workaround until there is a better
solution, export kvm_state for global use, though use should remain
restricted to the mentioned scenario.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Anthony PERARD [Mon, 7 Feb 2011 11:19:23 +0000 (12:19 +0100)]
Introduce log_start/log_stop in CPUPhysMemoryClient
In order to use log_start/log_stop with Xen as well in the vga code,
these two operations have been put in CPUPhysMemoryClient.
The two new functions cpu_physical_log_start and cpu_physical_log_stop are
used in hw/vga.c and replace kvm_log_start/stop. With this, vga no
longer depends on the kvm header.
[ Jan: rebasing and style fixlets ]
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com> Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Mon, 7 Feb 2011 11:19:22 +0000 (12:19 +0100)]
kvm: Remove unneeded memory slot reservation
The number of slots and the location of private ones changed several
times in KVM's early days. However, it has been stable since 2.6.29 (our
required baseline), and slots 8..11 are no longer reserved since then.
So remove this unneeded restriction.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> CC: Alex Williamson <alex.williamson@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Mon, 7 Feb 2011 11:19:19 +0000 (12:19 +0100)]
kvm: x86: Prepare VCPU loop for in-kernel irqchip
Effectively no functional change yet as kvm_irqchip_in_kernel still only
returns 0, but this patch will allow qemu-kvm to adopt the VCPU loop of
upstream KVM.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Mon, 7 Feb 2011 11:19:18 +0000 (12:19 +0100)]
kvm: Separate TCG from KVM cpu execution
Mixing up TCG bits with KVM already led to problems around eflags
emulation on x86. Moreover, quite some code that TCG requires on cpu
entry/exit is useless for KVM. So dispatch between tcg_cpu_exec and
kvm_cpu_exec as early as possible.
The core logic of cpu_halted from cpu_exec is added to
kvm_arch_process_irqchip_events. Moving away from cpu_exec makes
exception_index meaningless for KVM; we can simply pass the exit reason
directly (only "EXCP_DEBUG vs. rest" is relevant).
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
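The dispatch itself is simple; roughly (simplified sketch, with the CPU state type and the exec functions reduced to stand-in declarations):

    struct cpu_state;                      /* stand-in for the CPU state type */

    extern int kvm_enabled(void);
    extern int kvm_cpu_exec(struct cpu_state *env);
    extern int tcg_cpu_exec(struct cpu_state *env);

    static int cpu_exec_dispatch(struct cpu_state *env)
    {
        if (kvm_enabled()) {
            return kvm_cpu_exec(env);      /* exit reason returned directly */
        }
        return tcg_cpu_exec(env);          /* TCG keeps using exception_index */
    }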
Jan Kiszka [Mon, 7 Feb 2011 11:19:17 +0000 (12:19 +0100)]
Move debug exception handling out of cpu_exec
To prepare splitting up KVM and TCG CPU entry/exit, move the debug
exception into cpus.c and invoke cpu_handle_debug_exception on return
from qemu_cpu_exec.
This also allows us to clean up the debug request signaling: we can assign
the job of informing the main loop to qemu_system_debug_request and stop the
calling cpu directly in cpu_handle_debug_exception. That means a debug
stop will now only be signaled via debug_requested and not additionally
via vmstop_requested.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Mon, 7 Feb 2011 11:19:16 +0000 (12:19 +0100)]
Refactor debug and vmstop request interface
Instead of fiddling with debug_requested and vmstop_requested directly,
introduce qemu_system_debug_request and turn qemu_system_vmstop_request
into a public interface. This aligns those services with existing ones in
vl.c.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Wed, 9 Feb 2011 15:29:40 +0000 (16:29 +0100)]
Improve vm_stop reason declarations
Define and use dedicated constants for vm_stop reasons; they actually
have nothing to do with the EXCP_* defines used so far. At this chance,
specify more detailed reasons so that VM state change handlers can
evaluate them.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
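The shape of such constants, as a sketch (the exact names and set of reasons here are illustrative, not necessarily the ones introduced by the patch):

    enum vm_stop_reason {
        VMSTOP_USER,        /* explicit stop request, e.g. monitor "stop" */
        VMSTOP_DEBUG,       /* breakpoint or watchpoint hit */
        VMSTOP_SHUTDOWN,    /* guest shutdown with -no-shutdown */
        VMSTOP_PANIC,       /* guest panic */
        VMSTOP_WATCHDOG,    /* watchdog action */
        VMSTOP_SAVEVM,      /* stopping for savevm */
        VMSTOP_LOADVM,      /* stopping for loadvm */
        VMSTOP_MIGRATE,     /* stopping at the end of migration */
    };

    /* VM state change handlers can then inspect the reason, e.g.
     * void handler(void *opaque, int running, int reason); */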
Jan Kiszka [Wed, 9 Feb 2011 15:29:37 +0000 (16:29 +0100)]
Refactor cpu_has_work/any_cpu_has_work in cpus.c
Avoid duplicate use of the function name cpu_has_work; it is confusing, as
is its scope. Refactor cpu_has_work into cpu_thread_is_idle and do the
same with any_cpu_has_work.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Mon, 7 Feb 2011 11:19:12 +0000 (12:19 +0100)]
Refactor kvm&tcg function names in cpus.c
Pure interface cosmetics: Ensure that only kvm core services (as
declared in kvm.h) start with "kvm_". Prepend "qemu_" to those that
violate this rule in cpus.c. Also rename the corresponding tcg functions
for the sake of consistency.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Glauber Costa [Thu, 3 Feb 2011 19:19:53 +0000 (14:19 -0500)]
kvm: make tsc stable over migration and machine start
If the machine is stopped, we should not record two different tsc values
upon a save operation. The same problem happens with kvmclock.
But kvmclock is taking a different direction, being now seen as a separate
device. Since this is unlikely to happen with the tsc, I am taking the
approach here of simply registering a handler for state change, and
using a per-CPUState variable that prevents double updates for the TSC.
Signed-off-by: Glauber Costa <glommer@redhat.com> CC: Jan Kiszka <jan.kiszka@web.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Tue, 1 Feb 2011 21:16:02 +0000 (22:16 +0100)]
kvm: Leave kvm_cpu_exec directly after KVM_EXIT_SHUTDOWN
The reset we issue on KVM_EXIT_SHUTDOWN implies that we should also
leave the VCPU loop. As we now check for exit_request which is set by
qemu_system_reset_request, this bug is no longer critical. Still, it's an
unneeded extra turn.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
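The relevant corner of the exit-reason handling looks roughly like this (simplified; struct kvm_run and KVM_EXIT_SHUTDOWN come from <linux/kvm.h>, the rest is a sketch):

    #include <linux/kvm.h>

    extern void qemu_system_reset_request(void);

    static int handle_exit(struct kvm_run *run)
    {
        int ret = 0;

        switch (run->exit_reason) {
        case KVM_EXIT_SHUTDOWN:
            qemu_system_reset_request();
            ret = 1;                /* non-zero: leave the VCPU loop now */
            break;
        default:
            break;
        }
        return ret;
    }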
Jan Kiszka [Tue, 1 Feb 2011 21:16:00 +0000 (22:16 +0100)]
kvm: Unconditionally reenter kernel after IO exits
KVM requires us to reenter the kernel after IO exits in order to complete
instruction emulation. Failing to do so will leave the kernel state
inconsistent. To ensure that we get back ASAP, we issue a
self-signal that will cause KVM_RUN to return once the pending
operations are completed.
We can move kvm_arch_process_irqchip_events out of the inner VCPU loop.
The only state that mattered at its old place was a pending INIT
request. Catch it in kvm_arch_pre_run and also trigger a self-signal to
process the request on the next kvm_cpu_exec.
This patch also fixes the missing exit_request check in kvm_cpu_exec in
the CONFIG_IOTHREAD case.
Jan Kiszka [Tue, 1 Feb 2011 21:15:58 +0000 (22:15 +0100)]
kvm: Add MCE signal support for !CONFIG_IOTHREAD
Currently, we only configure and process MCE-related SIGBUS events if
CONFIG_IOTHREAD is enabled. The groundwork is laid; we just need to
factor out the required handler registration and system configuration.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> CC: Huang Ying <ying.huang@intel.com> CC: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com> CC: Jin Dongming <jin.dongming@np.css.fujitsu.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Tue, 1 Feb 2011 21:15:57 +0000 (22:15 +0100)]
kvm: Fix race between timer signals and vcpu entry under !IOTHREAD
Found by Stefan Hajnoczi: There is a race in kvm_cpu_exec between
checking for exit_request on vcpu entry and timer signals arriving
before KVM starts to catch them. Plug it by blocking both timer-related
signals also on !CONFIG_IOTHREAD and processing them via signalfd.
As this fix depends on real signalfd support (otherwise the timer
signals only kick the compat helper thread, and the main thread hangs),
we need to detect this invalid combination and abort configure.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> CC: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
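In isolation, the mechanism is: block the timer signals in the thread and consume them through a signalfd instead, so a signal arriving just before KVM_RUN cannot be lost. A minimal sketch of that setup (not the QEMU code):

    #include <sys/signalfd.h>
    #include <signal.h>
    #include <pthread.h>

    static int setup_timer_signalfd(void)
    {
        sigset_t mask;

        sigemptyset(&mask);
        sigaddset(&mask, SIGALRM);
        sigaddset(&mask, SIGIO);

        /* blocked here, so a timer firing right before vcpu entry is kept
         * pending instead of being delivered asynchronously */
        if (pthread_sigmask(SIG_BLOCK, &mask, NULL) != 0) {
            return -1;
        }
        /* ...and read back synchronously through this descriptor */
        return signalfd(-1, &mask, SFD_CLOEXEC);
    }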
Jan Kiszka [Tue, 1 Feb 2011 21:15:54 +0000 (22:15 +0100)]
kvm: Refactor qemu_kvm_eat_signals
We do not use the timeout, so drop its logic. As we always poll our
signals, we do not need to drop the global lock. Removing those calls
allows some further simplifications. Also fix the error processing of
sigpending at this chance.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Tue, 1 Feb 2011 21:15:53 +0000 (22:15 +0100)]
kvm: Set up signal mask also for !CONFIG_IOTHREAD
Block SIG_IPI, unblock it during KVM_RUN, just like in io-thread mode.
It's unused so far, but this infrastructure will be required for
self-IPIs and to process SIGBUS plus, in KVM mode, SIGIO and SIGALRM. As
Windows doesn't support signal services, we need to provide a stub for
the init function.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Tue, 1 Feb 2011 21:15:52 +0000 (22:15 +0100)]
Refactor signal setup functions in cpus.c
Move {tcg,kvm}_init_ipi and block_io_signals to avoid prototypes, rename
the former two to clarify that they deal with more than SIG_IPI. No
functional changes, except for the tiny fixup of the strerror usage.
The forward declaration of sigbus_handler is only temporary; it will
be moved in a succeeding patch. dummy_signal is moved into the !_WIN32
block as we will soon need it also for !CONFIG_IOTHREAD.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Tue, 1 Feb 2011 21:15:51 +0000 (22:15 +0100)]
kvm: Provide sigbus services arch-independently
Provide arch-independent kvm_on_sigbus* stubs to remove the #ifdef'ery
from cpus.c. This patch also fixes the --disable-kvm build by providing the
missing kvm_on_sigbus_vcpu kvm-stub.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Acked-by: Alexander Graf <agraf@suse.de> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Tue, 1 Feb 2011 21:15:47 +0000 (22:15 +0100)]
Flatten the main loop
First of all, vm_can_run is a misnomer; it actually means "no request
pending". Moreover, there is no need to check all pending requests
twice, first via the inner loop check and then again when
actually processing the requests. We can simply remove the inner loop
and do the checks directly.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Tue, 1 Feb 2011 21:15:46 +0000 (22:15 +0100)]
Leave inner main_loop faster on pending requests
If there is any pending request that requires us to leave the inner loop
of main_loop, make sure we do this as soon as possible by enforcing
non-blocking IO processing.
While at it, move variable definitions out of the inner loop to
improve readability.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Tue, 1 Feb 2011 21:15:45 +0000 (22:15 +0100)]
Trigger exit from cpu_exec_all on pending IO events
Except for timer events, we currently do not leave the loop over all
VCPUs if an IO event was filed. That may cause unexpected IO latencies
under !CONFIG_IOTHREAD in SMP scenarios. Fix it by setting the global
exit_request, which breaks the loop.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Tue, 1 Feb 2011 21:15:44 +0000 (22:15 +0100)]
Process vmstop requests in IO thread
A pending vmstop request is also a reason to leave the inner main loop.
So far we did not check for it, so pending stop requests issued from VCPU
threads were simply ignored.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Tue, 1 Feb 2011 21:15:43 +0000 (22:15 +0100)]
Stop current VCPU on synchronous reset requests
If some I/O operation ends up calling qemu_system_reset_request in VCPU
context, we record this and inform the io-thread, but we do not
terminate the VCPU loop. This can lead to fairly unexpected behavior if
the triggering reset operation is supposed to work synchronously.
Fix this for TCG (when run in deterministic I/O mode) by marking the
VCPU as stopped and issuing a cpu_exit. KVM requires some more work on its
VCPU loop.
[ ported from qemu-kvm ]
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Jan Kiszka [Tue, 1 Feb 2011 21:15:42 +0000 (22:15 +0100)]
Prevent abortion on multiple VCPU kicks
If we call qemu_cpu_kick more than once before the target has been able to
process the signal, pthread_kill will fail, and qemu will abort. Prevent
this by avoiding the redundant signal.
This logic can be found in qemu-kvm as well.
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
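The guard is a one-flag affair; a sketch of the idea with illustrative field names (SIG_IPI stands in for the signal QEMU actually uses):

    #include <pthread.h>
    #include <signal.h>
    #include <stdbool.h>

    #define SIG_IPI SIGUSR1           /* stand-in for QEMU's IPI signal */

    struct vcpu_thread {
        pthread_t thread;
        bool thread_kicked;           /* cleared by the target once handled */
    };

    static void cpu_kick(struct vcpu_thread *cpu)
    {
        if (!cpu->thread_kicked) {
            cpu->thread_kicked = true;
            pthread_kill(cpu->thread, SIG_IPI);   /* kick only once in flight */
        }
    }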