Max Reitz [Mon, 11 Jul 2016 13:22:46 +0000 (15:22 +0200)]
iotests: Make 157 actually format-agnostic
iotest 157 pretends not to care about the image format used, but in fact
it does due to the format name not being filtered in its output. This
patch adds filtering and changes the reference output accordingly.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20160711132246.3152-1-mreitz@redhat.com Reviewed-by: John Snow <jsnow@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
Max Reitz [Mon, 11 Jul 2016 13:54:52 +0000 (15:54 +0200)]
vvfat: Fix qcow write target driver specification
First, bdrv_open_child() expects all options for the child to be
prefixed by the child's name (and a separating dot). Second,
bdrv_open_child() does not take ownership of the QDict passed to it but
only extracts all options for the child, so if a QDict is created for
the sole purpose of passing it to bdrv_open_child(), it needs to be
freed afterwards.
This patch makes vvfat adhere to both of these rules.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20160711135452.11304-1-mreitz@redhat.com Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
Lin Ma [Thu, 7 Jul 2016 05:26:04 +0000 (13:26 +0800)]
hmp: show all of snapshot info on every block dev in output of 'info snapshots'
Currently, the output of 'info snapshots' only shows snapshots that are fully
available on all block devices. This is opaque and hides some snapshot
information from users. It is inconvenient for users who want to see all of
the snapshot information on every block device via the monitor.
Following Kevin's and Max's proposals, this patch makes the output more detailed:
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- checkpoint-1 165M 2016-05-22 16:58:07 00:02:06.813
List of partial (non-loadable) snapshots on 'drive_image1':
ID TAG VM SIZE DATE VM CLOCK
1 snap1 0 2016-05-22 16:57:31 00:01:30.567
Signed-off-by: Lin Ma <lma@suse.com>
Message-id: 1467869164-26688-3-git-send-email-lma@suse.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
Lin Ma [Thu, 7 Jul 2016 05:26:03 +0000 (13:26 +0800)]
hmp: use snapshot name to determine whether a snapshot is 'fully available'
Currently QEMU uses the snapshot ID to determine whether a snapshot is fully
available, which causes incorrect output in some scenarios.
For instance:
(qemu) info block
drive_image1 (#block113): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk0.qcow2
(qcow2)
Cache mode: writeback
drive_image2 (#block349): /opt/vms/SLES12-SP1-JeOS-x86_64-GM/disk1.qcow2
(qcow2)
Cache mode: writeback
(qemu)
(qemu) info snapshots
There is no snapshot available.
(qemu)
(qemu) snapshot_blkdev_internal drive_image1 snap1
(qemu)
(qemu) info snapshots
There is no suitable snapshot available
(qemu)
(qemu) savevm checkpoint-1
(qemu)
(qemu) info snapshots
ID TAG VM SIZE DATE VM CLOCK
1 snap1 0 2016-05-22 16:57:31 00:01:30.567
(qemu)
$ qemu-img snapshot -l disk0.qcow2
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 snap1 0 2016-05-22 16:57:31 00:01:30.567
2 checkpoint-1 165M 2016-05-22 16:58:07 00:02:06.813
$ qemu-img snapshot -l disk1.qcow2
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 checkpoint-1 0 2016-05-22 16:58:07 00:02:06.813
The patch uses the snapshot name instead of the snapshot ID to determine
whether a snapshot is fully available, and prints '--' instead of the snapshot
ID in the output, because the snapshot ID is not guaranteed to be the same on
all images.
For instance:
(qemu) info snapshots
List of snapshots present on all disks:
ID TAG VM SIZE DATE VM CLOCK
-- checkpoint-1 165M 2016-05-22 16:58:07 00:02:06.813
Signed-off-by: Lin Ma <lma@suse.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 1467869164-26688-2-git-send-email-lma@suse.com Signed-off-by: Max Reitz <mreitz@redhat.com>
Alberto Garcia [Fri, 8 Jul 2016 14:03:01 +0000 (17:03 +0300)]
qemu-iotests: Test naming of throttling groups
Throttling groups are named using the 'group' parameter of the
block_set_io_throttle command and the throttling.group command-line
option. If that parameter is unspecified the groups get the name of
the block device.
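For example, two drives can be placed into the same throttling group on the
command line like this (file names, limits and the group name are
illustrative):
    -drive file=a.qcow2,throttling.iops-total=100,throttling.group=shared
    -drive file=b.qcow2,throttling.iops-total=100,throttling.group=shared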
This patch adds a new test to check the naming of throttling groups.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: d87d02823a6b91609509d8bb18e2f5dbd9a6102c.1467986342.git.berto@igalia.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
Alberto Garcia [Fri, 8 Jul 2016 14:03:00 +0000 (17:03 +0300)]
blockdev: Fix regression with the default naming of throttling groups
When I/O limits are set for a block device, the name of the throttling
group is taken from the BlockBackend if the user doesn't specify one.
Commit efaa7c4eeb7490c6f37f3 moved the naming of the BlockBackend in
blockdev_init() to the end of the function, after I/O limits are set.
The consequence is that the throttling group gets an empty name.
Signed-off-by: Alberto Garcia <berto@igalia.com> Reported-by: Stefan Hajnoczi <stefanha@redhat.com> Cc: Max Reitz <mreitz@redhat.com> Cc: qemu-stable@nongnu.org
Message-id: af5cd58bd2c4b9f6c57f260d9cfe586b9fb7d34d.1467986342.git.berto@igalia.com
[mreitz: Use existing "id" variable instead of new "blk_id"] Signed-off-by: Max Reitz <mreitz@redhat.com>
Reda Sallahi [Thu, 7 Jul 2016 08:42:49 +0000 (10:42 +0200)]
vmdk: fix metadata write regression
Commit "cdeaf1f vmdk: add bdrv_co_write_zeroes" causes a regression on
writes. It writes metadata after every write instead of doing it only once
for each cluster.
vmdk_pwritev() writes metadata whenever m_data is set as valid so this patch
sets m_data as valid only when we have a new cluster which hasn't been
allocated before or a zero grain.
Signed-off-by: Reda Sallahi <fullmanet@gmail.com>
Message-id: 20160707084249.29084-1-fullmanet@gmail.com Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
Sascha Silbe [Tue, 28 Jun 2016 15:28:41 +0000 (17:28 +0200)]
Improve block job rate limiting for small bandwidth values
ratelimit_calculate_delay() previously reset the accounting every time
slice, no matter how much data had been processed before. This had (at
least) two consequences:
1. The minimum speed is rather large, e.g. 5 MiB/s for commit and stream.
Not sure if there are real-world use cases where this would be a
problem. Mirroring and backup over a slow link (e.g. DSL) would
come to mind, though.
2. Tests for block job operations (e.g. cancel) were rather racy
All block jobs currently use a time slice of 100ms. That's a
reasonable value to get smooth output during regular
operation. However this also meant that the state of block jobs
changed every 100ms, no matter how low the configured limit was. On
busy hosts, qemu often transferred additional chunks until the test
case had a chance to cancel the job.
Fix the block job rate limit code to delay for more than one time
slice to address the above issues. To make it easier to handle
oversized chunks we switch the semantics from returning a delay
_before_ the current request to a delay _after_ the current
request. If necessary, this delay consists of multiple time slice
units.
Since the mirror job sends multiple chunks in one go even if the rate
limit was exceeded in between, we need to keep track of the start of
the current time slice so we can correctly re-compute the delay for
the updated amount of data.
The minimum bandwidth now is 1 data unit per time slice. The block
jobs are currently passing the amount of data transferred in sectors
and using 100ms time slices, so this translates to 5120
bytes/second. With chunk sizes usually being O(512KiB), tests have
plenty of time (O(100s)) to operate on block jobs. The chance of a
race condition now is fairly remote, except possibly on insanely
loaded systems.
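As a rough illustration of the semantics described above, here is a toy model
in C (names and numbers are illustrative; this is not QEMU's actual ratelimit
code). The chunk is accounted first, and the returned delay, slept after the
request, can span several time slices:
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        int64_t slice_start_ns;   /* start of the current accounting slice */
        int64_t slice_end_ns;     /* end of the slice, including any penalty */
        uint64_t slice_quota;     /* data units allowed per slice */
        uint64_t slice_ns;        /* slice length, e.g. 100 ms */
        uint64_t dispatched;      /* data units accounted in this slice */
    } ToyRateLimit;

    /* Account 'n' units at time 'now' and return the delay (in ns) to sleep
     * after the request; 0 means the request fits into the current budget. */
    static int64_t toy_ratelimit_delay(ToyRateLimit *l, int64_t now, uint64_t n)
    {
        if (l->slice_quota == 0) {
            return 0;                          /* unlimited */
        }
        if (now >= l->slice_end_ns) {
            l->dispatched = 0;                 /* previous slice has expired */
        }
        if (l->dispatched == 0) {
            l->slice_start_ns = now;           /* first chunk of a new slice */
            l->slice_end_ns = now + l->slice_ns;
        }
        l->dispatched += n;
        if (l->dispatched <= l->slice_quota) {
            return 0;
        }
        /* Oversized chunks are paid for with a delay that may exceed one
         * slice: extend the slice end in proportion to the excess. */
        l->slice_end_ns = l->slice_start_ns +
                          l->dispatched * l->slice_ns / l->slice_quota;
        return l->slice_end_ns - now;
    }

    int main(void)
    {
        ToyRateLimit rl = { .slice_quota = 10, .slice_ns = 100000000 };
        /* a 25-unit chunk against a 10-units-per-100ms limit: 250 ms delay */
        printf("%lld ns\n", (long long)toy_ratelimit_delay(&rl, 0, 25));
        return 0;
    }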
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-id: 1467127721-9564-2-git-send-email-silbe@linux.vnet.ibm.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
Max Reitz [Mon, 20 Jun 2016 14:26:23 +0000 (16:26 +0200)]
qcow2: Fix qcow2_get_cluster_offset()
Recently, qcow2_get_cluster_offset() has been changed to work with bytes
instead of sectors. This invalidated some assertions and introduced a
possible integer multiplication overflow.
This patch removes the now wrong assertion, adding comments and more
assertions to prove its correctness (and fixing the overflow which would
become apparent with the original assertion removed).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20160620142623.24471-3-mreitz@redhat.com Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
Max Reitz [Mon, 20 Jun 2016 14:26:22 +0000 (16:26 +0200)]
qemu-io: Use correct range limitations
create_iovec() has a comment lamenting the lack of SIZE_T_MAX. Since
there actually is a SIZE_MAX, use it.
Two places use INT_MAX for checking the upper bound of a sector count
that is used as an argument for a blk_*() function (blk_discard() and
blk_write_compressed(), respectively). BDRV_REQUEST_MAX_SECTORS should
be used instead.
And finally, do_co_pwrite_zeroes() used to similarly check that the
sector count does not exceed INT_MAX. However, this function is now
backed by blk_co_pwrite_zeroes() which takes bytes as an argument
instead of sectors. Therefore, it should be the byte count that does not
exceed INT_MAX, not the sector count.
Max Reitz [Wed, 15 Jun 2016 15:36:29 +0000 (17:36 +0200)]
qemu-img: Use strerror() for generic resize error
Emitting the plain error number is not very helpful. Use strerror()
instead.
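For illustration, a standalone sketch of the difference (this is not the
qemu-img code; the message text is made up):
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int ret = -ENOSPC;   /* e.g. an error code returned by a resize */
        /* before: a bare number such as "-28"; after: a readable message */
        fprintf(stderr, "Failed to resize image: %s\n", strerror(-ret));
        return 1;
    }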
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20160615153630.2116-2-mreitz@redhat.com Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
Kevin Wolf [Thu, 30 Jun 2016 13:52:37 +0000 (15:52 +0200)]
block: Remove BB options from blockdev-add
werror/rerror are now available as qdev options. The stats-* options are
removed without an existing replacement; they should probably be
configurable with a separate QMP command like I/O throttling settings.
Removing id is left for another day because this involves updating
qemu-iotests cases to use node-name for everything. Before we can do
that, however, all QMP commands must support node-name.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
Kevin Wolf [Wed, 29 Jun 2016 15:41:35 +0000 (17:41 +0200)]
block/qdev: Allow configuring rerror/werror with qdev properties
The rerror/werror policies are implemented in the devices, so that's
where they should be configured. In comparison to the old options in
-drive, the qdev properties are only added to those devices that
actually support them.
If the option isn't given (or "auto" is specified), the setting of the
BlockBackend is used for compatibility with the old options. For block
jobs, "auto" is the same as "enospc".
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
Kevin Wolf [Wed, 29 Jun 2016 15:38:57 +0000 (17:38 +0200)]
commit: Fix use of error handling policy
The commit block job implemented the 'enospc' policy as 'ignore' if the error
was not ENOSPC. The QAPI documentation promises that it is treated as 'stop'.
Using the common block job error handling function fixes this and also
adds the missing QMP event.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
Kevin Wolf [Thu, 23 Jun 2016 13:12:35 +0000 (15:12 +0200)]
block/qdev: Allow configuring WCE with qdev properties
As cache.writeback is a BlockBackend property and as such more related
to the guest device than the BlockDriverState, we already removed it
from the blockdev-add interface. This patch adds the new way to set it,
as a qdev property of the corresponding guest device.
For example: -drive if=none,file=test.img,node-name=img
-device ide-hd,drive=img,write-cache=off
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
Kevin Wolf [Tue, 21 Jun 2016 18:46:05 +0000 (20:46 +0200)]
block/qdev: Allow node name for drive properties
If a node name instead of a BlockBackend name is specified as the drive for
a guest device, an anonymous BlockBackend is created now.
The order of operations in release_drive() must be reversed in order to
avoid a use-after-free bug because now blk_detach_dev() frees the last
reference if an anonymous BlockBackend is used.
usb-storage uses a hack where it forwards its BlockBackend as a property
to another device that it internally creates. This hack must be updated
so that it doesn't drop its original BB before it can be passed to the
other device. This used to work because we always had the monitor
reference around, but with node-names the device reference is the only
one now.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
Paolo Bonzini [Mon, 4 Jul 2016 17:10:01 +0000 (19:10 +0200)]
coroutine: move entry argument to qemu_coroutine_create
In practice the entry argument is always known at creation time, and
it is confusing that sometimes qemu_coroutine_enter is used with a
non-NULL argument to re-enter a coroutine (this happens in
block/sheepdog.c and tests/test-coroutine.c). So pass the opaque value
at creation time, for consistency with e.g. aio_bh_new.
The conversion was mostly done with a semantic patch, except for the
aforementioned few places where the semantic patch stumbled (as expected) and
for test_co_queue, which would otherwise produce an uninitialized variable
warning.
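As a self-contained toy sketch of the shape of this change (this is not
QEMU's coroutine API, just the pattern): the argument is bound when the task
is created, and entering it no longer takes an argument:
    #include <stdio.h>

    typedef void TaskEntry(void *opaque);

    typedef struct {
        TaskEntry *entry;
        void *opaque;              /* bound at creation, like aio_bh_new() */
    } Task;

    static Task task_create(TaskEntry *entry, void *opaque)
    {
        return (Task){ .entry = entry, .opaque = opaque };
    }

    static void task_enter(Task *t)
    {
        t->entry(t->opaque);       /* (re-)entering takes no extra argument */
    }

    static void hello(void *opaque)
    {
        printf("hello, %s\n", (const char *)opaque);
    }

    int main(void)
    {
        Task t = task_create(hello, "world");
        task_enter(&t);
        return 0;
    }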
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Paolo Bonzini [Mon, 4 Jul 2016 17:10:00 +0000 (19:10 +0200)]
test-coroutine: prepare for the next patch
The next patch moves the coroutine argument from first-enter to
creation time. In this case, coroutine has not been initialized yet when
the coroutine is created, so change it to a pointer.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Paolo Bonzini [Mon, 4 Jul 2016 17:09:59 +0000 (19:09 +0200)]
coroutine: use QSIMPLEQ instead of QTAILQ
CoQueue does not need to remove any element but the head of the list;
processing is always strictly FIFO. Therefore, the simpler singly-linked
QSIMPLEQ can be used instead.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Fam Zheng [Wed, 22 Jun 2016 12:53:20 +0000 (20:53 +0800)]
raw-posix: Use qemu_dup
Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Fam Zheng [Wed, 22 Jun 2016 12:53:19 +0000 (20:53 +0800)]
osdep: Introduce qemu_dup
And use it in qemu_dup_flags.
Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Tue, 5 Jul 2016 14:29:02 +0000 (17:29 +0300)]
blockjob: Update description of the 'device' field in the QMP API
The 'device' field in all BLOCK_JOB_* events and 'block-job-*' commands
is no longer the device name, but the ID of the job. This patch
updates the documentation to clarify that.
Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Tue, 5 Jul 2016 14:29:01 +0000 (17:29 +0300)]
qemu-img: Set the ID of the block job in img_commit()
img_commit() creates a block job without an ID. This is no longer
allowed now that we require it to be unique and well-formed. We were
solving this by having a fallback in block_job_create(), but now that
we extended the API of commit_active_start() we can finally set an
explicit ID and revert that change.
Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Tue, 5 Jul 2016 14:29:00 +0000 (17:29 +0300)]
commit: Add 'job-id' parameter to 'block-commit'
This patch adds a new optional 'job-id' parameter to 'block-commit',
allowing the user to specify the ID of the block job to be created.
Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Tue, 5 Jul 2016 14:28:59 +0000 (17:28 +0300)]
stream: Add 'job-id' parameter to 'block-stream'
This patch adds a new optional 'job-id' parameter to 'block-stream',
allowing the user to specify the ID of the block job to be created.
The HMP 'block_stream' command remains unchanged.
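For example, a QMP invocation might look like this (device name and job ID
are illustrative):
    { "execute": "block-stream",
      "arguments": { "device": "drive0", "job-id": "stream0" } }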
Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Tue, 5 Jul 2016 14:28:58 +0000 (17:28 +0300)]
backup: Add 'job-id' parameter to 'blockdev-backup' and 'drive-backup'
This patch adds a new optional 'job-id' parameter to 'blockdev-backup'
and 'drive-backup', allowing the user to specify the ID of the block
job to be created.
The HMP 'drive_backup' command remains unchanged.
Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Tue, 5 Jul 2016 14:28:57 +0000 (17:28 +0300)]
mirror: Add 'job-id' parameter to 'blockdev-mirror' and 'drive-mirror'
This patch adds a new optional 'job-id' parameter to 'blockdev-mirror'
and 'drive-mirror', allowing the user to specify the ID of the block
job to be created.
The HMP 'drive_mirror' command remains unchanged.
Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Tue, 5 Jul 2016 14:28:56 +0000 (17:28 +0300)]
blockjob: Add 'job_id' parameter to block_job_create()
When a new job is created, the job ID is taken from the device name of
the BDS. This patch adds a new 'job_id' parameter to let the caller
provide one instead.
This patch also verifies that the ID is always unique and well-formed.
This causes problems in a couple of places where no ID is being set,
because the BDS does not have a device name.
In the case of test_block_job_start() (from test-blockjob-txn.c) we
can simply use this new 'job_id' parameter to set the missing ID.
In the case of img_commit() (from qemu-img.c) we still don't have the
API to make commit_active_start() set the job ID, so we solve it by
setting a default value. We'll get rid of this as soon as we extend
the API.
Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Tue, 5 Jul 2016 14:28:55 +0000 (17:28 +0300)]
block: Use block_job_get() in find_block_job()
find_block_job() looks for a block backend with a specified name,
checks whether it has a block job and acquires its AioContext.
We want to identify jobs by their ID and not by the block backend
they're attached to, so this patch ignores the backends altogether and
gets the job directly. Apart from making the code simpler, this will
allow us to find block jobs once they start having user-specified IDs.
To ensure backward compatibility we keep ERROR_CLASS_DEVICE_NOT_ACTIVE
as the error class if the job doesn't exist. In subsequent patches
we'll also need to keep the device name as the default job ID if the
user doesn't specify a different one.
Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Tue, 5 Jul 2016 14:28:54 +0000 (17:28 +0300)]
blockjob: Add block_job_get()
Currently the way to look for a specific block job is to iterate the
list manually using block_job_next().
Since we want to be able to identify a job primarily by its ID it
makes sense to have a function that does just that.
Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Tue, 5 Jul 2016 14:28:53 +0000 (17:28 +0300)]
blockjob: Update description of the 'id' field
The 'id' field of the BlockJob structure will be able to hold any ID,
not only a device name. This patch updates the description of that
field and the error messages where it is being used.
Soon we'll add the ability to set an arbitrary ID when creating a
block job.
Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Alberto Garcia [Tue, 5 Jul 2016 14:28:52 +0000 (17:28 +0300)]
stream: Fix prototype of stream_start()
'block-stream' has a parameter called 'backing-file', which is the string to
be written to bs->backing when the job finishes.
In the stream_start() implementation it is called 'backing_file_str', but in
the prototype in the header file it is called 'base_id'.
This patch fixes it so the name is the same in both cases and is
consistent with other cases (like commit_start()).
Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* remotes/armbru/tags/pull-include-2016-07-12:
cris: Fix broken header guard in hw/cris/boot.h
Clean up decorations and whitespace around header guards
Clean up ill-advised or unusual header guards
libdecnumber: Don't error out on decNumberLocal.h re-inclusion
libdecnumber: Don't fool around with guards to avoid #include
Clean up header guards that don't match their file name
Drop Emacs local variables lists redundant with .dir-locals.el
spapr_pci: Include spapr.h instead of playing games with #error
tcg: Clean up tcg-target.h header guards
linux-user: Fix broken header guard in syscall_defs.h
linux-user: Clean up hostdep.h header guards
linux-user: Clean up target_structs.h header guards
linux-user: Clean up target_signal.h header guards
linux-user: Clean up target_cpu.h header guards
linux-user: Clean up target_syscall.h header guards
target-*: Clean up cpu.h header guards
scripts: New clean-header-guards.pl
Use #include "..." for our own headers, <...> for others
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
libdecnumber: Don't fool around with guards to avoid #include
Some libdecnumber headers avoid including decNumber.h or decContext.h
again by checking their header guards. Don't. Including them
multiple times is safe, and the compiler can do it efficiently.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
These use guard symbols like TCG_TARGET_$target.
scripts/clean-header-guards.pl doesn't like them because they don't
match their file name (they should, to make guard collisions less
likely).
Clean them up: use guard symbol $target_TCG_TARGET_H for
tcg/$target/tcg-target.h.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
These headers all use QEMU_HOSTDEP_H as header guard symbol. Reuse of
the same guard symbol in multiple headers is okay as long as they
cannot be included together.
Since we can avoid guard symbol reuse easily, do so: use guard symbol
$target_HOSTDEP_H for linux-user/host/$target/hostdep.h.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
linux-user: Clean up target_structs.h header guards
These headers all use TARGET_STRUCTS_H as header guard symbol. Reuse
of the same guard symbol in multiple headers is okay as long as they
cannot be included together.
Since we can avoid guard symbol reuse easily, do so: use guard symbol
$target_TARGET_STRUCTS_H for linux-user/$target/target_structs.h.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
linux-user: Clean up target_signal.h header guards
These headers all use TARGET_SIGNAL_H as header guard symbol. Reuse
of the same guard symbol in multiple headers is okay as long as they
cannot be included together.
Since we can avoid guard symbol reuse easily, do so: use guard symbol
$target_TARGET_SIGNAL_H for linux-user/$target/target_signal.h.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
These headers all use TARGET_CPU_H as header guard symbol. Reuse of
the same guard symbol in multiple headers is okay as long as they
cannot be included together.
Since we can avoid guard symbol reuse easily, do so: use guard symbol
$target_TARGET_CPU_H for linux-user/$target/target_cpu.h.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
linux-user: Clean up target_syscall.h header guards
Some of them use guard symbol TARGET_SYSCALL_H, but we also have
CRIS_SYSCALL_H, MICROBLAZE_SYSCALLS_H, TILEGX_SYSCALLS_H and
__UC32_SYSCALL_H__. They all upset scripts/clean-header-guards.pl.
Reuse of the same guard symbol TARGET_SYSCALL_H in multiple headers is
okay as long as they cannot be included together. The script can't
tell, so it warns.
The script dislikes the other guard symbols, too. They don't match
their file name (they should, to make guard collisions less likely),
and __UC32_SYSCALL_H__ is a reserved identifier.
Clean them all up: use guard symbol $target_TARGET_SYSCALL_H for
linux-user/$target/target_syscall.h.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
Most of them use guard symbols like CPU_$target_H, but we also have
__MIPS_CPU_H__ and __TRICORE_CPU_H__. They all upset
scripts/clean-header-guards.pl.
The script dislikes CPU_$target_H because they don't match their file
name (they should, to make guard collisions less likely). The others
are reserved identifiers.
Clean them all up: use guard symbol $target_CPU_H for
target-$target/cpu.h.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
The conventional way to guard a header looks like this:
    #ifndef HEADER_NAME_H
    #define HEADER_NAME_H
    ...
    #endif
where HEADER_NAME_H is a symbol unique to this header.
The endif may be optionally decorated like this:
#endif /* HEADER_NAME_H */
Unconventional ways present in our code:
* Identifiers reserved for any use:
#define _FILEOP_H
* Lowercase (bad idea for object-like macros):
#define __linux_video_vga_h__
* Roundabout ways to say the same thing (and hide from grep):
#if !defined(__PPC_MAC_H__)
#endif /* !defined(__PPC_MAC_H__) */
* Redundant values:
#define HW_ALPHA_H 1
* Funny redundant values:
# define PXA_H "pxa.h"
* Decorations with bangs:
#endif /* !QEMU_ARM_GIC_INTERNAL_H */
The negation actually makes sense, but almost all our header guard
#endif decorations don't negate.
* Useless decorations:
#endif /* audio.h */
Header guards are not the place to show off creativity. This script
normalizes them to the conventional way, and cleans up whitespace
while there. It warns when it renames guard symbols, and explains how
to find occurrences of these symbols that may have to be updated
manually.
Another issue is use of the same guard symbol in multiple headers.
That's okay only for headers that cannot be used together, such as the
*-user/*/target_syscall.h. This script can't tell, so it warns when
it sees a reuse.
The script also warns when preprocessing a header with its guard
symbol defined produces anything but whitespace.
The next commits will put the script to use.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
Use #include "..." for our own headers, <...> for others
Tracked down with an ugly, brittle and probably buggy Perl script.
Also move includes converted to <...> up so they get included before
ours where that's obviously okay.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Tested-by: Eric Blake <eblake@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net>
Peter Maydell [Thu, 7 Jul 2016 16:21:00 +0000 (17:21 +0100)]
bswap.h: Document cpu_to_* and *_to_cpu conversion functions
Add a documentation comment describing the functions for
converting between the CPU and little- or big-endian formats.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1467908460-27048-6-git-send-email-peter.maydell@linaro.org
Peter Maydell [Thu, 7 Jul 2016 16:20:59 +0000 (17:20 +0100)]
bswap.h: Fix comment typo
Fix a typo in a comment.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Stefan Weil <sw@weilnetz.de>
Message-id: 1467908460-27048-5-git-send-email-peter.maydell@linaro.org
Peter Maydell [Thu, 7 Jul 2016 16:20:58 +0000 (17:20 +0100)]
bswap.h: Remove unused cpu_to_*w() and *_to_cpup()
Now that all uses of cpu_to_*w() and *_to_cpup() have been replaced
with either ld*_p()/st*_p() or by doing direct dereferences and
using the cpu_to_*()/*_to_cpu() byteswap functions, we can remove
the unused implementations.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1467908460-27048-4-git-send-email-peter.maydell@linaro.org
Peter Maydell [Thu, 7 Jul 2016 16:20:57 +0000 (17:20 +0100)]
hw/bt: Don't use cpu_to_*w() and *_to_cpup()
Don't use cpu_to_*w() and *_to_cpup() to do byte-swapped loads
and stores; instead use ld*_p() and st*_p() which correctly handle
misaligned accesses.
Bring the HNDL() macro into line with how we deal with
PARAMHANDLE(), by using cpu_to_le16() rather than an ifdef
HOST_WORDS_BIGENDIAN.
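The alignment point can be shown with a standalone sketch (illustrative
names; this is not QEMU's implementation): a byte-wise little-endian store
goes through memcpy and therefore has no alignment requirement, unlike
dereferencing a uint16_t pointer:
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Store a 16-bit value in little-endian byte order, safe even if 'ptr'
     * is not 2-byte aligned. */
    static void stw_le_sketch(void *ptr, uint16_t v)
    {
        uint8_t bytes[2] = { v & 0xff, v >> 8 };
        memcpy(ptr, bytes, sizeof(bytes));
    }

    int main(void)
    {
        uint8_t buf[3] = { 0 };
        stw_le_sketch(buf + 1, 0x1234);         /* deliberately misaligned */
        printf("%02x %02x\n", buf[1], buf[2]);  /* prints: 34 12 */
        return 0;
    }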
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1467908460-27048-3-git-send-email-peter.maydell@linaro.org
Peter Maydell [Thu, 7 Jul 2016 16:20:56 +0000 (17:20 +0100)]
fsdev/9p-iov-marshal.c: Don't use cpu_to_*w() functions
Don't use the cpu_to_*w() functions, which we are trying to deprecate.
Instead just use cpu_to_*() to do the byteswap, which brings the
code in the marshal function in line with that in the unmarshal.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1467908460-27048-2-git-send-email-peter.maydell@linaro.org
Sergey Sorokin [Tue, 14 Jun 2016 12:26:17 +0000 (15:26 +0300)]
Fix confusing argument names in some common functions
There are functions tlb_fill(), cpu_unaligned_access() and
do_unaligned_access() that are called with access type and mmu index
arguments. But these arguments are named 'is_write' and 'is_user' in their
declarations. The patch fixes the argument names to avoid confusion.
Signed-off-by: Sergey Sorokin <afarallax@yandex.ru> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-id: 1465907177-1399402-1-git-send-email-afarallax@yandex.ru Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter Maydell [Tue, 12 Jul 2016 11:34:41 +0000 (12:34 +0100)]
Merge remote-tracking branch 'remotes/lalrae/tags/mips-20160712' into staging
MIPS patches 2016-07-12
Changes:
* support 10-bit ASIDs
* MIPS64R6-generic renamed to I6400
* initial GIC support
* implement RESET_BASE register in CM GCR
# gpg: Signature made Tue 12 Jul 2016 11:49:50 BST
# gpg: using RSA key 0x52118E3C0B29DA6B
# gpg: Good signature from "Leon Alrae <leon.alrae@imgtec.com>"
# Primary key fingerprint: 8DD3 2F98 5495 9D66 35D4 4FC0 5211 8E3C 0B29 DA6B
* remotes/lalrae/tags/mips-20160712:
target-mips: enable 10-bit ASIDs in I6400 CPU
target-mips: support CP0.Config4.AE bit
target-mips: change ASID type to hold more than 8 bits
target-mips: add ASID mask field and replace magic values
target-mips: replace MIPS64R6-generic with the real I6400 CPU model
hw/mips_cmgcr: implement RESET_BASE register in CM GCR
hw/mips_cpc: make VP correctly start from the reset vector
target-mips: add exception base to MIPS CPU
hw/mips/cps: create GIC block inside CPS
hw/mips: implement Global Interrupt Controller
hw/mips: implement GIC Interval Timer
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* remotes/kraxel/tags/pull-usb-20160712-1:
xen-usb: Fix 32bit build
usb: add storage hotplug documentation
nec-usb-xhci: set the device state to USB_STATE_DEFAULT
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter Maydell [Tue, 12 Jul 2016 09:58:14 +0000 (10:58 +0100)]
Merge remote-tracking branch 'remotes/kraxel/tags/pull-input-20160712-1' into staging
msmouse: fix misc issues, switch to new input interface.
input: add trace events for full queues.
input-linux: better capability checks and event handling.
* remotes/kraxel/tags/pull-input-20160712-1:
input-linux: better capability checks, merge input_linux_event_{mouse, keyboard}
input-linux: factor out input_linux_handle_keyboard
input-linux: factor out input_linux_handle_mouse
input: add trace events for full queues
msmouse: send short messages if possible.
msmouse: switch to new input interface
msmouse: fix buffer handling
msmouse: add MouseState, unregister handler on close
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* remotes/kraxel/tags/pull-vnc-20160712-1:
ui: avoid crash if vnc client disconnects with writes pending
vnc-enc-tight: use thread local storage for palette
vnc: fix incorrect checking condition when updating client
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Zhang Shuaiyi [Thu, 30 Jun 2016 03:50:40 +0000 (23:50 -0400)]
nec-usb-xhci: set the device state to USB_STATE_DEFAULT
This patch is a rough fix to "hw/usb/core.c:401: usb_handle_packet:
Assertion `dev->state == 3' failed.". Qemu will crash when a usb3
device redirect to Windows7 VM via nec-usb-xhci.
In extensible-host-controler-interface-usb-xhci.pdf P94(4.6.5
Address Device):
• If the Block Set Address Request (BSR) flag = ‘1’
• If the slot is in the Enabled state:
...
• Set the Slot State in the Output Slot Context to Default.
BSR = ‘1’: Enabled state to Default state; BSR = ‘0’: Default state
to Addressed state. Try to call usb_device_reset to set the device state
to USB_STATE_DEFAULT in xhci_address_slot whether or not bsr is zero.
Paul Burton [Mon, 27 Jun 2016 15:19:11 +0000 (16:19 +0100)]
target-mips: support CP0.Config4.AE bit
The read-only Config4.AE bit, when set, denotes extended 10-bit ASIDs.
Signed-off-by: Paul Burton <paul.burton@imgtec.com> Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Paul Burton [Mon, 27 Jun 2016 15:19:10 +0000 (16:19 +0100)]
target-mips: change ASID type to hold more than 8 bits
The ASID currently has type uint8_t, which is too small since some processors
support ASIDs wider than 8 bits. Therefore change its type to uint16_t.
Signed-off-by: Paul Burton <paul.burton@imgtec.com> Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Paul Burton [Mon, 27 Jun 2016 15:19:09 +0000 (16:19 +0100)]
target-mips: add ASID mask field and replace magic values
Signed-off-by: Paul Burton <paul.burton@imgtec.com> Signed-off-by: James Hogan <james.hogan@imgtec.com> Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Leon Alrae [Thu, 9 Jun 2016 09:46:52 +0000 (10:46 +0100)]
hw/mips_cmgcr: implement RESET_BASE register in CM GCR
Implement the RESET_BASE register, which is local to each VP; a write to it
changes the VP's reset exception base. Also add the OTHER register to allow
software running on one VP to access another VP's local registers.
A guest can use this mechanism to specify a custom address from which a VP
will start execution.
Leon Alrae [Thu, 9 Jun 2016 09:46:51 +0000 (10:46 +0100)]
hw/mips_cpc: make VP correctly start from the reset vector
When a VP enters the Run state it starts execution from the reset vector.
The currently used CPU_INTERRUPT_WAKE does not do that if the reset exception
base has been modified, so fix this by simply resetting the given VP.
Also drop the use of CPU_INTERRUPT_WAKE in VP_STOP and instead raise
CPU_INTERRUPT_HALT to halt a VP.
Leon Alrae [Thu, 9 Jun 2016 09:46:50 +0000 (10:46 +0100)]
target-mips: add exception base to MIPS CPU
Replace hardcoded 0xbfc00000 with exception_base which is initialized with
this default address so there is no functional change here.
However, it is now exposed and consequently it will be possible to modify
it from outside of the CPU.
Yongbok Kim [Tue, 29 Mar 2016 02:35:51 +0000 (19:35 -0700)]
hw/mips: implement Global Interrupt Controller
The Global Interrupt Controller (GIC) is responsible for mapping each
internal and external interrupt to the correct location for servicing.
The internal representation of the registers differs from the specification
in order to consolidate the information for GIC interrupt sources and virtual
processors with the same functionality. For example, the SH_MAP00_VP00
registers are specified so that each bit represents a VP, but in this
implementation the equivalent map_vp field holds the VP number as an integer
for ease of access. When the registers are accessed via the read/write
functions, the internal data is converted back into the format given in the
specification.
Limitations:
Level triggering only
GIC CounterHi not implemented (Countbits = 32bits)
DINT not implemented
Local WatchDog, Fast Debug Channel, Perf Counter not implemented
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com> Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Yongbok Kim [Tue, 29 Mar 2016 02:35:50 +0000 (19:35 -0700)]
hw/mips: implement GIC Interval Timer
The interval timer is similar to the CP0 Count/Compare timer within
each processor. The difference is the GIC_SH_COUNTER register is global
to the system so that all processors have the same time reference.
To ease the implementation, each VP has its own QEMU timer, but they share
global settings and registers such as GIC_SH_CONFIG.COUNTSTOP and
GIC_SH_COUNTER.
The MIPS GIC interval timer supports a Count register of up to 64 bits, but
in this implementation it is limited to 32 bits.
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com> Signed-off-by: Leon Alrae <leon.alrae@imgtec.com>
Improve capability checks (count keys and buttons), store results.
Merge the input_linux_event_mouse and input_linux_event_keyboard
functions into one, dispatch into input_linux_handle_mouse and
input_linux_handle_keyboard depending on device capabilities.
Allow calling both handle functions, so we can handle mice which
also send key events, by routing those key events to the keyboard.
Gerd Hoffmann [Thu, 23 Jun 2016 09:51:35 +0000 (11:51 +0200)]
input: add trace events for full queues
It isn't unusual for the queues to fill up, for example during reboot when
the guest doesn't receive events for a while. So better not to flood stderr
with alarming messages. Turn them into tracepoints instead, so they can be
enabled in case they are needed for trouble-shooting.
Keep track of button changes. Send the extended 4-byte messages for
three-button mice only when we have something to report for the middle
button. Use the short 3-byte messages (the original protocol for the
two-button Microsoft mouse) otherwise.
The msmouse chardev backend writes data without checking whether there is
enough space.
That happens to work with Linux guests, probably by pure luck, because the
Linux driver enables the FIFO and the serial port emulation accepts more data
than announced via qemu_chr_be_can_write() in that case.
Handle this properly by adding a buffer to MouseState. Hook up a
CharDriverState->accept_input() handler which feeds the buffer to the serial
port. msmouse_event() now only fills the buffer, and also calls the
accept_input handler to kick off the transmission.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com> Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1467625375-31774-3-git-send-email-kraxel@redhat.com
ui: avoid crash if vnc client disconnects with writes pending
The vnc_client_read() function is called from the vnc_client_io()
event handler callback when there is incoming data to process.
If it detects that the client has disconnected, then it will
trigger cleanup and free'ing of the VncState client struct at
a safe time.
Unfortunately, the vnc_client_io() event handler will also call
vnc_client_write() to handle any outgoing data writes. So if
vnc_client_io() was invoked with both G_IO_IN and G_IO_OUT
events set, and the client disconnects, we may try to write to
a client which has just been freed.
Peter Lieven [Thu, 30 Jun 2016 10:00:46 +0000 (12:00 +0200)]
vnc-enc-tight: use thread local storage for palette
Currently the color counting palette is allocated from the heap, used, and
destroyed for each single subrect. Use a static per-thread palette for this
purpose and avoid the malloc and free for each update.
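A minimal sketch of the thread-local-storage pattern described here
(illustrative only; not the actual vnc-enc-tight code):
    #include <stdio.h>

    typedef struct {
        int count[256];             /* toy stand-in for the color palette */
    } Palette;

    /* One palette per thread, reused for every subrect instead of paying a
     * malloc/free pair per update.  (GCC/Clang __thread; C11 spells it
     * _Thread_local.) */
    static __thread Palette tls_palette;

    static Palette *palette_get(void)
    {
        return &tls_palette;
    }

    int main(void)
    {
        palette_get()->count[42]++;
        printf("%d\n", palette_get()->count[42]);   /* prints: 1 */
        return 0;
    }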
Signed-off-by: Peter Lieven <pl@kamp.de> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1467280846-9674-1-git-send-email-pl@kamp.de Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
vnc: fix incorrect checking condition when updating client
vs->disconnecting is set to TRUE and vs->ioc is closed, but vs->ioc isn't set
to NULL, so vnc_disconnect_finish() isn't invoked when the client is updated
in vnc_update_client() after vnc_disconnect_start() has been invoked. Change
the checking condition to avoid the resource leak.
Signed-off-by: Haibin Wang <wanghaibin.wang@huawei.com> Signed-off-by: Gonglei <arei.gonglei@huawei.com> Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1467949056-81208-1-git-send-email-arei.gonglei@huawei.com Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Peter Maydell [Mon, 11 Jul 2016 17:46:38 +0000 (18:46 +0100)]
Merge remote-tracking branch 'remotes/cohuck/tags/s390x-20160711' into staging
Last round of s390x patches for 2.7:
- A large update of the s390x PCI code, bringing it in line with
the architecture
- Fixes and improvements in the ipl (boot) code
- Refactoring in the css code
* remotes/rth/tags/pull-tcg-20160708:
translate-all: Fix user-mode self-modifying code in 2 page long TB
cputlb: Fix for self-modifying writes across page boundaries
cputlb: Add address parameter to VICTIM_TLB_HIT
cputlb: Move VICTIM_TLB_HIT out of line
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Peter Maydell [Mon, 11 Jul 2016 14:08:47 +0000 (15:08 +0100)]
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-pull-request' into staging
x86 and machine queue, 2016-07-07
Highlights:
* Improvements on global property error handling
* Translate -cpu options to global properties
* LMCE support
# gpg: Signature made Thu 07 Jul 2016 20:59:01 BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-pull-request:
target-i386: Enable LMCE for '-cpu host' if supported by host
target-i386: Publish advised value of MSR_IA32_FEATURE_CONTROL via fw_cfg
target-i386: kvm: Add basic Intel LMCE support
target-i386: Report hyperv feature words through qom
target-i386: Show host and VM TSC frequencies on mismatch
pc: Parse CPU features only once
arm: virt: Parse cpu_model only once
cpu: Use CPUClass->parse_features() as convertor to global properties
target-i386: Avoid using locals outside their scope
target-i386: TCG can support CPUID.07H:EBX.erms
target-sparc: Use sparc_cpu_parse_features() directly
vl: Set errp to &error_abort on machine compat_props
machine: Add machine_register_compat_props() function
qdev: GlobalProperty.errp field
qdev: Eliminate qemu_add_globals() function
qdev: Don't stop applying globals on first error
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit "9d8256e virgl: pass whole GL scanout dimensions" missed the
opengl code path for gtk versions >= 3.16. Update that one too and
fix the build with recent gtk versions.
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 1467876563-1351-1-git-send-email-kraxel@redhat.com Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Yi Min Zhao [Wed, 27 Apr 2016 09:44:17 +0000 (17:44 +0800)]
s390x/pci: make hot-unplug handler smoother
The current implementation of the hot-unplug handler is abrupt: any PCI
operation is simply rejected if the PCI device is unconfigured. Thus a PCI
device cannot be reset or destroyed in a correct, smooth and safe way.
Improve this as follows:
- Notify the guest via a HP_EVENT_DECONFIGURE_REQUEST(0x303) event in
the unplug handler, giving it a chance to deconfigure the device via
sclp and allowing us to continue hot-unplug afterwards.
- Set up a timer that will generate the HP_EVENT_CONFIGURE_TO_STBRES
(0x304) event as before if the guest did not react after an adequate
time.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com> Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Yi Min Zhao [Fri, 6 May 2016 10:44:40 +0000 (18:44 +0800)]
s390x/pci: replace fid with idx in msg data of msix
The present code uses the fid as part of the MSI-X message data for looking
up the specific zpci device. However, this limits the usable range of the
fid, and the lookup of the zpci device may fail due to truncation of the fid.
In addition, the fh is composed of the enabled bit, FH_VIRT and the array
index, so we can use the array index as the identifier to store in the
message data instead.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com> Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Yi Min Zhao [Wed, 15 Jun 2016 09:09:10 +0000 (17:09 +0800)]
s390x/pci: fix stpcifc_service_call
First, the function is missing the dmaas check; this patch adds it.
Second, the function uses s390_pci_find_dev_by_fh() to look up the zpci
device. This may fail if the guest provides a valid but disabled fh while
the fh of the associated zpci device is enabled. Thus we use
s390_pci_find_dev_by_idx() instead.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com> Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Yi Min Zhao [Fri, 3 Jun 2016 07:16:01 +0000 (15:16 +0800)]
s390x/pci: refactor list_pci
Because of the refactoring of s390_pci_find_dev_by_idx(), list_pci() needs
to be updated. We introduce a new function to get the next available zpci
device, which simplifies the code for looking up zpci devices.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com> Acked-by: Pierre Morel <pmorel@linux.vnet.ibm.com> Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Yi Min Zhao [Fri, 3 Jun 2016 06:17:59 +0000 (14:17 +0800)]
s390x/pci: refactor s390_pci_find_dev_by_idx
s390_pci_find_dev_by_idx() only indexes usable zpci devices. This implies
that the index value of each zpci device is dynamic and may change if a new
zpci device is plugged. So we have to use a constant index to look up the
device.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com> Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Yi Min Zhao [Wed, 15 Jun 2016 09:02:36 +0000 (17:02 +0800)]
s390x/pci: add checkings in CLP_SET_PCI_FN
The code in the CLP_SET_PCI_FN case is missing some checks. Let's add
them.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com> Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Yi Min Zhao [Fri, 13 May 2016 04:50:09 +0000 (12:50 +0800)]
s390x/pci: enable zpci hot-plug/hot-unplug
We need to support hot-plug/hot-unplug for the new zpci devices as well.
This patch enables the present hot-plug/hot-unplug handlers to support not
only generic PCI devices but also zpci devices.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com> Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com> Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>