- * qemuMonitorPrivatePtr: Job condition
+ * qemuMonitorPrivatePtr: Job conditions
Since virDomainObjPtr lock must not be held during sleeps, the job
- condition provides additional protection for code making updates.
+ conditions provide additional protection for code making updates.
+
+ The qemu driver uses two kinds of job conditions: asynchronous and
+ normal.
+
+ The asynchronous job condition is used for long-running jobs (such
+ as migration) that consist of several monitor commands and for which
+ it is desirable to allow a limited set of other monitor commands to
+ be called while the job is running. This allows clients to, e.g.,
+ query statistical data, cancel the job, or change its parameters.
+
+ The normal job condition is used by all other jobs to get exclusive
+ access to the monitor, and also by every monitor command issued by
+ an asynchronous job. When acquiring the normal job condition, the
+ job must specify what kind of action it is about to take; this is
+ checked against the set of jobs allowed by the asynchronous job, if
+ one is running. If the job is incompatible with the current
+ asynchronous job, it has to wait until the asynchronous job ends and
+ then try to acquire the normal job condition again.
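+
+ For illustration, a rough sketch of this compatibility check,
+ assuming a hypothetical async job that only permits query jobs to
+ run beside it (the APIs used here are described below):
+
+ /* thread running the async job */
+ qemuDomainObjBeginAsyncJobWithDriver(driver, obj, QEMU_ASYNC_JOB_TYPE);
+ qemuDomainObjSetAsyncJobMask(obj, JOB_MASK(QEMU_JOB_QUERY));
+
+ /* another thread: QUERY is in the allowed mask, so this proceeds */
+ qemuDomainObjBeginJob(obj, QEMU_JOB_QUERY);
+
+ /* yet another thread: MODIFY is not allowed, so this waits (or times
+ out) until the async job calls qemuDomainObjEndAsyncJob() */
+ qemuDomainObjBeginJob(obj, QEMU_JOB_MODIFY);
+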
Immediately after acquiring the virDomainObjPtr lock, any method
- which intends to update state must acquire the job condition. The
- virDomainObjPtr lock is released while blocking on this condition
- variable. Once the job condition is acquired, a method can safely
- release the virDomainObjPtr lock whenever it hits a piece of code
- which may sleep/wait, and re-acquire it after the sleep/wait.
+ which intends to update state must acquire either the asynchronous
+ or the normal job condition. The virDomainObjPtr lock is released
+ while blocking on these condition variables. Once the job condition
+ is acquired, a method can safely release the virDomainObjPtr lock
+ whenever it hits a piece of code which may sleep/wait, and
+ re-acquire it after the sleep/wait. Whenever an asynchronous job
+ wants to talk to the monitor, it needs to acquire a nested job (a
+ special kind of normal job) to obtain exclusive access to the
+ monitor, as the examples below show.
Since the virDomainObjPtr lock was dropped while waiting for the
job condition, it is possible that the domain is no longer active
-To acquire the job mutex
+To acquire the normal job condition
qemuDomainObjBeginJob() (if driver is unlocked)
- Increments ref count on virDomainObjPtr
- - Wait qemuDomainObjPrivate condition 'jobActive != 0' using
- virDomainObjPtr mutex
- - Sets jobActive to 1
+ - Waits until the job is compatible with the current async job or no
+ async job is running
+ - Waits on job.cond condition 'job.active != 0' using virDomainObjPtr
+ mutex
+ - Rechecks if the job is still compatible and repeats waiting if it
+ isn't
+ - Sets job.active to the job type
qemuDomainObjBeginJobWithDriver() (if driver needs to be locked)
- - Unlocks driver
- Increments ref count on virDomainObjPtr
- - Wait qemuDomainObjPrivate condition 'jobActive != 0' using
- virDomainObjPtr mutex
- - Sets jobActive to 1
+ - Unlocks driver
+ - Waits until the job is compatible with the current async job or no
+ async job is running
+ - Waits on job.cond condition 'job.active != 0' using virDomainObjPtr
+ mutex
+ - Rechecks if the job is still compatible and repeats waiting if it
+ isn't
+ - Sets job.active to the job type
- Unlocks virDomainObjPtr
- Locks driver
- Locks virDomainObjPtr
- NB: this variant is required in order to comply with lock ordering rules
- for virDomainObjPtr vs driver
+ NB: this variant is required in order to comply with lock ordering
+ rules for virDomainObjPtr vs driver
qemuDomainObjEndJob()
- - Set jobActive to 0
- - Signal on qemuDomainObjPrivate condition
+ - Sets job.active to 0
+ - Signals on job.cond condition
+ - Decrements ref count on virDomainObjPtr
+
+
+
+To acquire the asynchronous job condition
+
+ qemuDomainObjBeginAsyncJob() (if driver is unlocked)
+ - Increments ref count on virDomainObjPtr
+ - Waits until no async job is running
+ - Waits on job.cond condition 'job.active != 0' using virDomainObjPtr
+ mutex
+ - Rechecks if any async job was started while waiting on job.cond
+ and repeats waiting in that case
+ - Sets job.asyncJob to the asynchronous job type
+
+ qemuDomainObjBeginAsyncJobWithDriver() (if driver needs to be locked)
+ - Increments ref count on virDomainObjPtr
+ - Unlocks driver
+ - Waits until no async job is running
+ - Waits on job.cond condition 'job.active != 0' using virDomainObjPtr
+ mutex
+ - Rechecks if any async job was started while waiting on job.cond
+ and repeats waiting in that case
+ - Sets job.asyncJob to the asynchronous job type
+ - Unlocks virDomainObjPtr
+ - Locks driver
+ - Locks virDomainObjPtr
+
+ NB: this variant is required in order to comply with lock ordering
+ rules for virDomainObjPtr vs driver
+
+
+ qemuDomainObjEndAsyncJob()
+ - Sets job.asyncJob to 0
+ - Broadcasts on job.asyncCond condition
- Decrements ref count on virDomainObjPtr
NB: caller must take care to drop the driver lock if necessary
+ These functions automatically begin/end a nested job if called inside
+ an asynchronous job. The caller must then check the return value of
+ qemuDomainObjEnterMonitor to detect if the domain died while waiting
+ on the nested job.
+
To acquire the QEMU monitor lock with the driver lock held
NB: caller must take care to drop the driver lock if necessary
+ These functions automatically begin/end a nested job if called inside
+ an asynchronous job. The caller must then check the return value of
+ qemuDomainObjEnterMonitorWithDriver to detect if the domain died
+ while waiting on the nested job.
+
To keep a domain alive while waiting on a remote command, starting
with the driver lock held
obj = virDomainFindByUUID(driver->domains, dom->uuid);
qemuDriverUnlock(driver);
- qemuDomainObjBeginJob(obj);
+ qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);
...do work...
obj = virDomainFindByUUID(driver->domains, dom->uuid);
qemuDriverUnlock(driver);
- qemuDomainObjBeginJob(obj);
+ qemuDomainObjBeginJob(obj, QEMU_JOB_TYPE);
...do prep work...
if (virDomainObjIsActive(vm)) {
- qemuDomainObjEnterMonitor(obj);
+ ignore_value(qemuDomainObjEnterMonitor(obj));
qemuMonitorXXXX(priv->mon);
qemuDomainObjExitMonitor(obj);
}
qemuDriverLock(driver);
obj = virDomainFindByUUID(driver->domains, dom->uuid);
- qemuDomainObjBeginJobWithDriver(obj);
+ qemuDomainObjBeginJobWithDriver(driver, obj, QEMU_JOB_TYPE);
...do prep work...
if (virDomainObjIsActive(vm)) {
- qemuDomainObjEnterMonitorWithDriver(driver, obj);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, obj));
qemuMonitorXXXX(priv->mon);
qemuDomainObjExitMonitorWithDriver(driver, obj);
}
qemuDriverUnlock(driver);
- * Coordinating with a remote server for migraion
+ * Running an asynchronous job
+
+ virDomainObjPtr obj;
+ qemuDomainObjPrivatePtr priv;
+
+ qemuDriverLock(driver);
+ obj = virDomainFindByUUID(driver->domains, dom->uuid);
+
+ qemuDomainObjBeginAsyncJobWithDriver(driver, obj, QEMU_ASYNC_JOB_TYPE);
+ qemuDomainObjSetAsyncJobMask(obj, allowedJobs);
+
+ ...do prep work...
+
+ if (qemuDomainObjEnterMonitorWithDriver(driver, obj) < 0) {
+ /* domain died in the meantime */
+ goto error;
+ }
+ ...start qemu job...
+ qemuDomainObjExitMonitorWithDriver(driver, obj);
+
+ while (!finished) {
+ if (qemuDomainObjEnterMonitorWithDriver(driver, obj) < 0) {
+ /* domain died in the meantime */
+ goto error;
+ }
+ ...monitor job progress...
+ qemuDomainObjExitMonitorWithDriver(driver, obj);
+
+ virDomainObjUnlock(obj);
+ sleep(aWhile);
+ virDomainObjLock(obj);
+ }
+
+ ...do final work...
+
+ qemuDomainObjEndAsyncJob(obj);
+ virDomainObjUnlock(obj);
+ qemuDriverUnlock(driver);
+
+
+ * Coordinating with a remote server for migration
virDomainObjPtr obj;
qemuDomainObjPrivatePtr priv;
qemuDriverLock(driver);
obj = virDomainFindByUUID(driver->domains, dom->uuid);
- qemuDomainObjBeginJobWithDriver(obj);
+ qemuDomainObjBeginAsyncJobWithDriver(driver, obj, QEMU_ASYNC_JOB_TYPE);
...do prep work...
...do final work...
- qemuDomainObjEndJob(obj);
+ qemuDomainObjEndAsyncJob(obj);
virDomainObjUnlock(obj);
qemuDriverUnlock(driver);
if (virCondInit(&priv->job.cond) < 0)
return -1;
+ if (virCondInit(&priv->job.asyncCond) < 0) {
+ ignore_value(virCondDestroy(&priv->job.cond));
+ return -1;
+ }
+
if (virCondInit(&priv->job.signalCond) < 0) {
ignore_value(virCondDestroy(&priv->job.cond));
+ ignore_value(virCondDestroy(&priv->job.asyncCond));
return -1;
}
struct qemuDomainJobObj *job = &priv->job;
job->active = QEMU_JOB_NONE;
+}
+
+static void
+qemuDomainObjResetAsyncJob(qemuDomainObjPrivatePtr priv)
+{
+ struct qemuDomainJobObj *job = &priv->job;
+
+ job->asyncJob = QEMU_ASYNC_JOB_NONE;
+ job->mask = DEFAULT_JOB_MASK;
job->start = 0;
memset(&job->info, 0, sizeof(job->info));
job->signals = 0;
qemuDomainObjFreeJob(qemuDomainObjPrivatePtr priv)
{
ignore_value(virCondDestroy(&priv->job.cond));
+ ignore_value(virCondDestroy(&priv->job.asyncCond));
ignore_value(virCondDestroy(&priv->job.signalCond));
}
}
void
-qemuDomainObjDiscardJob(virDomainObjPtr obj)
+qemuDomainObjSetAsyncJobMask(virDomainObjPtr obj,
+ unsigned long long allowedJobs)
{
qemuDomainObjPrivatePtr priv = obj->privateData;
- qemuDomainObjResetJob(priv);
- qemuDomainObjSetJob(obj, QEMU_JOB_NONE);
+ if (!priv->job.asyncJob)
+ return;
+
+ priv->job.mask = allowedJobs | JOB_MASK(QEMU_JOB_DESTROY);
+}
+
+void
+qemuDomainObjDiscardAsyncJob(virDomainObjPtr obj)
+{
+ qemuDomainObjPrivatePtr priv = obj->privateData;
+
+ if (priv->job.active == QEMU_JOB_ASYNC_NESTED)
+ qemuDomainObjResetJob(priv);
+ qemuDomainObjResetAsyncJob(priv);
+}
+
+static bool
+qemuDomainJobAllowed(qemuDomainObjPrivatePtr priv, enum qemuDomainJob job)
+{
+ return !priv->job.asyncJob || (priv->job.mask & JOB_MASK(job)) != 0;
}
/* Give up waiting for mutex after 30 seconds */
static int
qemuDomainObjBeginJobInternal(struct qemud_driver *driver,
bool driver_locked,
- virDomainObjPtr obj)
+ virDomainObjPtr obj,
+ enum qemuDomainJob job,
+ enum qemuDomainAsyncJob asyncJob)
{
qemuDomainObjPrivatePtr priv = obj->privateData;
unsigned long long now;
unsigned long long then;
+ bool nested = job == QEMU_JOB_ASYNC_NESTED;
if (virTimeMs(&now) < 0)
return -1;
if (driver_locked)
qemuDriverUnlock(driver);
+retry:
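+ /* Wait here until this job type is allowed by the current async job,
+ * if any; nested jobs skip this check because they are started by the
+ * async job itself. */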
+ while (!nested && !qemuDomainJobAllowed(priv, job)) {
+ if (virCondWaitUntil(&priv->job.asyncCond, &obj->lock, then) < 0)
+ goto error;
+ }
+
while (priv->job.active) {
- if (virCondWaitUntil(&priv->job.cond, &obj->lock, then) < 0) {
- if (errno == ETIMEDOUT)
- qemuReportError(VIR_ERR_OPERATION_TIMEOUT,
- "%s", _("cannot acquire state change lock"));
- else
- virReportSystemError(errno,
- "%s", _("cannot acquire job mutex"));
- if (driver_locked) {
- virDomainObjUnlock(obj);
- qemuDriverLock(driver);
- virDomainObjLock(obj);
- }
- /* Safe to ignore value since ref count was incremented above */
- ignore_value(virDomainObjUnref(obj));
- return -1;
- }
+ if (virCondWaitUntil(&priv->job.cond, &obj->lock, then) < 0)
+ goto error;
}
+
+ /* No job is active but a new async job could have been started while obj
+ * was unlocked, so we need to recheck it. */
+ if (!nested && !qemuDomainJobAllowed(priv, job))
+ goto retry;
+
qemuDomainObjResetJob(priv);
- qemuDomainObjSetJob(obj, QEMU_JOB_UNSPECIFIED);
- priv->job.start = now;
+
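+ /* A normal or nested job only records its type; an async job also
+ * resets the async state and remembers when it started so that its
+ * progress can be reported. */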
+ if (job != QEMU_JOB_ASYNC) {
+ priv->job.active = job;
+ } else {
+ qemuDomainObjResetAsyncJob(priv);
+ priv->job.asyncJob = asyncJob;
+ priv->job.start = now;
+ }
if (driver_locked) {
virDomainObjUnlock(obj);
}
return 0;
+
+error:
+ if (errno == ETIMEDOUT)
+ qemuReportError(VIR_ERR_OPERATION_TIMEOUT,
+ "%s", _("cannot acquire state change lock"));
+ else
+ virReportSystemError(errno,
+ "%s", _("cannot acquire job mutex"));
+ if (driver_locked) {
+ virDomainObjUnlock(obj);
+ qemuDriverLock(driver);
+ virDomainObjLock(obj);
+ }
+ /* Safe to ignore value since ref count was incremented above */
+ ignore_value(virDomainObjUnref(obj));
+ return -1;
}
/*
* Upon successful return, the object will have its ref count increased,
* successful calls must be followed by EndJob eventually
*/
-int qemuDomainObjBeginJob(virDomainObjPtr obj)
+int qemuDomainObjBeginJob(virDomainObjPtr obj, enum qemuDomainJob job)
+{
+ return qemuDomainObjBeginJobInternal(NULL, false, obj, job,
+ QEMU_ASYNC_JOB_NONE);
+}
+
+int qemuDomainObjBeginAsyncJob(virDomainObjPtr obj,
+ enum qemuDomainAsyncJob asyncJob)
{
- return qemuDomainObjBeginJobInternal(NULL, false, obj);
+ return qemuDomainObjBeginJobInternal(NULL, false, obj, QEMU_JOB_ASYNC,
+ asyncJob);
}
/*
* successful calls must be followed by EndJob eventually
*/
int qemuDomainObjBeginJobWithDriver(struct qemud_driver *driver,
- virDomainObjPtr obj)
+ virDomainObjPtr obj,
+ enum qemuDomainJob job)
{
- return qemuDomainObjBeginJobInternal(driver, true, obj);
+ if (job <= QEMU_JOB_NONE || job >= QEMU_JOB_ASYNC) {
+ qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Attempt to start invalid job"));
+ return -1;
+ }
+
+ return qemuDomainObjBeginJobInternal(driver, true, obj, job,
+ QEMU_ASYNC_JOB_NONE);
+}
+
+int qemuDomainObjBeginAsyncJobWithDriver(struct qemud_driver *driver,
+ virDomainObjPtr obj,
+ enum qemuDomainAsyncJob asyncJob)
+{
+ return qemuDomainObjBeginJobInternal(driver, true, obj, QEMU_JOB_ASYNC,
+ asyncJob);
+}
+
+/*
+ * Use this to protect monitor sections within an active async job.
+ *
+ * The caller must call qemuDomainObjBeginAsyncJob{,WithDriver} before it can
+ * use this method. Never use this method if you only own a non-async job; use
+ * qemuDomainObjBeginJob{,WithDriver} instead.
+ */
+int
+qemuDomainObjBeginNestedJob(virDomainObjPtr obj)
+{
+ return qemuDomainObjBeginJobInternal(NULL, false, obj,
+ QEMU_JOB_ASYNC_NESTED,
+ QEMU_ASYNC_JOB_NONE);
+}
+
+int
+qemuDomainObjBeginNestedJobWithDriver(struct qemud_driver *driver,
+ virDomainObjPtr obj)
+{
+ return qemuDomainObjBeginJobInternal(driver, true, obj,
+ QEMU_JOB_ASYNC_NESTED,
+ QEMU_ASYNC_JOB_NONE);
}
/*
qemuDomainObjPrivatePtr priv = obj->privateData;
qemuDomainObjResetJob(priv);
- qemuDomainObjSetJob(obj, QEMU_JOB_NONE);
virCondSignal(&priv->job.cond);
return virDomainObjUnref(obj);
}
+int
+qemuDomainObjEndAsyncJob(virDomainObjPtr obj)
+{
+ qemuDomainObjPrivatePtr priv = obj->privateData;
+
+ qemuDomainObjResetAsyncJob(priv);
+ virCondBroadcast(&priv->job.asyncCond);
+
+ return virDomainObjUnref(obj);
+}
+
+void
+qemuDomainObjEndNestedJob(virDomainObjPtr obj)
+{
+ qemuDomainObjPrivatePtr priv = obj->privateData;
+
+ qemuDomainObjResetJob(priv);
+ virCondSignal(&priv->job.cond);
+
+ /* safe to ignore since the surrounding async job increased the reference
+ * counter as well */
+ ignore_value(virDomainObjUnref(obj));
+}
+
+
-static void
+static int
qemuDomainObjEnterMonitorInternal(struct qemud_driver *driver,
virDomainObjPtr obj)
{
qemuDomainObjPrivatePtr priv = obj->privateData;
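+ /* If we are inside an async job and do not hold a normal job yet,
+ * start a nested job first so that access to the monitor remains
+ * exclusive. */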
+ if (priv->job.active == QEMU_JOB_NONE && priv->job.asyncJob) {
+ if (qemuDomainObjBeginNestedJob(obj) < 0)
+ return -1;
+ if (!virDomainObjIsActive(obj)) {
+ qemuReportError(VIR_ERR_OPERATION_FAILED, "%s",
+ _("domain is no longer running"));
+ return -1;
+ }
+ }
+
qemuMonitorLock(priv->mon);
qemuMonitorRef(priv->mon);
ignore_value(virTimeMs(&priv->monStart));
virDomainObjUnlock(obj);
if (driver)
qemuDriverUnlock(driver);
+
+ return 0;
}
static void
if (refs == 0) {
priv->mon = NULL;
}
+
+ if (priv->job.active == QEMU_JOB_ASYNC_NESTED)
+ qemuDomainObjEndNestedJob(obj);
}
/*
* obj must be locked before calling, qemud_driver must be unlocked
*
* To be called immediately before any QEMU monitor API call
- * Must have already called qemuDomainObjBeginJob(), and checked
- * that the VM is still active.
+ * Must have already either called qemuDomainObjBeginJob() and checked
+ * that the VM is still active, or called qemuDomainObjBeginAsyncJob(),
+ * in which case this will call qemuDomainObjBeginNestedJob().
*
* To be followed with qemuDomainObjExitMonitor() once complete
*/
-void qemuDomainObjEnterMonitor(virDomainObjPtr obj)
+int qemuDomainObjEnterMonitor(virDomainObjPtr obj)
{
- qemuDomainObjEnterMonitorInternal(NULL, obj);
+ return qemuDomainObjEnterMonitorInternal(NULL, obj);
}
/* obj must NOT be locked before calling, qemud_driver must be unlocked
* obj must be locked before calling, qemud_driver must be locked
*
* To be called immediately before any QEMU monitor API call
- * Must have already called qemuDomainObjBeginJob().
+ * Must have already either called qemuDomainObjBeginJobWithDriver() and
+ * checked that the VM is still active, or called qemuDomainObjBeginAsyncJob(),
+ * in which case this will call qemuDomainObjBeginNestedJobWithDriver().
*
* To be followed with qemuDomainObjExitMonitorWithDriver() once complete
*/
-void qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
- virDomainObjPtr obj)
+int qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
+ virDomainObjPtr obj)
{
- qemuDomainObjEnterMonitorInternal(driver, obj);
+ return qemuDomainObjEnterMonitorInternal(driver, obj);
}
/* obj must NOT be locked before calling, qemud_driver must be unlocked,
(1 << VIR_DOMAIN_VIRT_KVM) | \
(1 << VIR_DOMAIN_VIRT_XEN))
+# define JOB_MASK(job) (1 << (job - 1))
+# define DEFAULT_JOB_MASK \
+ (JOB_MASK(QEMU_JOB_QUERY) | JOB_MASK(QEMU_JOB_DESTROY))
+
/* Only 1 job is allowed at any time
* A job includes *all* monitor commands, even those just querying
* information, not merely actions */
enum qemuDomainJob {
QEMU_JOB_NONE = 0, /* Always set to 0 for easy if (jobActive) conditions */
- QEMU_JOB_UNSPECIFIED,
- QEMU_JOB_MIGRATION_OUT,
- QEMU_JOB_MIGRATION_IN,
- QEMU_JOB_SAVE,
- QEMU_JOB_DUMP,
+ QEMU_JOB_QUERY, /* Doesn't change any state */
+ QEMU_JOB_DESTROY, /* Destroys the domain (cannot be masked out) */
+ QEMU_JOB_SUSPEND, /* Suspends (stops vCPUs) the domain */
+ QEMU_JOB_MODIFY, /* May change state */
+
+ /* The following two items must always be the last items */
+ QEMU_JOB_ASYNC, /* Asynchronous job */
+ QEMU_JOB_ASYNC_NESTED, /* Normal job within an async job */
+};
+
+/* An async job consists of a series of jobs that may change state. Independent
+ * jobs that do not change state (and possibly others if explicitly allowed by
+ * the current async job) are allowed to run even when an async job is active.
+ */
+enum qemuDomainAsyncJob {
+ QEMU_ASYNC_JOB_NONE = 0,
+ QEMU_ASYNC_JOB_MIGRATION_OUT,
+ QEMU_ASYNC_JOB_MIGRATION_IN,
+ QEMU_ASYNC_JOB_SAVE,
+ QEMU_ASYNC_JOB_DUMP,
};
enum qemuDomainJobSignals {
};
struct qemuDomainJobObj {
- virCond cond; /* Use in conjunction with main virDomainObjPtr lock */
- virCond signalCond; /* Use to coordinate the safe queries during migration */
-
- enum qemuDomainJob active; /* Currently running job */
+ virCond cond; /* Use to coordinate jobs */
+ enum qemuDomainJob active; /* Currently running job */
- unsigned long long start; /* When the job started */
- virDomainJobInfo info; /* Progress data */
+ virCond asyncCond; /* Use to coordinate with async jobs */
+ enum qemuDomainAsyncJob asyncJob; /* Currently active async job */
+ unsigned long long mask; /* Jobs allowed during async job */
+ unsigned long long start; /* When the async job started */
+ virDomainJobInfo info; /* Async job progress data */
+ virCond signalCond; /* Use to coordinate the safe queries during migration */
unsigned int signals; /* Signals for running job */
struct qemuDomainJobSignalsData signalsData; /* Signal specific data */
};
void qemuDomainSetPrivateDataHooks(virCapsPtr caps);
void qemuDomainSetNamespaceHooks(virCapsPtr caps);
-int qemuDomainObjBeginJob(virDomainObjPtr obj) ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjBeginJob(virDomainObjPtr obj,
+ enum qemuDomainJob job)
+ ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjBeginAsyncJob(virDomainObjPtr obj,
+ enum qemuDomainAsyncJob asyncJob)
+ ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjBeginNestedJob(virDomainObjPtr obj)
+ ATTRIBUTE_RETURN_CHECK;
int qemuDomainObjBeginJobWithDriver(struct qemud_driver *driver,
- virDomainObjPtr obj) ATTRIBUTE_RETURN_CHECK;
-int qemuDomainObjEndJob(virDomainObjPtr obj) ATTRIBUTE_RETURN_CHECK;
+ virDomainObjPtr obj,
+ enum qemuDomainJob job)
+ ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjBeginAsyncJobWithDriver(struct qemud_driver *driver,
+ virDomainObjPtr obj,
+ enum qemuDomainAsyncJob asyncJob)
+ ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjBeginNestedJobWithDriver(struct qemud_driver *driver,
+ virDomainObjPtr obj)
+ ATTRIBUTE_RETURN_CHECK;
+
+int qemuDomainObjEndJob(virDomainObjPtr obj)
+ ATTRIBUTE_RETURN_CHECK;
+int qemuDomainObjEndAsyncJob(virDomainObjPtr obj)
+ ATTRIBUTE_RETURN_CHECK;
+void qemuDomainObjEndNestedJob(virDomainObjPtr obj);
void qemuDomainObjSetJob(virDomainObjPtr obj, enum qemuDomainJob job);
-void qemuDomainObjDiscardJob(virDomainObjPtr obj);
+void qemuDomainObjSetAsyncJobMask(virDomainObjPtr obj,
+ unsigned long long allowedJobs);
+void qemuDomainObjDiscardAsyncJob(virDomainObjPtr obj);
-void qemuDomainObjEnterMonitor(virDomainObjPtr obj);
+int qemuDomainObjEnterMonitor(virDomainObjPtr obj)
+ ATTRIBUTE_RETURN_CHECK;
void qemuDomainObjExitMonitor(virDomainObjPtr obj);
-void qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
- virDomainObjPtr obj);
+int qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
+ virDomainObjPtr obj)
+ ATTRIBUTE_RETURN_CHECK;
void qemuDomainObjExitMonitorWithDriver(struct qemud_driver *driver,
virDomainObjPtr obj);
void qemuDomainObjEnterRemoteWithDriver(struct qemud_driver *driver,
virDomainObjLock(vm);
virResetLastError();
- if (qemuDomainObjBeginJobWithDriver(data->driver, vm) < 0) {
+ if (qemuDomainObjBeginJobWithDriver(data->driver, vm,
+ QEMU_JOB_MODIFY) < 0) {
err = virGetLastError();
VIR_ERROR(_("Failed to start job on VM '%s': %s"),
vm->def->name,
def = NULL;
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup; /* XXXX free the 'vm' we created ? */
if (qemuProcessStart(conn, driver, vm, NULL,
priv = vm->privateData;
- if (priv->job.active == QEMU_JOB_MIGRATION_OUT) {
+ if (priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT) {
if (virDomainObjGetState(vm, NULL) != VIR_DOMAIN_PAUSED) {
VIR_DEBUG("Requesting domain pause on %s",
vm->def->name);
ret = 0;
goto cleanup;
} else {
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_SUSPEND) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm)) {
goto cleanup;
}
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm)) {
goto cleanup;
}
- if (qemuDomainObjBeginJob(vm) < 0)
+ if (qemuDomainObjBeginJob(vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm)) {
}
priv = vm->privateData;
- qemuDomainObjEnterMonitor(vm);
+ ignore_value(qemuDomainObjEnterMonitor(vm));
ret = qemuMonitorSystemPowerdown(priv->mon);
qemuDomainObjExitMonitor(vm);
#if HAVE_YAJL
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_MONITOR_JSON)) {
- if (qemuDomainObjBeginJob(vm) < 0)
+ if (qemuDomainObjBeginJob(vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm)) {
goto endjob;
}
- qemuDomainObjEnterMonitor(vm);
+ ignore_value(qemuDomainObjEnterMonitor(vm));
ret = qemuMonitorSystemPowerdown(priv->mon);
qemuDomainObjExitMonitor(vm);
*/
qemuProcessKill(vm);
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_DESTROY) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm)) {
goto cleanup;
}
- if (qemuDomainObjBeginJob(vm) < 0)
+ if (qemuDomainObjBeginJob(vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
isActive = virDomainObjIsActive(vm);
if (flags & VIR_DOMAIN_AFFECT_LIVE) {
priv = vm->privateData;
- qemuDomainObjEnterMonitor(vm);
+ ignore_value(qemuDomainObjEnterMonitor(vm));
r = qemuMonitorSetBalloon(priv->mon, newmem);
qemuDomainObjExitMonitor(vm);
virDomainAuditMemory(vm, vm->def->mem.cur_balloon, newmem, "update",
priv = vm->privateData;
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorInjectNMI(priv->mon);
qemuDomainObjExitMonitorWithDriver(driver, vm);
if (qemuDomainObjEndJob(vm) == 0) {
(vm->def->memballoon->model == VIR_DOMAIN_MEMBALLOON_MODEL_NONE)) {
info->memory = vm->def->mem.max_balloon;
} else if (!priv->job.active) {
- if (qemuDomainObjBeginJob(vm) < 0)
+ if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm))
err = 0;
else {
- qemuDomainObjEnterMonitor(vm);
+ ignore_value(qemuDomainObjEnterMonitor(vm));
err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
qemuDomainObjExitMonitor(vm);
}
priv = vm->privateData;
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
+ QEMU_ASYNC_JOB_SAVE) < 0)
goto cleanup;
- qemuDomainObjSetJob(vm, QEMU_JOB_SAVE);
-
memset(&priv->job.info, 0, sizeof(priv->job.info));
priv->job.info.type = VIR_DOMAIN_JOB_UNBOUNDED;
VIR_DOMAIN_EVENT_STOPPED,
VIR_DOMAIN_EVENT_STOPPED_SAVED);
if (!vm->persistent) {
- if (qemuDomainObjEndJob(vm) > 0)
+ if (qemuDomainObjEndAsyncJob(vm) > 0)
virDomainRemoveInactive(&driver->domains,
vm);
vm = NULL;
VIR_WARN("Unable to resume guest CPUs after save failure");
}
}
- if (qemuDomainObjEndJob(vm) == 0)
+ if (qemuDomainObjEndAsyncJob(vm) == 0)
vm = NULL;
}
}
priv = vm->privateData;
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
+ QEMU_ASYNC_JOB_DUMP) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm)) {
goto endjob;
}
- qemuDomainObjSetJob(vm, QEMU_JOB_DUMP);
-
/* Migrate will always stop the VM, so the resume condition is
independent of whether the stop command is issued. */
resume = virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING;
}
}
- if (qemuDomainObjEndJob(vm) == 0)
+ if (qemuDomainObjEndAsyncJob(vm) == 0)
vm = NULL;
else if ((ret == 0) && (flags & VIR_DUMP_CRASH) && !vm->persistent) {
virDomainRemoveInactive(&driver->domains,
priv = vm->privateData;
- if (qemuDomainObjBeginJob(vm) < 0)
+ if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm)) {
virSecurityManagerSetSavedStateLabel(qemu_driver->securityManager, vm, tmp);
- qemuDomainObjEnterMonitor(vm);
+ ignore_value(qemuDomainObjEnterMonitor(vm));
if (qemuMonitorScreendump(priv->mon, tmp) < 0) {
qemuDomainObjExitMonitor(vm);
goto endjob;
goto unlock;
}
- if (qemuDomainObjBeginJobWithDriver(driver, wdEvent->vm) < 0) {
+ if (qemuDomainObjBeginAsyncJobWithDriver(driver, wdEvent->vm,
+ QEMU_ASYNC_JOB_DUMP) < 0) {
VIR_FREE(dumpfile);
goto unlock;
}
/* Safe to ignore value since ref count was incremented in
* qemuProcessHandleWatchdog().
*/
- ignore_value(qemuDomainObjEndJob(wdEvent->vm));
+ ignore_value(qemuDomainObjEndAsyncJob(wdEvent->vm));
unlock:
if (virDomainObjUnref(wdEvent->vm) > 0)
int oldvcpus = vm->def->vcpus;
int vcpus = oldvcpus;
- qemuDomainObjEnterMonitor(vm);
+ ignore_value(qemuDomainObjEnterMonitor(vm));
/* We need different branches here, because we want to offline
* in reverse order to onlining, so any partial fail leaves us in a
goto cleanup;
}
- if (qemuDomainObjBeginJob(vm) < 0)
+ if (qemuDomainObjBeginJob(vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm) && (flags & VIR_DOMAIN_AFFECT_LIVE)) {
}
def = NULL;
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
ret = qemuDomainSaveImageStartVM(conn, driver, vm, &fd, &header, path);
/* Don't delay if someone's using the monitor, just use
* existing most recent data instead */
if (!priv->job.active) {
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_QUERY) < 0)
goto cleanup;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
err = qemuMonitorGetBalloonInfo(priv->mon, &balloon);
qemuDomainObjExitMonitorWithDriver(driver, vm);
if (qemuDomainObjEndJob(vm) == 0) {
goto cleanup;
}
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
if (virDomainObjIsActive(vm)) {
goto cleanup;
}
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
if (virDomainObjIsActive(vm)) {
}
priv = vm->privateData;
- if ((priv->job.active == QEMU_JOB_MIGRATION_OUT)
- || (priv->job.active == QEMU_JOB_SAVE)) {
+ if ((priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT)
+ || (priv->job.asyncJob == QEMU_ASYNC_JOB_SAVE)) {
virDomainObjRef(vm);
while (priv->job.signals & QEMU_JOB_SIGNAL_BLKSTAT)
ignore_value(virCondWait(&priv->job.signalCond, &vm->lock));
if (virDomainObjUnref(vm) == 0)
vm = NULL;
} else {
- if (qemuDomainObjBeginJob(vm) < 0)
+ if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm)) {
goto endjob;
}
- qemuDomainObjEnterMonitor(vm);
+ ignore_value(qemuDomainObjEnterMonitor(vm));
ret = qemuMonitorGetBlockStatsInfo(priv->mon,
disk->info.alias,
&stats->rd_req,
goto cleanup;
}
- if (qemuDomainObjBeginJob(vm) < 0)
+ if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
goto cleanup;
if (virDomainObjIsActive(vm)) {
qemuDomainObjPrivatePtr priv = vm->privateData;
- qemuDomainObjEnterMonitor(vm);
+ ignore_value(qemuDomainObjEnterMonitor(vm));
ret = qemuMonitorGetMemoryStats(priv->mon, stats, nr_stats);
qemuDomainObjExitMonitor(vm);
} else {
goto cleanup;
}
- if (qemuDomainObjBeginJob(vm) < 0)
+ if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm)) {
virSecurityManagerSetSavedStateLabel(qemu_driver->securityManager, vm, tmp);
priv = vm->privateData;
- qemuDomainObjEnterMonitor(vm);
+ ignore_value(qemuDomainObjEnterMonitor(vm));
if (flags == VIR_MEMORY_VIRTUAL) {
if (qemuMonitorSaveVirtualMemory(priv->mon, offset, size, tmp) < 0) {
qemuDomainObjExitMonitor(vm);
virDomainObjIsActive(vm)) {
qemuDomainObjPrivatePtr priv = vm->privateData;
- if ((priv->job.active == QEMU_JOB_MIGRATION_OUT)
- || (priv->job.active == QEMU_JOB_SAVE)) {
+ if ((priv->job.asyncJob == QEMU_ASYNC_JOB_MIGRATION_OUT)
+ || (priv->job.asyncJob == QEMU_ASYNC_JOB_SAVE)) {
virDomainObjRef(vm);
while (priv->job.signals & QEMU_JOB_SIGNAL_BLKINFO)
ignore_value(virCondWait(&priv->job.signalCond, &vm->lock));
if (virDomainObjUnref(vm) == 0)
vm = NULL;
} else {
- if (qemuDomainObjBeginJob(vm) < 0)
+ if (qemuDomainObjBeginJob(vm, QEMU_JOB_QUERY) < 0)
goto cleanup;
if (virDomainObjIsActive(vm)) {
- qemuDomainObjEnterMonitor(vm);
+ ignore_value(qemuDomainObjEnterMonitor(vm));
ret = qemuMonitorGetBlockExtent(priv->mon,
disk->info.alias,
&info->allocation);
goto cleanup;
}
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
ret = qemuMigrationConfirm(driver, domain->conn, vm,
priv = vm->privateData;
if (virDomainObjIsActive(vm)) {
- if (priv->job.active) {
+ if (priv->job.asyncJob) {
memcpy(info, &priv->job.info, sizeof(*info));
/* Refresh elapsed time again just to ensure it
priv = vm->privateData;
if (virDomainObjIsActive(vm)) {
- if (priv->job.active) {
+ if (priv->job.asyncJob) {
VIR_DEBUG("Requesting cancellation of job on vm %s", vm->def->name);
priv->job.signals |= QEMU_JOB_SIGNAL_CANCEL;
} else {
priv = vm->privateData;
- if (priv->job.active != QEMU_JOB_MIGRATION_OUT) {
+ if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT) {
qemuReportError(VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not being migrated"));
goto cleanup;
priv = vm->privateData;
- if (priv->job.active != QEMU_JOB_MIGRATION_OUT) {
+ if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT) {
qemuReportError(VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not being migrated"));
goto cleanup;
bool resume = false;
int ret = -1;
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
return -1;
if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_RUNNING) {
}
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorCreateSnapshot(priv->mon, snap->def->name);
qemuDomainObjExitMonitorWithDriver(driver, vm);
vm->current_snapshot = snap;
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
if (snap->def->state == VIR_DOMAIN_RUNNING
if (virDomainObjIsActive(vm)) {
priv = vm->privateData;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
rc = qemuMonitorLoadSnapshot(priv->mon, snap->def->name);
qemuDomainObjExitMonitorWithDriver(driver, vm);
if (rc < 0)
}
else {
priv = vm->privateData;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
/* we continue on even in the face of error */
qemuMonitorDeleteSnapshot(priv->mon, snap->def->name);
qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
}
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
if (flags & VIR_DOMAIN_SNAPSHOT_DELETE_CHILDREN) {
hmp = !!(flags & VIR_DOMAIN_QEMU_MONITOR_COMMAND_HMP);
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorArbitraryCommand(priv->mon, cmd, result, hmp);
qemuDomainObjExitMonitorWithDriver(driver, vm);
if (qemuDomainObjEndJob(vm) == 0) {
def = NULL;
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
if (qemuProcessAttach(conn, driver, vm, pid,
if (!(driveAlias = qemuDeviceDriveHostAlias(origdisk, priv->qemuCaps)))
goto error;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (disk->src) {
const char *format = NULL;
if (disk->type != VIR_DOMAIN_DISK_TYPE_DIR) {
goto error;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
ret = qemuMonitorAddDrive(priv->mon, drivestr);
if (ret == 0) {
goto cleanup;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
ret = qemuMonitorAddDevice(priv->mon, devstr);
} else {
goto error;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
ret = qemuMonitorAddDrive(priv->mon, drivestr);
if (ret == 0) {
goto error;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
ret = qemuMonitorAddDrive(priv->mon, drivestr);
if (ret == 0) {
goto cleanup;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_NETDEV) &&
qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
if (qemuMonitorAddNetdev(priv->mon, netstr, tapfd, tapfd_name,
goto try_remove;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
if (qemuMonitorAddDevice(priv->mon, nicstr) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
char *netdev_name;
if (virAsprintf(&netdev_name, "host%s", net->info.alias) < 0)
goto no_memory;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuMonitorRemoveNetdev(priv->mon, netdev_name) < 0)
VIR_WARN("Failed to remove network backend for netdev %s",
netdev_name);
char *hostnet_name;
if (virAsprintf(&hostnet_name, "host%s", net->info.alias) < 0)
goto no_memory;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuMonitorRemoveHostNetwork(priv->mon, vlan, hostnet_name) < 0)
VIR_WARN("Failed to remove network backend for vlan %d, net %s",
vlan, hostnet_name);
priv->qemuCaps)))
goto error;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorAddDeviceWithFd(priv->mon, devstr,
configfd, configfd_name);
qemuDomainObjExitMonitorWithDriver(driver, vm);
} else {
virDomainDevicePCIAddress guestAddr;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorAddPCIHostDevice(priv->mon,
&hostdev->source.subsys.u.pci,
&guestAddr);
goto error;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE))
ret = qemuMonitorAddDevice(priv->mon, devstr);
else
goto cleanup;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
if (qemuMonitorDelDevice(priv->mon, detach->info.alias) < 0) {
qemuDomainObjExitMonitor(vm);
goto cleanup;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuMonitorDelDevice(priv->mon, detach->info.alias) < 0) {
qemuDomainObjExitMonitor(vm);
virDomainAuditDisk(vm, detach, NULL, "detach", false);
goto cleanup;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
if (qemuMonitorDelDevice(priv->mon, detach->info.alias)) {
qemuDomainObjExitMonitor(vm);
goto cleanup;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
if (qemuMonitorDelDevice(priv->mon, detach->info.alias) < 0) {
qemuDomainObjExitMonitor(vm);
return -1;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE)) {
ret = qemuMonitorDelDevice(priv->mon, detach->info.alias);
} else {
return -1;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorDelDevice(priv->mon, detach->info.alias);
qemuDomainObjExitMonitorWithDriver(driver, vm);
virDomainAuditHostdev(vm, detach, "detach", ret == 0);
if (auth->connected)
connected = virDomainGraphicsAuthConnectedTypeToString(auth->connected);
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorSetPassword(priv->mon,
type,
auth->passwd ? auth->passwd : defaultPasswd,
if (priv->job.signals & QEMU_JOB_SIGNAL_CANCEL) {
priv->job.signals ^= QEMU_JOB_SIGNAL_CANCEL;
VIR_DEBUG("Cancelling job at client request");
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- ret = qemuMonitorMigrateCancel(priv->mon);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (ret == 0) {
+ ret = qemuMonitorMigrateCancel(priv->mon);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ }
if (ret < 0) {
VIR_WARN("Unable to cancel job");
}
priv->job.signals ^= QEMU_JOB_SIGNAL_MIGRATE_DOWNTIME;
priv->job.signalsData.migrateDowntime = 0;
VIR_DEBUG("Setting migration downtime to %llums", ms);
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- ret = qemuMonitorSetMigrationDowntime(priv->mon, ms);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (ret == 0) {
+ ret = qemuMonitorSetMigrationDowntime(priv->mon, ms);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ }
if (ret < 0)
VIR_WARN("Unable to set migration downtime");
} else if (priv->job.signals & QEMU_JOB_SIGNAL_MIGRATE_SPEED) {
priv->job.signals ^= QEMU_JOB_SIGNAL_MIGRATE_SPEED;
priv->job.signalsData.migrateBandwidth = 0;
VIR_DEBUG("Setting migration bandwidth to %luMbs", bandwidth);
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- ret = qemuMonitorSetMigrationSpeed(priv->mon, bandwidth);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (ret == 0) {
+ ret = qemuMonitorSetMigrationSpeed(priv->mon, bandwidth);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ }
if (ret < 0)
VIR_WARN("Unable to set migration speed");
} else if (priv->job.signals & QEMU_JOB_SIGNAL_BLKSTAT) {
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- ret = qemuMonitorGetBlockStatsInfo(priv->mon,
- priv->job.signalsData.statDevName,
- &priv->job.signalsData.blockStat->rd_req,
- &priv->job.signalsData.blockStat->rd_bytes,
- &priv->job.signalsData.blockStat->wr_req,
- &priv->job.signalsData.blockStat->wr_bytes,
- &priv->job.signalsData.blockStat->errs);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (ret == 0) {
+ ret = qemuMonitorGetBlockStatsInfo(priv->mon,
+ priv->job.signalsData.statDevName,
+ &priv->job.signalsData.blockStat->rd_req,
+ &priv->job.signalsData.blockStat->rd_bytes,
+ &priv->job.signalsData.blockStat->wr_req,
+ &priv->job.signalsData.blockStat->wr_bytes,
+ &priv->job.signalsData.blockStat->errs);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ }
*priv->job.signalsData.statRetCode = ret;
priv->job.signals ^= QEMU_JOB_SIGNAL_BLKSTAT;
if (ret < 0)
VIR_WARN("Unable to get block statistics");
} else if (priv->job.signals & QEMU_JOB_SIGNAL_BLKINFO) {
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- ret = qemuMonitorGetBlockExtent(priv->mon,
- priv->job.signalsData.infoDevName,
- &priv->job.signalsData.blockInfo->allocation);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (ret == 0) {
+ ret = qemuMonitorGetBlockExtent(priv->mon,
+ priv->job.signalsData.infoDevName,
+ &priv->job.signalsData.blockInfo->allocation);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ }
*priv->job.signalsData.infoRetCode = ret;
priv->job.signals ^= QEMU_JOB_SIGNAL_BLKINFO;
return -1;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- ret = qemuMonitorGetMigrationStatus(priv->mon,
- &status,
- &memProcessed,
- &memRemaining,
- &memTotal);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (ret == 0) {
+ ret = qemuMonitorGetMigrationStatus(priv->mon,
+ &status,
+ &memProcessed,
+ &memRemaining,
+ &memTotal);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ }
if (ret < 0 || virTimeMs(&priv->job.info.timeElapsed) < 0) {
priv->job.info.type = VIR_DOMAIN_JOB_FAILED;
qemuDomainObjPrivatePtr priv = vm->privateData;
const char *job;
- switch (priv->job.active) {
- case QEMU_JOB_MIGRATION_OUT:
+ switch (priv->job.asyncJob) {
+ case QEMU_ASYNC_JOB_MIGRATION_OUT:
job = _("migration job");
break;
- case QEMU_JOB_SAVE:
+ case QEMU_ASYNC_JOB_SAVE:
job = _("domain save job");
break;
- case QEMU_JOB_DUMP:
+ case QEMU_ASYNC_JOB_DUMP:
job = _("domain core dump job");
break;
default:
if (cookie->graphics->type != VIR_DOMAIN_GRAPHICS_TYPE_SPICE)
return 0;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- ret = qemuMonitorGraphicsRelocate(priv->mon,
- cookie->graphics->type,
- cookie->remoteHostname,
- cookie->graphics->port,
- cookie->graphics->tlsPort,
- cookie->graphics->tlsSubject);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (ret == 0) {
+ ret = qemuMonitorGraphicsRelocate(priv->mon,
+ cookie->graphics->type,
+ cookie->remoteHostname,
+ cookie->graphics->port,
+ cookie->graphics->tlsPort,
+ cookie->graphics->tlsSubject);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ }
return ret;
}
QEMU_MIGRATION_COOKIE_LOCKSTATE)))
goto cleanup;
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_IN) < 0)
goto cleanup;
- qemuDomainObjSetJob(vm, QEMU_JOB_MIGRATION_IN);
/* Domain starts inactive, even if the domain XML had an id field. */
vm->def->id = -1;
virDomainAuditStart(vm, "migrated", false);
qemuProcessStop(driver, vm, 0, VIR_DOMAIN_SHUTOFF_FAILED);
if (!vm->persistent) {
- if (qemuDomainObjEndJob(vm) > 0)
+ if (qemuDomainObjEndAsyncJob(vm) > 0)
virDomainRemoveInactive(&driver->domains, vm);
vm = NULL;
}
endjob:
if (vm &&
- qemuDomainObjEndJob(vm) == 0)
+ qemuDomainObjEndAsyncJob(vm) == 0)
vm = NULL;
/* We set a fake job active which is held across
*/
if (vm &&
virDomainObjIsActive(vm)) {
- qemuDomainObjSetJob(vm, QEMU_JOB_MIGRATION_IN);
+ priv->job.asyncJob = QEMU_ASYNC_JOB_MIGRATION_IN;
priv->job.info.type = VIR_DOMAIN_JOB_UNBOUNDED;
priv->job.start = now;
}
QEMU_MIGRATION_COOKIE_LOCKSTATE)))
goto cleanup;
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_IN) < 0)
goto cleanup;
- qemuDomainObjSetJob(vm, QEMU_JOB_MIGRATION_IN);
/* Domain starts inactive, even if the domain XML had an id field. */
vm->def->id = -1;
* should have already done that.
*/
if (!vm->persistent) {
- if (qemuDomainObjEndJob(vm) > 0)
+ if (qemuDomainObjEndAsyncJob(vm) > 0)
virDomainRemoveInactive(&driver->domains, vm);
vm = NULL;
}
endjob:
if (vm &&
- qemuDomainObjEndJob(vm) == 0)
+ qemuDomainObjEndAsyncJob(vm) == 0)
vm = NULL;
/* We set a fake job active which is held across
*/
if (vm &&
virDomainObjIsActive(vm)) {
- qemuDomainObjSetJob(vm, QEMU_JOB_MIGRATION_IN);
+ priv->job.asyncJob = QEMU_ASYNC_JOB_MIGRATION_IN;
priv->job.info.type = VIR_DOMAIN_JOB_UNBOUNDED;
priv->job.start = now;
}
goto cleanup;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+ goto cleanup;
+
if (resource > 0 &&
qemuMonitorSetMigrationSpeed(priv->mon, resource) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+ goto cleanup;
+
if (resource > 0 &&
qemuMonitorSetMigrationSpeed(priv->mon, resource) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
/* it is also possible that the migrate didn't fail initially, but
* rather failed later on. Check the output of "info migrate"
*/
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+ goto cancel;
if (qemuMonitorGetMigrationStatus(priv->mon,
&status,
&transferred,
if (ret != 0 && virDomainObjIsActive(vm)) {
VIR_FORCE_CLOSE(client_sock);
VIR_FORCE_CLOSE(qemu_sock);
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- qemuMonitorMigrateCancel(priv->mon);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ if (qemuDomainObjEnterMonitorWithDriver(driver, vm) == 0) {
+ qemuMonitorMigrateCancel(priv->mon);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ }
}
cleanup:
cookieout, cookieoutlen, flags, NULLSTR(dname),
resource, v3proto);
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginAsyncJobWithDriver(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
goto cleanup;
- qemuDomainObjSetJob(vm, QEMU_JOB_MIGRATION_OUT);
if (!virDomainObjIsActive(vm)) {
qemuReportError(VIR_ERR_OPERATION_INVALID,
VIR_DOMAIN_EVENT_RESUMED_MIGRATED);
}
if (vm) {
- if (qemuDomainObjEndJob(vm) == 0) {
+ if (qemuDomainObjEndAsyncJob(vm) == 0) {
vm = NULL;
} else if (!virDomainObjIsActive(vm) &&
(!vm->persistent || (flags & VIR_MIGRATE_UNDEFINE_SOURCE))) {
virErrorPtr orig_err = NULL;
priv = vm->privateData;
- if (priv->job.active != QEMU_JOB_MIGRATION_IN) {
+ if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_IN) {
qemuReportError(VIR_ERR_NO_DOMAIN,
_("domain '%s' is not processing incoming migration"), vm->def->name);
goto cleanup;
}
- qemuDomainObjDiscardJob(vm);
+ qemuDomainObjDiscardAsyncJob(vm);
if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen, 0)))
goto cleanup;
- if (qemuDomainObjBeginJobWithDriver(driver, vm) < 0)
+ if (qemuDomainObjBeginJobWithDriver(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
/* Did the migration go as planned? If yes, return the domain
restoreLabel = true;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (qemuDomainObjEnterMonitorWithDriver(driver, vm) < 0)
+ goto cleanup;
+
if (!compressor) {
const char *args[] = { "cat", NULL };
VIR_DEBUG("vm=%p", vm);
qemuDriverLock(driver);
virDomainObjLock(vm);
- if (qemuDomainObjBeginJob(vm) < 0)
+ if (qemuDomainObjBeginJob(vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
if (!virDomainObjIsActive(vm)) {
goto endjob;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuMonitorSystemReset(priv->mon) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
goto endjob;
}
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorSetCapabilities(priv->mon);
qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
priv = vm->privateData;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorGetPtyPaths(priv->mon, paths);
qemuDomainObjExitMonitorWithDriver(driver, vm);
/* What follows is now all KVM specific */
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if ((ncpupids = qemuMonitorGetCPUInfo(priv->mon, &cpupids)) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
return -1;
goto cleanup;
alias = vm->def->disks[i]->info.alias;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorSetDrivePassphrase(priv->mon, alias, secret);
VIR_FREE(secret);
qemuDomainObjExitMonitorWithDriver(driver, vm);
int ret;
qemuMonitorPCIAddress *addrs = NULL;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
naddrs = qemuMonitorGetAllPCIAddresses(priv->mon,
&addrs);
qemuDomainObjExitMonitorWithDriver(driver, vm);
}
VIR_FREE(priv->lockState);
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorStartCPUs(priv->mon, conn);
qemuDomainObjExitMonitorWithDriver(driver, vm);
oldState = virDomainObjGetState(vm, &oldReason);
virDomainObjSetState(vm, VIR_DOMAIN_PAUSED, reason);
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
- ret = qemuMonitorStopCPUs(priv->mon);
- qemuDomainObjExitMonitorWithDriver(driver, vm);
+ ret = qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ if (ret == 0) {
+ ret = qemuMonitorStopCPUs(priv->mon);
+ qemuDomainObjExitMonitorWithDriver(driver, vm);
+ }
if (ret == 0) {
if (virDomainLockProcessPause(driver->lockManager, vm, &priv->lockState) < 0)
bool running;
int ret;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
ret = qemuMonitorGetStatus(priv->mon, &running);
qemuDomainObjExitMonitorWithDriver(driver, vm);
priv = obj->privateData;
+ /* Set fake job so that EnterMonitor* doesn't want to start a new one */
+ priv->job.active = QEMU_JOB_MODIFY;
+
/* Hold an extra reference because we can't allow 'vm' to be
* deleted if qemuConnectMonitor() failed */
virDomainObjRef(obj);
if (qemuProcessFiltersInstantiate(conn, obj->def))
goto error;
+ priv->job.active = QEMU_JOB_NONE;
+
/* update domain state XML with possibly updated state in virDomainObj */
if (virDomainSaveStatus(driver->caps, driver->stateDir, obj) < 0)
goto error;
VIR_DEBUG("Setting initial memory amount");
cur_balloon = vm->def->mem.cur_balloon;
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuMonitorSetBalloon(priv->mon, cur_balloon) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
}
VIR_DEBUG("Getting initial memory amount");
- qemuDomainObjEnterMonitorWithDriver(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorWithDriver(driver, vm));
if (qemuMonitorGetBalloonInfo(priv->mon, &vm->def->mem.cur_balloon) < 0) {
qemuDomainObjExitMonitorWithDriver(driver, vm);
goto cleanup;
}
priv = dom->privateData;
- if (priv->job.active == QEMU_JOB_MIGRATION_IN) {
- VIR_DEBUG("vm=%s has incoming migration active, cancelling",
+ if (priv->job.asyncJob) {
+ VIR_DEBUG("vm=%s has long-term job active, cancelling",
dom->def->name);
- qemuDomainObjDiscardJob(dom);
+ qemuDomainObjDiscardAsyncJob(dom);
}
- if (qemuDomainObjBeginJobWithDriver(data->driver, dom) < 0)
+ if (qemuDomainObjBeginJobWithDriver(data->driver, dom,
+ QEMU_JOB_DESTROY) < 0)
goto cleanup;
VIR_DEBUG("Killing domain");