\end{itemize}
}
+\section{Xen}
+
+\begin{frame}
+ \frametitle{The full virtualization spectrum}
+ {\centering
+ \begin{columns}
+ \begin{column}{0.4\textwidth}
+ {\scriptsize
+ \begin{table}
+ \centering
+ \tabulinesep=1.2mm
+ \begin{tabu} {| c | l}
+ \cline{1-1}
+ VS & Software virtualization \\
+ \cline{1-1}
+ VH & Hardware virtualization \\
+ \cline{1-1}
+ PV & Paravirtualized \\
+ \cline{1-1}
+ \end{tabu}
+ \end{table}}
+ \end{column}
+ \begin{column}{0.4\textwidth}
+ {\scriptsize
+ \begin{table}
+ \centering
+ \tabulinesep=1.2mm
+ \begin{tabu} {| c | l}
+ \hhline{-~}
+ \cellcolor{red!80} & Poor performance \\
+ \hhline{-~}
+ \cellcolor{yellow!80} & Room for improvement \\
+ \hhline{-~}
+ \cellcolor{green!80} & Optimal performance \\
+ \hhline{-~}
+ \end{tabu}
+ \end{table}}
+ \end{column}
+ \end{columns}}
+ \vspace{-2em}
+ {\scriptsize
+ \begin{table}
+ \centering
+ \tabulinesep=1.2mm
+ \begin{tabu} {| l | c | c | c | c |}
+ \multicolumn{1}{r}{}
+ & \multicolumn{1}{c}{\rot{Disk and network}}
+ & \multicolumn{1}{c}{\rot{Interrupts and timers}}
+ & \multicolumn{1}{c}{\rot{Emulated motherboard}}
+ & \multicolumn{1}{c}{\rot{\specialcell{Privileged instructions\\and page tables}}}
+ \\
+ \hline
+ HVM &
+ \cellcolor{red!80}VS &
+ \cellcolor{red!80}VS &
+ \cellcolor{yellow!80}VS &
+ \cellcolor{green!80}VH \\
+ \hline
+ HVM with PV drivers &
+ \cellcolor{green!80}PV &
+ \cellcolor{red!80}VS &
+ \cellcolor{yellow!80}VS &
+ \cellcolor{green!80}VH \\
+ \hline
+ PVHVM &
+ \cellcolor{green!80}PV &
+ \cellcolor{green!80}PV &
+ \cellcolor{yellow!80}VS &
+ \cellcolor{green!80}VH \\
+ \hline
+ PV &
+ \cellcolor{green!80}PV &
+ \cellcolor{green!80}PV &
+ \cellcolor{green!80}PV &
+ \cellcolor{yellow!80}PV \\
+ \hline
+ PVH &
+ \cellcolor{green!80}PV &
+ \cellcolor{green!80}PV &
+ \cellcolor{green!80}PV &
+ \cellcolor{green!80}VH \\
+ \hline
+ \end{tabu}
+ \end{table}}
+\end{frame}
+\note{
+ \begin{itemize}
+ \item PV and HVM at both ends
+ \item To provide better I/O performance in HVM guests, two new modes have been added
+ \item HVM with PV drivers is a fully virtualized domain with PV drivers for both disk and network devices, giving better I/O performance (examples: Windows, FreeBSD)
+ \item PVHVM is a further step beyond HVM with PV drivers, additionally using PV interrupts and timers (example: Linux)
+ \item These improvements give better performance, but QEMU still has to run in the host to emulate a motherboard and the boot sequence.
+ \end{itemize}
+}
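+
+% Added illustration: two minimal xl.cfg sketches for the ends of the spectrum
+% (plain PV vs. HVM).  The key names follow the xl.cfg(5) syntax of the 4.x
+% toolstack; the guest names, paths and sizes are invented for the example,
+% and PVHVM vs. plain HVM-with-PV-drivers is decided by the guest kernel, not
+% by anything in this file.
+\begin{frame}[fragile]
+ \frametitle{Selecting a mode in xl.cfg (sketch)}
+ {\scriptsize Illustrative fragments only; see xl.cfg(5) for the authoritative syntax.}
+ {\tiny
+\begin{verbatim}
+# PV guest: no emulated hardware, kernel booted directly by Xen
+name   = "pv-guest"
+kernel = "/boot/vmlinuz-guest"        # example path
+memory = 1024
+vcpus  = 2
+disk   = [ "/srv/xen/pv-guest.img,raw,xvda,rw" ]
+vif    = [ "bridge=xenbr0" ]
+
+# HVM guest: QEMU emulates a motherboard; PV or PVHVM drivers live in the guest
+name    = "hvm-guest"
+builder = "hvm"
+memory  = 1024
+vcpus   = 2
+disk    = [ "/srv/xen/hvm-guest.img,raw,hda,rw" ]
+vif     = [ "bridge=xenbr0" ]
+\end{verbatim}
+ }
+\end{frame}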
+
+\begin{frame}
+ \frametitle{Why PVH?}
+ \begin{itemize}
+ \item Performance: use hardware features as much as possible
+ \item Security
+ \begin{itemize}
+ \item No emulation eliminates a whole class of security bugs
+ \item No PVMMU and related interfaces: much less complex code in both the guest kernel and the Xen toolstack
+ \end{itemize}
+ \item Maintenance
+ \begin{itemize}
+ \item No PVMMU and related interfaces: a lot less code to maintain
+ \end{itemize}
+ \end{itemize}
+\end{frame}
+\note{
+ \begin{itemize}
+ \end{itemize}
+}
+
+
+\begin{frame}
+ \frametitle{Gory details about PVH}
+ \begin{itemize}
+ \item PVH-classic vs HVMlite-nodm
+ \item PVH-classic was the first attempt at the design: make a PV guest look like an HVM guest
+ \item HVMlite-nodm is the newer approach: make an HVM guest look like a PV guest
+ \item They will converge at some point; the agreed-upon roadmap is to make HVMlite-nodm the canonical ``PVH''
+ \item End users probably won't notice the difference
+ \end{itemize}
+\end{frame}
+\note{
+ \begin{itemize}
+ \end{itemize}
+}
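+
+% Added illustration: how PVH-classic is switched on today, namely an ordinary
+% PV guest config plus pvh=1, and dom0pvh on the hypervisor command line for a
+% PVH dom0.  The option names are taken from the 4.4/4.5-era documentation and
+% may well change once HVMlite-nodm becomes the canonical PVH; paths and sizes
+% are invented for the example.
+\begin{frame}[fragile]
+ \frametitle{Enabling PVH-classic (sketch)}
+ {\scriptsize Illustrative only; expect the knobs to change as the two approaches converge.}
+ {\tiny
+\begin{verbatim}
+# guest: a regular PV config plus the pvh flag
+name   = "pvh-guest"
+kernel = "/boot/vmlinuz-guest"        # example path
+memory = 1024
+vcpus  = 2
+disk   = [ "/srv/xen/pvh-guest.img,raw,xvda,rw" ]
+vif    = [ "bridge=xenbr0" ]
+pvh    = 1
+
+# dom0: add to the Xen command line in the bootloader for a PVH dom0
+#   dom0pvh=1
+\end{verbatim}
+ }
+\end{frame}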
+
+\begin{frame}
+ \frametitle{Guest support}
+
+ \begin{itemize}
+ \item Guest OSes and the virtualization modes they support:
+ \end{itemize}
+
+ {\scriptsize
+ \begin{table}
+ \centering
+ \tabulinesep=1.2mm
+ \begin{tabu} {| l | c | c | c | c | c |}
+ \cline{2-6}
+ \multicolumn{1}{c|}{}
+ & PV
+ & PVHVM
+ & HVM with PV drivers
+ & HVM
+ & PVH*
+ \\
+ \hline
+ Linux &
+ \cellcolor{green!80}YES &
+ \cellcolor{green!80}YES &
+ \cellcolor{green!80}YES &
+ \cellcolor{green!80}YES &
+ \cellcolor{green!80}YES \\
+ \hline
+ Windows &
+ \cellcolor{red!80}NO &
+ \cellcolor{red!80}NO &
+ \cellcolor{green!80}YES &
+ \cellcolor{green!80}YES &
+ \cellcolor{red!80}NO \\
+ \hline
+ NetBSD &
+ \cellcolor{green!80}YES &
+ \cellcolor{red!80}NO &
+ \cellcolor{red!80}NO &
+ \cellcolor{green!80}YES &
+ \cellcolor{red!80}NO \\
+ \hline
+ FreeBSD &
+ \cellcolor{red!80}NO &
+ \cellcolor{green!80}YES &
+ \cellcolor{green!80}YES &
+ \cellcolor{green!80}YES &
+ \cellcolor{green!80}YES \\
+ \hline
+ OpenBSD &
+ \cellcolor{red!80}NO &
+ \cellcolor{yellow!80}YES &
+ \cellcolor{yellow!80}YES &
+ \cellcolor{green!80}YES &
+ \cellcolor{red!80}NO \\
+ \hline
+ DragonflyBSD &
+ \cellcolor{red!80}NO &
+ \cellcolor{red!80}NO &
+ \cellcolor{red!80}NO &
+ \cellcolor{green!80}YES &
+ \cellcolor{red!80}NO \\
+ \hline
+ \end{tabu}
+ \end{table}}
+
+\end{frame}
+\note {
+ \begin{itemize}
+ \item Linux and NetBSD are the only OSes to support PV.
+ \item Windows performance can be improved by installing PV drivers for network and disk devices.
+ \end{itemize}
+}
+
+\begin{frame}
+ \frametitle{Better scalability}
+ \begin{itemize}
+ \item Finer-grained locks in the hypervisor: per-VCPU maptrack free lists, per-CPU rwlocks
+ \item Fairer locks in the hypervisor: queued rwlocks
+ \item Should benefit all guests, especially Xen virtual devices with multiqueue support (net, block)
+ \begin{itemize}
+ \item On a 2-socket Haswell-EP system, Linux inter-VM network throughput with 16 queues jumped from 15 Gbit/s to 48 Gbit/s
+ \end{itemize}
+ \end{itemize}
+\end{frame}
+\note{
+ \begin{itemize}
+
+ \item Commit eede22972fefa02100226252c430ffcca99025eb
+ \item On multi-socket systems, the contention makes the locked compare-and-swap operation fail frequently, resulting in a tight loop of retries; since the coherency fabric can only sustain a certain rate of compare-and-swap operations on a given data location, taking the read lock itself becomes a bottleneck for grant operations.
+ \item Standard rwlock performance of a single-VIF VM-to-VM transfer with 16 queues configured was limited to approximately 15 Gbit/s on a 2-socket Haswell-EP host.
+ \item Per-CPU rwlock performance with the same configuration is approximately 48 Gbit/s.
+ \end{itemize}
+}
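+
+% Added illustration: the obvious way to set up a multiqueue measurement like
+% the one quoted above.  The xen_netback.max_queues / xen_netfront.max_queues
+% module parameters and the iperf invocation are standard, but the queue
+% count, address and thread count here are examples, not the settings used
+% for the quoted benchmark.
+\begin{frame}[fragile]
+ \frametitle{Trying multiqueue yourself (sketch)}
+ {\scriptsize Illustrative recipe; give the guests enough vCPUs to actually use the queues.}
+ {\tiny
+\begin{verbatim}
+# dom0 kernel command line (or modprobe options): let netback create more
+# queues per VIF
+xen_netback.max_queues=16
+
+# guest kernel command line: ask netfront for the same number of queues
+xen_netfront.max_queues=16
+
+# measure VM-to-VM throughput, e.g. with iperf
+guest-a$ iperf -s
+guest-b$ iperf -c 192.168.0.10 -P 8    # example address / thread count
+\end{verbatim}
+ }
+\end{frame}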
+
+
+\begin{frame}
+ \frametitle{Virtual Performance Monitoring Unit}
+ \begin{itemize}
+ \item Fully implemented in Xen 4.6, works for both PV and HVM
+ \item Intended for non-production use
+ \item Use dtrace or pmcstat to profile your VM
+ \end{itemize}
+ \begin{center}
+ \includegraphics[width=0.66\textwidth]{netfront-flame-graph.pdf}
+ \end{center}
+\end{frame}
+\note{
+ \begin{itemize}
+ \item intended for non-production use; the implication is that there is no security support from upstream for it
+ \end{itemize}
+}
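+
+% Added illustration: a profiling recipe matching the flame graph on the slide
+% above.  Enabling vPMU with the vpmu hypervisor option and sampling with
+% pmcstat is standard; the event name, file names and the FlameGraph scripts
+% (stackcollapse-pmc.pl / flamegraph.pl) are examples, not the exact setup
+% used to produce the graph.
+\begin{frame}[fragile]
+ \frametitle{Profiling a guest with vPMU (sketch)}
+ {\scriptsize Remember: vPMU is off by default and not meant for production hosts.}
+ {\tiny
+\begin{verbatim}
+# Xen command line (bootloader): expose the PMU to guests
+vpmu=1
+
+# inside a FreeBSD guest: sample while generating some load
+guest# pmcstat -S instructions -O samples.out sleep 30
+guest# pmcstat -R samples.out -G stacks.txt
+
+# turn the call chains into a flame graph
+guest# stackcollapse-pmc.pl stacks.txt | flamegraph.pl > netfront.svg
+\end{verbatim}
+ }
+\end{frame}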
+
+\begin{frame}
+ \frametitle{xSplice - hypervisor hot-patching}
+ \begin{itemize}
+ \item Apply fixes to the running hypervisor without rebooting the host
+ \item No need to shut down or migrate guests to pick up a security patch
+ \item Available as a technology preview
+ \end{itemize}
+\end{frame}
+\note{
+ \begin{itemize}
+ \end{itemize}
+}
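+
+% Added illustration: what a hot-patching session is expected to look like
+% from dom0.  The subcommand names (load/list/revert/unload) follow the
+% upstream management tool as it is being merged (shipping as xen-livepatch);
+% the patch file and payload names are invented for the example.
+\begin{frame}[fragile]
+ \frametitle{Hot-patching workflow (sketch)}
+ {\scriptsize Illustrative session; check the in-tree tooling for the final command names.}
+ {\tiny
+\begin{verbatim}
+# dom0: upload and apply a patch to the running hypervisor
+dom0# xen-livepatch load xsa-example.livepatch
+dom0# xen-livepatch list
+
+# back it out again if needed
+dom0# xen-livepatch revert xsa-example
+dom0# xen-livepatch unload xsa-example
+\end{verbatim}
+ }
+\end{frame}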
+
+
\section{Live demo}
\subsection{live}