Hypervisor


A hypervisor, also known as a virtual machine monitor, is a type of computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine or virtualization server, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Unlike an emulator, the guest executes most instructions on the native hardware. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and macOS instances can all run on a single physical x86 machine. This contrasts with operating-system–level virtualization, where all instances must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.
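One concrete illustration of the guest's view of such a virtual platform: on x86, a guest can ask the processor whether a hypervisor is underneath it. The minimal C sketch below uses the CPUID instruction via GCC's cpuid.h; the hypervisor-present bit and the vendor-signature leaf are architecturally defined, but the signature string actually returned ("KVMKVMKVM", "VMwareVMware", and so on) depends on which hypervisor is running.

    /* Query CPUID for the hypervisor-present flag and vendor signature. */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 1, ECX bit 31: set when running under a hypervisor. */
        __cpuid(1, eax, ebx, ecx, edx);
        if (!(ecx & (1u << 31))) {
            puts("no hypervisor reported");
            return 0;
        }

        /* Leaves 0x40000000+ are reserved for hypervisor use; EBX/ECX/EDX
         * of leaf 0x40000000 carry a 12-byte vendor signature. */
        char sig[13] = {0};
        __cpuid(0x40000000, eax, ebx, ecx, edx);
        memcpy(sig + 0, &ebx, 4);
        memcpy(sig + 4, &ecx, 4);
        memcpy(sig + 8, &edx, 4);
        printf("hypervisor signature: %s\n", sig);
        return 0;
    }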
The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisors, with hyper- used as a stronger variant of super-. The term dates to circa 1970; IBM coined it for software that ran OS/360 and the 7090 emulator concurrently on the 360/65 and later used it for the DIAG handler of CP-67. In the earlier CP/CMS system, the term Control Program was used instead.
Some literature, especially in microkernel contexts, distinguishes between hypervisor and virtual machine monitor (VMM). There, the two components together form the virtualization stack of a given system: hypervisor refers to the kernel-space functionality and VMM to the user-space functionality. Specifically, in these contexts a hypervisor is a microkernel implementing the virtualization infrastructure that must run in kernel space for technical reasons, such as access to Intel VMX. Microkernels implementing virtualization mechanisms are also referred to as microhypervisors. Applying this terminology to Linux, KVM is a hypervisor, and QEMU and Cloud Hypervisor are VMMs that use KVM as their hypervisor.
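A minimal sketch of this split on Linux, using the real /dev/kvm ioctl interface: the kernel-side hypervisor (KVM) creates the virtual machine and virtual CPU, while everything else (guest memory, device models, the run loop) is left to the user-space VMM. Error handling is abbreviated; a real VMM such as QEMU does far more.

    /* Drive the in-kernel hypervisor (KVM) from user space, as a VMM does. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);        /* handle to the hypervisor */
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        int vm   = ioctl(kvm, KVM_CREATE_VM, 0);   /* one guest machine        */
        int vcpu = ioctl(vm,  KVM_CREATE_VCPU, 0); /* one virtual CPU          */

        /* A real VMM would now map guest memory (KVM_SET_USER_MEMORY_REGION),
         * load guest code, and loop on KVM_RUN; that half lives in user space. */
        return (vm >= 0 && vcpu >= 0) ? 0 : 1;
    }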

Classification

In his 1973 thesis Architectural Principles for Virtual Computer Systems, Robert P. Goldberg classified two types of hypervisor:
  • Type-1, native, or bare-metal hypervisors run directly on the host's hardware to control the hardware and to manage the guest operating systems.
  • Type-2, or hosted, hypervisors run on a conventional operating system just as other programs do; a guest operating system runs as a process on the host.
The distinction between these two types is not always clear. For instance, KVM and bhyve are kernel modules that effectively convert the host operating system to a type-1 hypervisor.

Mainframe origins

The first hypervisors providing full virtualization were the test tool SIMMON and the one-off IBM CP-40 research system, which began production use in January 1967 and became the first version of the IBM CP/CMS operating system. CP-40 ran on an S/360-40 modified at the Cambridge Scientific Center to support dynamic address translation, a feature that enabled virtualization. Prior to this time, computer hardware had been virtualized only to the extent needed to allow multiple user applications to run concurrently, as in CTSS and IBM M44/44X. With CP-40, the hardware's supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.
Programmers soon reimplemented CP-40 (as CP-67) for the IBM System/360-67, the first production computer system capable of full virtualization. IBM shipped this machine in 1966; it included page-translation-table hardware for virtual memory and other techniques that allowed a full virtualization of all kernel tasks, including I/O and interrupt handling. Both CP-40 and CP-67 began production use in 1967. CP/CMS was available to IBM customers from 1968 to the early 1970s, in source-code form and without support.
CP/CMS formed part of IBM's attempt to build robust time-sharing systems for its mainframe computers. By running multiple operating systems concurrently, the hypervisor increased system robustness and stability: even if one operating system crashed, the others would continue working without interruption. Indeed, this allowed beta or experimental versions of operating systems, or even of new hardware, to be deployed and debugged without jeopardizing the stable main production system and without requiring costly additional development systems.
IBM announced its System/370 series in 1970 without the virtual memory feature needed for virtualization, but added it in the August 1972 Advanced Function announcement. Virtualization has been featured in all successor systems, such that all modern-day IBM mainframes, including the zSeries line, retain backward compatibility with the 1960s-era IBM S/360 line. The 1972 announcement also included VM/370, a reimplementation of CP/CMS for the S/370. Unlike CP/CMS, IBM provided support for this version. VM stands for Virtual Machine, emphasizing that all, not just some, of the hardware interfaces are virtualized. Both VM and CP/CMS enjoyed early acceptance and rapid development by universities, corporate users, and time-sharing vendors, as well as within IBM. Users played an active role in ongoing development, anticipating trends seen in modern open source projects. However, in a series of disputed and bitter battles, time-sharing lost out to batch processing through IBM political infighting, and VM remained IBM's "other" mainframe operating system for decades, losing to MVS. It enjoyed a resurgence of popularity and support from 2000 as the z/VM product, for example as the platform for Linux on IBM Z.
As mentioned above, the VM control program includes a hypervisor-call handler that intercepts DIAG instructions used within a virtual machine. This provides fast-path non-virtualized execution of file-system access and other operations. When first implemented in CP/CMS release 3.1, this use of DIAG provided an operating system interface that was analogous to the System/360 Supervisor Call instruction, but that did not require altering or extending the system's virtualization of SVC.
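The following C fragment is a conceptual sketch, not CP's actual code, of how such an intercept can be dispatched: hardware traps the guest's DIAG, and the control program routes the function code to a fast-path handler or reflects it back to the guest. All names here are illustrative; the function codes shown (X'08' console function, X'18' disk I/O) are DIAGNOSE codes documented in CP's descendants.

    /* Conceptual dispatcher for an intercepted DIAG; all names illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    struct vcpu { int id; };                     /* stand-in for guest-CPU state */

    static int fastpath_console(struct vcpu *v) { printf("vcpu %d: console\n",  v->id); return 0; }
    static int fastpath_disk_io(struct vcpu *v) { printf("vcpu %d: disk I/O\n", v->id); return 0; }
    static int reflect_to_guest(struct vcpu *v) { printf("vcpu %d: reflect\n",  v->id); return 0; }

    /* Invoked when hardware traps a DIAG executed inside a virtual machine.
     * The DIAG function code selects a service, much as an SVC number would. */
    int handle_diag_intercept(struct vcpu *v, uint16_t function_code)
    {
        switch (function_code) {
        case 0x08: return fastpath_console(v);   /* console function        */
        case 0x18: return fastpath_disk_io(v);   /* synchronous DASD I/O    */
        default:   return reflect_to_guest(v);   /* not ours: guest sees it */
        }
    }

    int main(void)
    {
        struct vcpu v = { 0 };
        return handle_diag_intercept(&v, 0x18);
    }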
In 1985 IBM introduced the PR/SM hypervisor to manage logical partitions.

Operating system support

Several factors led to a resurgence around 2005 in the use of virtualization technology among Unix, Linux, and other Unix-like operating systems:
  • Expanding hardware capabilities, allowing each single machine to do more simultaneous work
  • Efforts to control costs and to simplify management through consolidation of servers
  • The need to control large multiprocessor and cluster installations, for example in server farms and render farms
  • The improved security, reliability, and device independence possible from hypervisor architectures
  • The ability to run complex, OS-dependent applications in different hardware or OS environments
  • The ability to overprovision resources, fitting more applications onto a host
Major Unix vendors, including HP, IBM, SGI, and Sun Microsystems, have been selling virtualized hardware since before 2000. These have generally been large, expensive systems, although virtualization has also been available on some low- and mid-range systems, such as IBM pSeries servers, HP Superdome series machines, and Sun/Oracle SPARC T series CoolThreads servers.
IBM provides virtualization partition technology known as logical partitioning (LPAR) on System/390, zSeries, pSeries and IBM AS/400 systems. For IBM's Power Systems, the POWER Hypervisor is a native hypervisor in firmware that provides isolation between LPARs. Processor capacity is provided to LPARs either in a dedicated fashion or on an entitlement basis, where unused capacity is harvested and can be reallocated to busy workloads. Groups of LPARs can have their processor capacity managed as if they were in a "pool"; IBM refers to this capability as Multiple Shared-Processor Pools (MSPPs) and implements it in servers with the POWER6 processor. LPAR and MSPP capacity allocations can be changed dynamically. Memory is allocated to each LPAR and is address-controlled by the POWER Hypervisor. For real-mode addressing by operating systems, the POWER processors have virtualization capabilities designed in, whereby a hardware address offset is combined with the OS address offset to arrive at the physical memory address. Input/output adapters can be exclusively "owned" by LPARs or shared among LPARs through an appliance partition known as the Virtual I/O Server. The POWER Hypervisor provides for high levels of reliability, availability, and serviceability by facilitating hot add/replace of multiple parts.
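A minimal sketch of the real-mode addressing scheme just described, assuming an offset-plus-limit model; the field names and the error convention are illustrative, not the actual firmware interface. The hypervisor keeps a per-LPAR physical offset and limit, checks the guest's real-mode address against the limit, and applies the offset.

    /* Per-LPAR real-mode window: an offset into physical memory plus a limit. */
    #include <stdint.h>
    #include <stdio.h>

    struct lpar {
        uint64_t rm_offset;  /* where this LPAR's "real" memory starts physically */
        uint64_t rm_limit;   /* how much the guest OS may address in real mode    */
    };

    /* Translate a guest real-mode address; ~0 signals an out-of-window
     * access that the hypervisor would have to handle as a fault. */
    static uint64_t real_mode_translate(const struct lpar *p, uint64_t guest_real)
    {
        if (guest_real >= p->rm_limit)
            return ~(uint64_t)0;
        return p->rm_offset + guest_real;  /* hardware applies the offset */
    }

    int main(void)
    {
        struct lpar p = { .rm_offset = 0x40000000, .rm_limit = 0x10000000 };
        printf("guest 0x1000 -> phys 0x%llx\n",
               (unsigned long long)real_mode_translate(&p, 0x1000));
        return 0;
    }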
HPE provides HP Integrity Virtual Machines to host multiple operating systems on their Itanium-powered Integrity systems. Itanium can run HP-UX, Linux, Windows and OpenVMS, and these environments are also supported as virtual servers on HP's Integrity VM platform. The HP-UX operating system hosts the Integrity VM hypervisor layer, which takes advantage of several HP-UX features that differentiate this platform from commodity platforms, such as processor hot-swap, memory hot-swap, and dynamic kernel updates without system reboot. While it heavily leverages HP-UX, the Integrity VM hypervisor is really a hybrid that runs on bare metal while guests are executing. Running normal HP-UX applications on an Integrity VM host is heavily discouraged, because Integrity VM implements its own memory management, scheduling and I/O policies that are tuned for virtual machines and are not as effective for normal applications. HPE also provides more rigid partitioning of their Integrity and HP 9000 systems by way of vPar and nPar technology, the former offering shared resource partitioning and the latter offering complete I/O and processing isolation. The flexibility of the virtual server environment has led to its more frequent use in newer deployments.
Although Solaris has always been the only guest domain OS officially supported by Sun/Oracle on their Logical Domains hypervisor, Linux and FreeBSD have been ported to run on top of the hypervisor. Wind River "Carrier Grade Linux" also runs on Sun's hypervisor. Full virtualization on SPARC processors proved straightforward: since its inception in the mid-1980s, Sun deliberately kept the SPARC architecture clean of artifacts that would have impeded virtualization.
Similar trends have occurred with x86/x86-64 server platforms, where open-source projects such as Xen have led virtualization efforts. These include hypervisors built on Linux and Solaris kernels as well as custom kernels. Since these technologies span from large systems down to desktops, they are described in the next section.