- Mainframe Origins
- Types of Virtualization
- Which ones do I use?
Computers today have a lot of power (CPU, RAM, storage, GPU).
But is this power being used efficiently?
The answer is no. Computers/servers (in the sense of one server per application or task) are underutilized, and the electricity they consume is wasted.
Virtualization helps solve this problem by creating a virtualization layer between the hardware components and the user.
This enables the creation of virtual machines: virtual computers, several of which can run on the same physical computer/server, each believing it is running on its own dedicated hardware.
Hypervisor: Host and Guest Machines
A hypervisor (or virtual machine monitor, VMM) is computer software, firmware, or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and macOS instances can all run on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.
The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisor, with hyper- used as a stronger variant of super-.[a] The term dates to circa 1970; in the earlier CP/CMS (1967) system the term Control Program was used instead.
CP/CMS is the predecessor of IBM z/VM
Type I or native or bare-metal Hypervisors:
These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare-metal hypervisors. The first hypervisors, which IBM developed in the 1960s, were native hypervisors. These included the test software SIMMON and the CP/CMS operating system (the predecessor of IBM's z/VM). Modern equivalents include Xen, Oracle VM Server for SPARC, Oracle VM Server for x86, Microsoft Hyper-V, and VMware ESXi (formerly ESX).
Type II or hosted Hypervisors:
These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host. Type-2 hypervisors abstract guest operating systems from the host operating system. VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU are examples of type-2 hypervisors.
The distinction between these two types is not always clear. For instance, Linux's Kernel-based Virtual Machine (KVM) and FreeBSD's bhyve are kernel modules that effectively convert the host operating system into a type-1 hypervisor. At the same time, since Linux distributions and FreeBSD are still general-purpose operating systems, with applications competing with each other for VM resources, KVM and bhyve can also be categorized as type-2 hypervisors.
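On a Linux host you can check whether KVM is actually available: the `kvm` kernel module must be loaded and the `/dev/kvm` device node must exist. A minimal sketch (the `kvm_status` helper is my own illustration, not part of any KVM tooling; it takes its inputs as arguments so the decision logic can be exercised anywhere):

```shell
# Minimal sketch (Linux-specific) of the check KVM-based tools perform:
# is a kvm module loaded, and does /dev/kvm exist?
kvm_status() {
    # $1 = 1 if a kvm module is loaded, $2 = 1 if /dev/kvm exists
    if [ "$1" = 1 ] && [ "$2" = 1 ]; then
        echo "ready"         # hardware-accelerated guests possible
    else
        echo "unavailable"   # fall back to pure emulation (e.g. QEMU TCG)
    fi
}

mod=0; dev=0
lsmod 2>/dev/null | grep -q '^kvm' && mod=1
[ -e /dev/kvm ] && dev=1
kvm_status "$mod" "$dev"
```

If this prints "unavailable", a VM can still run, but every instruction path goes through software instead of the CPU's virtualization extensions.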
The first hypervisors providing full virtualization were the test tool SIMMON and IBM's one-off research CP-40 system, which began production use in January 1967, and became the first version of IBM's CP/CMS operating system. CP-40 ran on a S/360-40 that was modified at the IBM Cambridge Scientific Center to support dynamic address translation, a feature that enabled virtualization. Prior to this time, computer hardware had only been virtualized to the extent needed to allow multiple user applications to run concurrently, such as in CTSS and IBM M44/44X. With CP-40, the hardware's supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.
Programmers soon implemented CP-40 (as CP-67) for the IBM System/360-67, the first production computer system capable of full virtualization. IBM first shipped this machine in 1966; it included page-translation-table hardware for virtual memory, and other techniques that allowed a full virtualization of all kernel tasks, including I/O and interrupt handling. (Note that its “official” operating system, the ill-fated TSS/360, did not employ full virtualization.) Both CP-40 and CP-67 began production use in 1967. CP/CMS was available to IBM customers from 1968 to early 1970s, in source code form without support.
CP/CMS formed part of IBM’s attempt to build robust time-sharing systems for its mainframe computers. By running multiple operating systems concurrently, the hypervisor increased system robustness and stability: Even if one operating system crashed, the others would continue working without interruption. Indeed, this even allowed beta or experimental versions of operating systems—or even of new hardware—to be deployed and debugged, without jeopardizing the stable main production system, and without requiring costly additional development systems.
Virtualization has been featured in all successor systems (all modern-day IBM mainframes, such as the zSeries line, retain backward compatibility with the 1960s-era IBM S/360 line). IBM's 1972 System/370 announcement also included VM/370, a reimplementation of CP/CMS for the S/370. Unlike CP/CMS, IBM provided support for this version (though it was still distributed in source code form for several releases). VM stands for Virtual Machine, emphasizing that all, and not just some, of the hardware interfaces are virtualized. Both VM and CP/CMS enjoyed early acceptance and rapid development by universities, corporate users, and time-sharing vendors, as well as within IBM. Users played an active role in ongoing development, anticipating trends seen in modern open source projects. However, in a series of disputed and bitter battles, time-sharing lost out to batch processing through IBM political infighting, and VM remained IBM's "other" mainframe operating system for decades, losing to MVS. It enjoyed a resurgence of popularity and support from 2000 as the z/VM product, for example as the platform for Linux on IBM Z.
As mentioned above, the VM control program includes a hypervisor-call handler that intercepts DIAG (“Diagnose”, opcode x’83’) instructions used within a virtual machine. This provides fast-path non-virtualized execution of file-system access and other operations (DIAG is a model-dependent privileged instruction, not used in normal programming, and thus is not virtualized. It is therefore available for use as a signal to the “host” operating system). When first implemented in CP/CMS release 3.1, this use of DIAG provided an operating system interface that was analogous to the System/360 Supervisor Call instruction (SVC), but that did not require altering or extending the system’s virtualization of SVC.
Types of Virtualization
- Full Virtualization
a. Full Software Virtualization – Software-Assisted (BT, Binary Translation)
b. Full Software Virtualization – Hardware-Assisted (VT, Virtualization Technology)
- Paravirtualization
- Operating System Virtualization (Containerization)
- Hybrid Virtualization (Hardware Virtualized with PV drivers)
Full Software Virtualization – Software-Assisted (BT, Binary Translation)
- VirtualBox (32-bit guests)
- VMware Workstation (32-bit guests)
- VMware Server (formerly VMware GSX Server), a Type II hypervisor installed on top of Linux or Windows
In full software virtualization, all the hardware is simulated by a software program. Each device driver and process in the guest OS "believes" it is running on actual hardware, even though the underlying hardware is really a software program. Software virtualization even fools the OS into thinking that it is running on hardware.

One of the advantages of full software virtualization is that you can run any OS on it. It doesn't matter whether the OS in question understands the underlying host hardware or not. Thus, older OSs and specialty OSs can run in this environment. The architecture is very flexible because you don't need a special understanding of the OS or hardware. The OS hardware subsystem discovers the hardware in the normal fashion and believes the hardware is real. The hardware types and features it discovers are usually fairly generic and might not be as full-featured as actual hardware devices, though the system is functional.

Another advantage of full software virtualization is that you don't need to purchase any additional hardware. With hardware-assisted software virtualization, you need to purchase hardware that supports advanced VM technology. Although this technology is included in most systems available today, some older hardware does not have this capability. To use this older hardware as a virtual host, you must use either full software virtualization or paravirtualization.
NOTE: Only hardware-assisted software virtualization requires advanced VM hardware features; full software virtualization does not. Oracle VM and VMware ESX work on older hardware that does not have any special CPU features. This type of virtualization is also known as emulation.
Unfortunately, full software virtualization adds overhead. This overhead translates into extra instructions and CPU time on the host, resulting in a slower system and higher CPU usage. With full software virtualization, the CPU instruction calls are trapped by the Virtual Machine Monitor (VMM) and then emulated in a software program. Therefore, every hardware instruction that would normally be handled by the hardware itself is now handled by a program.

For example, when the disk device driver makes an I/O call to the "virtual disk," the software in the VM system intercepts it, processes it, and finally makes an I/O call to the real underlying disk. The number of instructions needed to perform an I/O is greatly increased. With networking, even more overhead is incurred, since a network switch is simulated in software. Depending on the amount of network activity, the overhead can be quite high. In fact, on severely overloaded host systems you could possibly see network delays from the virtual switch itself. This is why sizing is so important.
Full Software Virtualization – Hardware-Assisted (VT, Virtualization Technology), also called Hardware-Assisted Software Virtualization
Hardware-assisted software virtualization is available with CPU chips that have built-in virtualization support. With the introduction of the Intel VT and AMD-V technologies, this virtualization type has become commoditized; the technique was first introduced on the IBM System/370 computer. It is similar to software virtualization, except that some functions are accelerated and assisted by the virtualization components of the CPU chip: the hardware instructions are still trapped and processed, but this time in hardware.

By using hardware-assisted software virtualization, you get the benefits of software virtualization, such as the ability to use any OS without modifying it, while achieving better performance. Because of virtualization's importance, significant effort is going into providing more support for hardware-assisted software virtualization. Hardware-assisted virtualization also supports any operating system. Using hardware-assisted software virtualization, Oracle VM lets you install and run Linux and Solaris x86-based OSs as well as Microsoft Windows; with other virtualization techniques, Oracle VM only allows Linux OSs. This technique also makes migrating from VMware systems to Oracle VM easier.

As mentioned earlier, both Intel and AMD are committed to supporting hardware-assisted software virtualization. They both introduced virtualization technology around 2005–2006, and their support has improved the functionality and performance of virtualization. Intel and AMD do not yet fully support paravirtualization. Hardware-assisted software virtualization components are changing at a very fast pace, however, with new features and functionality being introduced continually.
NOTE: Hardware-assisted virtualization is really the long-term virtualization solution. Applications such as Xen will be mainly used for management.
Intel supports virtualization via its VT-x technology. VT-x is now part of many Intel chipsets, including the Pentium, Xeon, and Core processor families. The related VT-d extension adds an Input/Output Memory Management Unit (IOMMU) that allows virtualized systems to access I/O devices directly: Ethernet and graphics devices can have their DMA and interrupts directly mapped via the hardware. In later versions of Intel VT, extended page tables (EPT) were added to allow direct translation from guest virtual addresses to host physical addresses.
AMD supports virtualization via its AMD-V technology, which includes Rapid Virtualization Indexing (RVI) to accelerate virtualization. RVI assists with the virtual-to-physical translation of pages in a virtualized environment; because this is one of the most common operations, optimizing it greatly enhances performance. AMD virtualization products are available on both the Opteron and Athlon processor families.

The virtual machine that uses the hardware-assisted software virtualization model has become known as the Hardware Virtual Machine, or HVM. This terminology is used throughout the rest of this chapter and refers to the fully software-virtualized model with hardware assist.
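On Linux you can see whether a CPU advertises these extensions by looking for the `vmx` (Intel VT-x) or `svm` (AMD-V) flag in `/proc/cpuinfo`. A minimal sketch (the `vt_flags` helper is my own illustration; it takes the flags text as an argument so the detection logic itself is portable):

```shell
# Report which hardware virtualization extension, if any, the CPU
# advertises. On Linux the flags live in /proc/cpuinfo.
vt_flags() {
    if echo "$1" | grep -qw vmx; then
        echo "Intel VT-x"
    elif echo "$1" | grep -qw svm; then
        echo "AMD-V"
    else
        echo "none"
    fi
}

# Only query /proc/cpuinfo where it exists (i.e. on Linux).
[ -r /proc/cpuinfo ] && vt_flags "$(cat /proc/cpuinfo)" || true
```

If this reports "none" on a machine you know has VT-x or AMD-V, the extensions are usually just disabled in the BIOS/UEFI firmware.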
Here is a list of enterprise software that supports hardware-assisted full virtualization and falls under hypervisor type 1 (bare metal):
- VMware ESXi / ESX
- KVM (installed like a Type II, but behaves as a Type I hypervisor)
The following fall under hypervisor type 2 (hosted):
- VMware Workstation (64-bit guests only)
- VirtualBox (64-bit guests only)
- VMware Server (retired)
End of Part I (Episode 14) … Start of Part II (Episode 15)
Paravirtualization
In paravirtualization, the guest OS is aware of and interfaces with the underlying host OS. A paravirtualized kernel in the guest understands the underlying host technology and takes advantage of that fact. Because the host OS is not faking the guest 100 percent, the amount of resources needed for virtualization is greatly reduced. In addition, paravirtualized device drivers in the guest can interface with the host system, reducing overhead. The idea behind paravirtualization is to reduce both the complexity and the overhead involved in virtualization. By paravirtualizing both the host and guest operating systems, very expensive functions are offloaded from the guest to the host OS: the guest essentially makes special system calls that allow these functions to run within the host OS.

When using a system such as Oracle VM, the host operating system acts in much the same way as a guest operating system. The hardware device drivers interface with a layer known as the hypervisor. The hypervisor, also known as the Virtual Machine Monitor (VMM), was mentioned earlier in this chapter. There are two types of hypervisor: a type 1 hypervisor runs directly on the host hardware; a type 2 or hosted hypervisor runs in software.
Examples of paravirtualization (I added IBM's and HP's Integrity solutions as I think they belong here, but correct me if I am wrong):
- IBM LPAR
- Oracle VM Server for SPARC (LDOM)
- Oracle VM Server for x86 (OVM)
- IBM PowerVM (AIX)
- HP Integrity VSE
Hybrid Virtualization (Hardware Virtualized with PV Drivers)
In hardware-assisted full virtualization, guest operating systems are unmodified, which involves many VM traps and thus high CPU overhead, limiting scalability. Paravirtualization is a complex method, since the guest kernel must be modified to use the paravirtual API. Considering these issues, engineers came up with hybrid virtualization: a combination of full virtualization and paravirtualization. The virtual machine uses paravirtualized drivers for specific hardware (where full virtualization is a bottleneck, especially for I/O- and memory-intensive workloads) and full virtualization for everything else. The following products support hybrid virtualization.
- Oracle VM for x86
- VMware ESXi
VMware supports both full virtualization and hybrid virtualization: RDMA (Remote Direct Memory Access), for example, uses a paravirtual driver to bypass the VMkernel under hardware-assisted full virtualization.
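A sketch of what the hybrid model looks like in practice with QEMU/KVM: the guest runs fully hardware-virtualized (`-enable-kvm`), while disk and network use paravirtual virtio devices. The image name is a placeholder, and the `build_qemu_cmd` helper (my own illustration) only prints the command line instead of launching a guest:

```shell
# Combine hardware-assisted full virtualization (-enable-kvm) with
# paravirtual virtio drivers for disk and network -- the hybrid model.
build_qemu_cmd() {
    echo "qemu-system-x86_64 -enable-kvm -m 2048" \
         "-drive file=$1,if=virtio" \
         "-netdev user,id=net0 -device virtio-net-pci,netdev=net0"
}

build_qemu_cmd guest.qcow2
```

The guest needs virtio drivers installed to use these devices; without them you would fall back to fully emulated hardware such as an e1000 NIC, at a noticeable I/O cost.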
Operating System – OS Level Virtualization
Operating system-level virtualization is widely used; it is also known as "containerization." The host operating system kernel allows multiple user-space instances. In OS-level virtualization, unlike other virtualization technologies, there is very little or no overhead, since it uses the host operating system kernel for execution. Oracle Solaris Zones is one of the best-known container technologies in the enterprise market. Here is a list of other containers:
- Linux Containers (LXC)
- AIX WPAR
- Solaris Zones
- FreeBSD Jails
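As a concrete illustration, the typical LXC workflow on a Linux host looks like the sketch below. The container name `web1` and the Debian release are placeholder choices, and the commands require an LXC-equipped host, so they are shown here rather than executed:

```shell
# Create, start, and inspect an LXC container (illustrative sketch).
lxc-create -n web1 -t download -- -d debian -r bookworm -a amd64
lxc-start -n web1                 # "boots" in seconds: no kernel to load
lxc-attach -n web1 -- uname -r    # prints the HOST kernel version --
                                  # containers share the host kernel
lxc-stop -n web1
lxc-destroy -n web1
```

The `uname -r` line is the point: unlike a VM, the container has no kernel of its own, which is exactly why the overhead is so low.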
Which ones do I use?
In My Homelab
- VMware ESXi 6.7 on my servers (HP DL380 G6 and some others)
- Proxmox (Debian-based) on a 3-server cluster (small Lenovo M93p machines with 16 GB RAM, plus some central vSAN-like storage for storing VMs and enabling guest VM migrations, etc.). Installed on top of Debian, so nominally type II, but it behaves as a type I hypervisor.
- VMware Fusion and Workstation on my Mac and Linux machines (even on a Windows laptop)
- VirtualBox in some special cases, to run older OSs that tend to run better in VirtualBox (OS/2 or Windows NT)
- QEMU for special cases like 32-bit HP-UX 11.11i, a patched AIX 7.2, or AIX 5.1
- KVM on my Fedora 31 workstation (Fujitsu R940)
On My Production Cloud (Hetzner Dedicated Server)
- Proxmox, based on Debian, with a batch of public IPv4 and IPv6 addresses so the VMs are reachable when and if needed.
- A VM running on Proxmox that acts as a Docker host; that's where I do things with Docker and containerization. (One day I'd like a small Kubernetes cluster, maybe built from Raspberry Pis, or installed inside Proxmox or ESXi in my homelab, to tinker with.)
Proxmox Virtual Environment (Proxmox VE, or PVE for short) is an open-source server virtualization environment. It is a Debian-based Linux distribution with a modified Ubuntu LTS kernel and allows deployment and management of virtual machines and containers. Proxmox VE includes a Web console and command-line tools, and provides a REST API for third-party tools. Two types of virtualization are supported: container-based virtualization with LXC (which, starting from version 4.0, replaced the OpenVZ used in versions up to and including 3.4), and full virtualization with KVM. It comes with a bare-metal installer and includes a Web-based management interface.
The name Proxmox itself has no meaning, and was chosen because the domain name was available.
Development of Proxmox VE started when Dietmar and Martin Maurer, two Linux developers, found that OpenVZ had no backup tool and no management GUI. KVM was appearing in Linux at the same time and was added shortly afterwards. The first public release took place in April 2008, and the platform quickly gained traction. It was one of the few platforms providing out-of-the-box support for both container and full virtualization, managed with a Web GUI similar to commercial offerings.
Proxmox VE is a powerful open-source server virtualization platform that manages two virtualization technologies – KVM (Kernel-based Virtual Machine) for virtual machines and LXC for containers – with a single web-based interface. It also integrates out-of-the-box tools for configuring high availability between servers, software-defined storage, networking, and disaster recovery.
Server Virtualization supporting Kernel-based Virtual Machine (KVM) and Container-based virtualization with Linux Containers (LXC).
Proxmox VE can be clustered across multiple server nodes.
Since version 2.0, Proxmox VE has offered a high-availability option for clusters based on the Corosync communication stack. Individual virtual servers can be configured for high availability using the Red Hat cluster suite. If a Proxmox node becomes unavailable or fails, the virtual servers can be automatically moved to another node and restarted. The database- and FUSE-based Proxmox Cluster file system (pmxcfs) makes it possible to perform the configuration of each cluster node via the Corosync communication stack.

At least since 2012, in an HA cluster, live virtual machines can be moved from one physical host to another without downtime. KVM and OpenVZ live migration has been supported since Proxmox VE 1.0, released on 29 October 2008.
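These cluster features map onto Proxmox VE's `qm` command-line tool roughly as in the sketch below (not run here; the VM ID 100, the bridge `vmbr0`, and the node name `pve2` are placeholders for your own cluster):

```shell
# Create a KVM guest with a paravirtual virtio NIC, start it, and
# live-migrate it to another cluster node (illustrative sketch).
qm create 100 --name test-vm --memory 2048 --net0 virtio,bridge=vmbr0
qm start 100
qm migrate 100 pve2 --online    # live migration, no guest downtime
```

Everything the Web GUI does is also available this way (and via the REST API), which is what makes the platform easy to script.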
Containerization and Cloud Computing– The Future
OpenShift is a family of containerization software developed by Red Hat. Its flagship product is the OpenShift Container Platform—an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux.
OpenStack is a free, open-standard cloud computing platform, mostly deployed as infrastructure-as-a-service (IaaS) in both public and private clouds, where virtual servers and other resources are made available to users. The software platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center. Users manage it through a web-based dashboard, command-line tools, or RESTful web services.
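In practice, "making virtual servers available to users" looks like the following `openstack` CLI sketch. The image name `debian-12`, the flavor `m1.small`, and the server name are placeholders, and the commands assume an already-configured OpenStack environment, so they are illustrative rather than run here:

```shell
# Boot a virtual server on an OpenStack cloud (illustrative sketch).
openstack image list                  # images available to boot from
openstack server create --image debian-12 --flavor m1.small demo-vm
openstack server list                 # demo-vm shows up with its status
```

Underneath, the compute service typically drives KVM via libvirt, so this is the same hypervisor stack discussed above, wrapped in a multi-tenant API.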
When You Simulate/Emulate a Whole Computer from the 80s or 90s (we could see this as a form of Virtualization if We want to)
SIMH is a highly portable, multi-system simulator.
SIMH 4.0 brings a lot of new features and simulators for various systems.
SIMH implements simulators for:
- Data General Nova, Eclipse
- Digital Equipment Corporation PDP-1, PDP-4, PDP-7, PDP-8, PDP-9, PDP-10, PDP-11, PDP-15 (and UC15), VAX11/780, VAX3900
- GRI Corporation GRI-909, GRI-99
- IBM 1401, 1620, 7090/7094, System 3
- Interdata (Perkin-Elmer) 16b and 32b systems
- Hewlett-Packard 2114, 2115, 2116, 2100, 21MX, 1000, 3000
- Honeywell H316/H516
- MITS Altair 8800, 8080 only
- Royal-Mcbee LGP-30, LGP-21
- Scientific Data Systems SDS 940
- Xerox Data Systems Sigma 32b systems
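A SIMH simulator is usually driven by a small configuration script. A hypothetical PDP-11 setup might look like this (the disk image name is a placeholder; you would run it with e.g. `pdp11 pdp11.ini`):

```
; pdp11.ini -- hypothetical SIMH configuration sketch
set cpu 11/70          ; simulate a PDP-11/70
set cpu 4m             ; with 4 MB of memory
set rq0 ra82           ; make the first MSCP disk an RA82
attach rq0 system.dsk  ; back it with a disk image file
boot rq0               ; boot the simulated machine from it
```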
Stromasys – Charon Product Line – Best in Legacy HW Emulation and in Corporate Greed
Blogs / Links about Virtualization (just the tip of the iceberg)
Lots of additional video examples from YouTube (to go down the rabbit hole)
Telegram Chat Group
Episodes in FLAC
VOIP // PSTN
+1 910 665 9191