Hypervisor


[Diagrams: Type 1 (bare-metal) hypervisor; Type 1 Linux hypervisor]

A hypervisor (also known as a virtual machine monitor) is computer software that creates and runs virtual machines. The hypervisor controls the host processor and resources, deciding how they are allocated among the guest operating systems, so that each guest runs as if it had the machine to itself. Hypervisors are the most direct way to virtualize workloads quickly and efficiently.

There are two types of hypervisor. A Type 1 hypervisor is known as native or bare-metal. With this type, the hypervisor runs directly on the host’s hardware to control the hardware resources and to manage guest operating systems. In other words, it does not require an underlying operating system.

The second type (Type 2, or hosted) runs on top of a conventional operating system as a second software layer, with the guest operating systems then running at a third level.

ref: https://www.linuxlinks.com/hypervisors/

KVM

Kernel-based Virtual Machine
https://linux-kvm.org/page/Main_Page
KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko.
Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
KVM is open source software. The kernel component of KVM is included in mainline Linux, as of 2.6.20. The userspace component of KVM is included in mainline QEMU, as of 1.3.
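
A quick way to confirm that a host can run KVM is to check the CPU flags for the virtualization extensions and look for the /dev/kvm device node that appears once the modules are loaded. A minimal sketch in Python (assumes a Linux x86 host; the paths are the standard ones):

  #!/usr/bin/env python3
  """Rough KVM readiness check for a Linux host (a sketch, not exhaustive)."""
  import os

  def cpu_has_virt_extensions():
      # /proc/cpuinfo lists "vmx" (Intel VT) or "svm" (AMD-V) among the CPU flags.
      flags = set()
      with open("/proc/cpuinfo") as f:
          for line in f:
              if line.startswith("flags"):
                  flags.update(line.split(":", 1)[1].split())
      return bool(flags & {"vmx", "svm"})

  # /dev/kvm appears once kvm.ko plus kvm-intel.ko or kvm-amd.ko are loaded.
  print("CPU virtualization extensions:", cpu_has_virt_extensions())
  print("/dev/kvm present:", os.path.exists("/dev/kvm"))
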
Blogs from people active in KVM-related virtualization development are syndicated at http://planet.virt-tools.org/

---

KVM – full virtualization solution

by Steve Emms, October 15, 2023

KVM (for Kernel-based Virtual Machine) is a full, open source virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM originally required a modified QEMU; the required changes have since been merged upstream, and the userspace component is part of mainline QEMU as of version 1.3.

Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.

The kernel component of KVM is included in mainline Linux, as of 2.6.20.

Features include:

  • QMP – QEMU Monitor Protocol; a JSON-based protocol which allows applications to communicate with a QEMU instance (see the client sketch after this list).
  • KSM – Kernel Samepage Merging – lets the hypervisor system share identical memory pages amongst different processes or virtualized guests.
  • KVM paravirtual clock – a paravirtual time source for KVM which alleviates the problems that arise when CPUs do not have a constant Time Stamp Counter.
  • CPU Hotplug support – add CPUs on the fly. CPU hotplug is a controversial feature.
  • PCI Hotplug support – add PCI devices on the fly.
  • vmchannel – communication channel between the host and guests.
  • Migration – migrating virtual machines.
  • SCSI disk emulation.
  • Virtio Devices – paravirtualized drivers that provide a common framework for hypervisor IO virtualization. Virtio was chosen as the main platform for IO virtualization in KVM. This supports a paravirtual Ethernet card, a paravirtual disk I/O controller, a balloon device for adjusting guest memory usage, and a VGA graphics interface using the SPICE or VMware driver.
  • CPU clustering.
  • High Precision Event Timer.
  • Device assignment.
  • PXE boot.
  • iSCSI boot.
  • x2APIC – an x86 feature that improves performance, especially on large systems.
  • Floppy disk emulation.
  • CD-ROM emulation.
  • USB.
  • USB host device passthrough.
  • Sound.
  • Userspace IRQ chip emulation.
  • Userspace PIT emulation.
  • Balloon memory driver.
  • Large pages support.
  • Stable Guest ABI.
  • APIC virtualization and posted interrupt hardware support for x86 virtualization.
  • VMCS shadowing, so that non-root VMREAD/VMWRITE does not trigger a VM exit.
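
QMP, mentioned above, is easy to exercise by hand. A minimal client sketch in Python, assuming QEMU was started with a QMP socket at a path of your choosing, e.g. -qmp unix:/tmp/qmp.sock,server,nowait (the socket path is hypothetical):

  #!/usr/bin/env python3
  """Minimal QMP client sketch; ignores asynchronous events for brevity."""
  import json
  import socket

  SOCK_PATH = "/tmp/qmp.sock"  # example path chosen at QEMU launch time

  def qmp_read(f):
      # QMP replies arrive as one JSON object per line.
      return json.loads(f.readline())

  with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
      s.connect(SOCK_PATH)
      f = s.makefile("rw", buffering=1)            # line-buffered text wrapper
      greeting = qmp_read(f)                       # the server greets first
      print("QMP version:", greeting["QMP"]["version"])
      # Capability negotiation is mandatory before any other command.
      f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
      qmp_read(f)                                  # {"return": {}}
      f.write(json.dumps({"execute": "query-status"}) + "\n")
      print(qmp_read(f))                           # e.g. {"return": {"status": "running", ...}}
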
Website: www.linux-kvm.org
Support: https://www.linux-kvm.org/page/Lists,_IRC
Developer: Red Hat, Inc
License: Various

ref: https://www.linuxlinks.com/KVM/ [1]

oVirt

Powerful open source virtualization
oVirt is a free, open-source virtualization solution for your entire enterprise.
https://www.ovirt.org/

---

oVirt – virtualization solution for your entire enterprise

by Steve Emms, September 9, 2023

oVirt is a virtualization platform with an easy-to-use web interface. It is built on libvirt, which in principle allows it to manage virtual machines on any supported backend, including KVM, Xen and VirtualBox, though in practice it is built around KVM. oVirt manages virtual machines, storage and virtualized networks.
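
Because oVirt drives hosts through libvirt, the same layer can be scripted directly. A small sketch using the libvirt Python bindings (assumes the libvirt-python package and a running libvirtd; the qemu:///system URI targets local KVM/QEMU guests, and a read-only connection suffices for queries):

  #!/usr/bin/env python3
  """List libvirt domains and whether they are running."""
  import libvirt

  conn = libvirt.openReadOnly("qemu:///system")
  try:
      for dom in conn.listAllDomains():
          state = "running" if dom.isActive() else "shut off"
          print(f"{dom.name():20} {state}")
  finally:
      conn.close()
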

oVirt Engine is the control center of the oVirt environment. It allows you to define hosts, configure data centers, add storage, define networks, create virtual machines, manage user permissions and use templates from one central location.

The project consists of the engine core (backend server), VDSM host agents, and a client-side user interface (GWT-based) and/or a RESTful API for controlling the engine core.

oVirt has three web-based front-ends: for administrators, users and power users (for self-provisioning). It also has a REST-based API, a Python SDK and a CLI, which allow automation of most of its features.
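
The Python SDK makes that automation straightforward. A sketch using the ovirt-engine-sdk-python package (the engine URL, credentials and CA file below are placeholders):

  #!/usr/bin/env python3
  """List the VMs known to an oVirt engine via the Python SDK."""
  import ovirtsdk4 as sdk

  connection = sdk.Connection(
      url="https://engine.example.com/ovirt-engine/api",  # hypothetical engine
      username="admin@internal",
      password="CHANGE_ME",
      ca_file="ca.pem",  # the engine's CA certificate
  )
  try:
      vms_service = connection.system_service().vms_service()
      for vm in vms_service.list():
          print(vm.name, vm.status)
  finally:
      connection.close()
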

oVirt is open source software with backing from Red Hat, and it is the base for Red Hat Enterprise Virtualization. oVirt is written in Java, runs on the JBoss application server, and uses the GWT web framework for its user interface. VDSM is written in Python.

Features include:

  • High availability.
  • Manage multiple virtual machines.
  • Sophisticated web-based management interface for all aspects of your datacenter.
  • Choice of means of allocation of VMs to hosts: manual, “optimised”, pinned.
  • Live migration of VMs from one hypervisor to another.
  • Add new hypervisor nodes easily and centrally.
  • Monitor resource usage on VMs.
  • Load balancing.
  • Manage quotas for use of resources (storage, compute, network).
  • Self-service console for simple and advanced use cases.
  • Built on KVM hypervisor.
  • Enhanced security: SELinux and Mandatory Access Control for VMs and hypervisor.
  • Scalability: up to 64 vCPUs and 2 TB vRAM per guest.
  • iSCSI, FC, NFS, and local storage.
  • Memory overcommit support (Kernel Samepage Merging).
  • Developer SDK for ovirt-engine, written in Python.
Website: www.ovirt.org
Support: https://www.ovirt.org/documentation/
Developer: Red Hat
License: Apache License 2.0
oVirt is written in Java.

ref: https://www.linuxlinks.com/ovirt/ [2]

Proxmox

Proxmox Virtual Environment
https://proxmox.com/en/proxmox-virtual-environment/overview
Proxmox Virtual Environment is a complete, open-source server management platform for enterprise virtualization. It tightly integrates the KVM hypervisor and Linux Containers (LXC), software-defined storage and networking functionality, on a single platform. With the integrated web-based user interface you can manage VMs and containers, configure high availability for clusters, and use the integrated disaster recovery tools with ease.
Compute, network, and storage in a single solution
The enterprise-class features and a 100% software-based focus make Proxmox VE the perfect choice to virtualize your IT infrastructure, optimize existing resources, and increase efficiencies with minimal expense. You can easily virtualize even the most demanding of Linux and Windows application workloads, and dynamically scale computing and storage as your needs grow, ensuring that your data center adjusts for future growth.

---

Proxmox – open-source server virtualization management platform

by Steve Emms, October 15, 2023

Proxmox Virtual Environment is an open-source server virtualization management platform.

It is a Debian-based Linux distribution with a modified Ubuntu LTS kernel and allows deployment and management of virtual machines and containers.

Proxmox VE includes a web console and command-line tools, and provides a REST API for third-party tools. Two types of virtualization are supported: container-based with LXC, and full virtualization with KVM. It comes with a bare-metal installer and includes a web-based management interface.
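
The REST API speaks JSON over HTTPS on port 8006. A sketch using Python's requests library (the host name and credentials are placeholders; verify=False is only acceptable against a lab box with a self-signed certificate):

  #!/usr/bin/env python3
  """Authenticate against a Proxmox VE cluster and list its nodes."""
  import requests

  BASE = "https://pve.example.com:8006/api2/json"  # hypothetical host

  # 1. Obtain an authentication ticket (API tokens are the non-interactive alternative).
  r = requests.post(f"{BASE}/access/ticket",
                    data={"username": "root@pam", "password": "CHANGE_ME"},
                    verify=False)
  r.raise_for_status()
  auth = r.json()["data"]

  # 2. Present the ticket as a cookie on subsequent requests.
  r = requests.get(f"{BASE}/nodes",
                   cookies={"PVEAuthCookie": auth["ticket"]},
                   verify=False)
  r.raise_for_status()
  for node in r.json()["data"]:
      print(node["node"], node["status"])

Write operations (POST, PUT, DELETE) additionally require the CSRFPreventionToken header returned alongside the ticket; API tokens sidestep both the ticket and the CSRF token.
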

Features include:

  • Maximum flexibility for your production environment.
  • Easy management of compute, network, and storage with the central web interface.
  • 100% software-defined architecture.
  • Two virtualization technologies supported: KVM hypervisor & Linux Container (LXC). Use KVM full virtualization for Windows and Linux images, and lightweight containers to run conflict-free Linux applications.
  • Web-based Management Interface:
    • Integrated – no need to install a separate management tool or any additional management node.
    • Fast, search-driven interface, able to handle thousands of VMs.
    • Based on the Ext JS JavaScript framework.
    • Secure HTML5 console, supporting SSL.
    • Let’s Encrypt TLS certificates via the DNS-based challenge mechanism (or HTTP).
    • Fast and easy creation of VMs and containers.
    • Seamless integration and easy management of a whole cluster.
    • Subscription management via GUI.
    • Integrated documentation.
  • Rest API:
    • Easy integration for third-party management tools.
    • REST-like API (JSON as primary data format).
    • Easy and human readable data format (native web browser format).
    • Full support for API tokens.
    • Automatic parameter verification (verification of return values).
    • Automatic generation of the API documentation.
    • Easy way to create command line tools (use the same API).
    • Resource Oriented Architecture (ROA).
    • Declarative API definition using JSON Schema.
  • Command Line:
    • Manage all components of your virtual environment.
    • CLI with intelligent tab completion.
    • Full UNIX man page documentation.
  • High-Availability (HA) Cluster Manager:
    • No single point of failure (no SPOF).
    • Multi-master cluster.
    • Manage the HA settings for KVM and LXC via GUI.
    • pmxcfs—unique Proxmox VE Cluster File System: database-driven file system for storing configuration files replicated in real-time on all nodes using Corosync.
    • Based on proven Linux HA technologies, providing stable and reliable HA service.
    • Resource agents for KVM and containers (LXC).
    • Watchdog-based fencing.
  • Live Migration.
  • Built-in services: firewall, backup/restore, storage replication, etc.
  • Software-defined storage:
    • Local storage such as ZFS (encryption possible), LVM, LVMthin, ext4, and XFS.
    • Shared storage such as FC, iSCSI or NFS.
    • Distributed storage such as Ceph RBD or CephFS.
    • Encryption support for Ceph OSD and ZFS.
    • Unlimited number of storage definitions (cluster-wide).
  • Complete open-source platform for enterprise virtualization.
  • Storage replication stack.
  • Virtualized networking:
    • Bridged networking model.
    • Each host with up to 4094 bridges.
    • TCP/IP configuration.
    • IPv4 and IPv6 support.
    • VLANs.
    • Open vSwitch.
  • Hyper-converged infrastructure (HCI) with Ceph.
  • Proxmox VE Firewall:
    • Supporting IPv4 and IPv6.
    • Linux-based netfilter technology; a stateful firewall design that provides high bandwidth.
    • Distributed: main configuration in the Proxmox VE cluster file system; iptables rules are stored on the nodes.
    • Cluster-wide settings.
    • 3 levels of configuration (data center, host, VM/CT).
    • Support for ‘raw’ tables; enables SYN flood attack protection.
  • Backup and restore:
    • Full backups of VMs and containers.
    • Live snapshot backups.
    • Multiple schedules and backup storage.
    • Integrated into the GUI, but also available via the CLI.
    • “Backup Now” and restore via GUI.
    • Run scheduled backup jobs manually in the GUI.
    • All jobs from all nodes can be monitored via the GUI tab “Tasks”.
    • Back up VMs with IOThreads enabled.
  • Two-factor authentication.
  • Multiple authentication sources:
    • Linux PAM standard authentication (e.g., ‘root’ and other local users).
    • Built-in Proxmox VE authentication server.
    • Microsoft Active Directory (MS ADS).
    • LDAP.
  • Role-based administration:
    • User and permission management for all objects (VMs, storage systems, nodes, etc.)
    • Proxmox VE comes with a number of predefined roles (lists of privileges) which satisfy most needs.
    • The GUI provides an overview of the whole set of predefined roles.
    • Permissions control access to objects (access control lists). In technical terms a permission is simply a triple <path, user, role>: it grants a subject (user or group) a role (set of privileges) on a specific path.
Website: proxmox.com/en/proxmox-ve
Support: https://pve.proxmox.com/pve-docs/
Support: https://pve.proxmox.com/wiki/Main_Page
Developer: Proxmox Server Solutions GmbH
License: GNU Affero General Public License, version 3
Proxmox is written in Perl.

ref: https://www.linuxlinks.com/proxmox-open-source-server-virtualization-management-platform/ [3]

Xen

Xen Project
https://xenproject.org/
The mission of the Xen Project is to advance virtualization technology across a wide range of commercial and open-source domains.
By providing a powerful and versatile hypervisor, the project aims to enable innovation, scalability, safety, and security in virtualization solutions.
The Xen Project focuses on revolutionizing virtualization by providing a versatile and powerful hypervisor that addresses the evolving needs of diverse industries. 
Empower Innovation: Tailored virtualization to drive progress across various domains.
Enhance Cloud Ecosystems: Elevate cloud capabilities with high-performing, reliable virtualization.
Secure Critical Systems: Safeguard data and applications through industry-leading security.
Revolutionize Embedded Technologies: Transform embedded and automotive sectors with mature, safe, secure solutions.

---

Xen – open industry standard for virtualization

by Steve Emms, October 15, 2023

Xen is an open source Virtual Machine Monitor (VMM) originally developed by the Systems Research Group of the University of Cambridge Computer Laboratory, as part of the UK-EPSRC funded XenoServers project.

Xen can securely execute multiple virtual machines, each running its own operating system, on a single physical system with close-to-native performance.

The Xen Cloud Platform addresses the needs of cloud providers, hosting services and data centers by combining the isolation and multitenancy capabilities of the Xen hypervisor with enhanced security, storage, and network virtualization technologies.
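
Xen's xl toolstack exposes the running domain list on the command line. A small sketch that shells out to it from dom0 (assumes the xl binary from the Xen tools is installed and the script runs with dom0 root privileges):

  #!/usr/bin/env python3
  """Query running Xen domains by parsing `xl list` output."""
  import subprocess

  out = subprocess.run(["xl", "list"], capture_output=True, text=True,
                       check=True).stdout
  # Header row: Name  ID  Mem  VCPUs  State  Time(s)
  for line in out.strip().splitlines()[1:]:
      name, domid, mem, vcpus, state, _time = line.split(None, 5)
      print(f"domain {name!r}: id={domid} mem={mem}MiB vcpus={vcpus} state={state}")
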

Features include:

  • Small footprint and interface. Xen uses a microkernel design, with a small memory footprint and limited interface to the guest.
  • Operating system agnostic.
  • Driver Isolation: Xen has the capability to allow the main device driver for a system to run inside of a virtual machine. If the driver crashes, or is compromised, the VM containing the driver can be rebooted and the driver restarted without affecting the rest of the system.
  • Paravirtualization: Fully paravirtualized guests have been optimized to run as virtual machines. This can allow such guests to run faster than with hardware virtualization extensions (HVM). Additionally, Xen can run on hardware that does not support virtualization extensions.
  • Advanced Memory Management:
    • Memory Ballooning.
    • Memory Sharing.
    • Memory Paging.
    • TMEM.
  • Resource Management:
    • Cpupool.
    • Credit 2 Scheduler (experimental).
    • NUMA scheduler affinity.
  • Scalability:
    • 1GB/2MB super page support.
    • Deliver events to PVHVM guests using Xen event channels.
  • Interoperability / Hardware support:
    • Nested Virtualisation (experimental).
    • HVM PXE Stack.
    • Physical CPU Hotplug.
    • Physical Memory Hotplug.
    • Support for PV kernels in bzImage format.
    • PCI Passthrough.
    • X86 Advanced Vector eXtension (AVX).
  • High Availability and Fault Tolerance:
    • Live Migration, Save & Restore.
    • Remus Fault Tolerance.
    • vMCE.
  • Network and Storage:
    • Blktap2.
    • Online resize of virtual disks.
  • Security:
    • Driver Domains.
    • Device Model Stub Domains.
    • Memaccess API.
    • XSM & FLASK.
    • XSM & FLASK support for IS_PRIV.
    • vTPM Support.
  • Tooling:
    • gdbsx.
    • vPMU.
    • Serial console.
    • xentrace.
  • Device Models and Virtual Firmware for HVM guests:
    • Traditional Device Model.
    • QEMU Upstream Device Model.
    • ROMBIOS.
    • SeaBIOS.
    • OVMF/Tianocore (experimental).
  • PV Bootloader support:
    • PyGrub support for GRUB 2.
    • PyGrub support for /boot on ext4.
    • pvnetboot support.
  • Openvswitch integration.
Website: www.xenproject.org
Support: https://wiki.xenproject.org/wiki/Category:FAQ
Developer: Linux Foundation Collaborative Project
License: GNU GPL v2
Xen is written in C.

ref: https://www.linuxlinks.com/Xen/ [4]
