April 28, 2024

In this article written by Rik Goldman, author of the book Learning Proxmox VE, we introduce you to Proxmox Virtual Environment (PVE), a mature, complete, well-supported, enterprise-class virtualization environment for servers. It is an open source tool, based on the Debian GNU/Linux distribution, that manages containers, virtual machines, storage, virtualized networks, and high-availability clustering through a well-designed, web-based interface or via the command-line interface.


Developers provided the first stable release of Proxmox VE in 2008; four years and eight point releases later, ZDNet's Ken Hess boldly, but quite sensibly, declared Proxmox VE "the ultimate hypervisor" (http://www.zdnet.com/article/proxmox-the-ultimate-hypervisor/).

Four years later, PVE is on version 4.1, in use on at least 90,000 hosts and by more than 500 commercial customers in 140 countries; the web-based administrative interface itself is translated into nineteen languages.

This article will explore the fundamental technologies underlying PVE's hypervisor features: LXC, KVM, and QEMU. To do so, we will develop a working understanding of virtual machines, containers, and their appropriate use.

We will cover the following topics:

  • Proxmox VE in brief
  • Virtualization and containerization with PVE
  • Proxmox VE virtual machines, KVM, and QEMU
  • Containerization with PVE and LXC

With Proxmox VE, Proxmox Server Solutions GmbH (https://www.proxmox.com/en/about) provides us with an enterprise-ready, open source type II hypervisor. Below are some of the features that make Proxmox VE such a strong enterprise candidate.

  • The license for Proxmox VE is very deliberately the GNU Affero General Public License (V3) (https://www.gnu.org/licenses/agpl-3.0.html). From among the many free and open source licenses available, this is a significant choice because the AGPL is "specifically designed to ensure cooperation with the community in the case of network server software."
  • PVE is primarily administered from an integrated web interface, or from the command line locally or via SSH (see the brief sketch after this list). Consequently, there is no need for a separate management server and the associated expenditure. In this way, Proxmox VE contrasts significantly with alternative enterprise virtualization solutions from vendors such as VMware.
  • Proxmox VE instances/nodes can be incorporated into PVE clusters, and centrally administered from a unified web interface.
  • Proxmox VE provides for live migration: the movement of a virtual machine or container from one cluster node to another without any disruption of services. This is a capability not common to all competing products.
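As a brief, hedged sketch of that command-line administration (the VM ID 100 and the node name pve-node2 below are arbitrary examples, and the cluster and migration commands assume a cluster has already been configured), the stock tools shipped with PVE cover most day-to-day tasks:

pveversion                          # report the installed Proxmox VE version
qm list                             # list the virtual machines defined on this node
pct list                            # list the containers defined on this node
pvecm status                        # show cluster membership and quorum information
qm migrate 100 pve-node2 --online   # live-migrate VM 100 to another cluster node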

| Features | Proxmox VE | VMware vSphere |
| --- | --- | --- |
| Hardware requirements | Flexible | Strict compliance with HCL |
| Integrated management interface | Yes; web- and shell-based (browser and SSH) | No; requires dedicated management server at additional cost |
| Simple subscription structure | Yes; based on number of premium support tickets per year and CPU socket count | No |
| High availability | Yes | Yes |
| VM live migration | Yes | Yes |
| Supports containers | Yes | No |
| Virtual machine OS support | Windows and Linux | Windows, Linux, and Unix |
| Community support | Yes | No |
| Live VM snapshots | Yes | Yes |

Contrasting Proxmox VE and VMware vSphere features

For a complete catalog of features, see the Proxmox VE datasheet at https://www.proxmox.com/images/download/pve/docs/Proxmox-VE-Datasheet.pdf.

Like its competitors, PVE is a hypervisor: a typical hypervisor is software that creates, runs, configures, and manages virtual machines based on an administrator or engineer's choices.

PVE is known as a type II hypervisor because the virtualization layer is built upon an operating system.


A type II hypervisor, such as PVE, runs on top of an operating system. In Proxmox VE's case, the operating system is Debian; since the release of PVE 4.0, the underlying operating system has been Debian "Jessie."

By contrast, a type I hypervisor (such as VMwareā€™s ESXi) runs directly on bare metal without the mediation of an operating system. It has no additional function beyond managing virtualization and the physical hardware.


As a type II hypervisor, Proxmox VE is built on the Debian project. Debian is a GNU/Linux distribution renowned for its reliability, commitment to security, and its thriving and dedicated community of contributing developers.

Debian-based GNU/Linux distributions are arguably the most popular GNU/Linux distributions for the desktop.

One characteristic that distinguishes Debian from competing distributions is its release policy: Debian releases only when its development community can stand behind the release for its stability, security, and usability.

Debian does not distinguish between long-term support releases and regular releases as do some other distributions.

Instead, all Debian releases receive strong support and critical updates through the first year following the next release. (Since 2007, a major release of Debian has been made about every two years; Debian 8, Jessie, was released just about on schedule in 2015.)

Proxmox VE's reliance on Debian is thus a testament to its commitment to these values: stability, security, and usability over scheduled releases that favor cutting-edge features.

PVE provides its virtualization functionality through three open source technologies, KVM, QEMU, and LXC, and through the efficiency with which they're integrated by its administrative web interface.

To understand how this foundation serves Proxmox VE, we must first be able to clearly understand the relationship between virtualization (or, specifically, hardware virtualization) and containerization (OS virtualization). As we proceed, their respective use cases should become clear.

It is correct to ultimately understand containerization as a type of virtualization. However, here, we'll look first to conceptually distinguish a virtual machine from a container by focusing on contrasting characteristics.

Simply put, virtualization is a technique through which we provide fully functional computing resources without any demands on those resources' physical organization, location, or relative proximity.

Virtualization technology allows you to share and allocate the resources of a physical computer among multiple execution environments. Without context, virtualization is a vague term, encapsulating the abstraction of such resources as storage, networks, servers, desktop environments, and even applications from their concrete hardware requirements through software implementations called hypervisors.

Virtualization thus affords us more flexibility, more functionality, and a significant positive impact on our budgets, often realized with merely the resources we have at hand.

In terms of PVE, virtualization most commonly refers to the abstraction of all aspects of a discrete computing system from its hardware. In this context, virtualization is the creation, in other words, of a virtual machine or VM, with its own operating system and applications.

A VM may be initially understood as a computer that has the same functionality as a physical machine. Likewise, it may be joined to a network and communicated with exactly as a machine with physical hardware would be. Put another way, from inside a VM, we experience nothing that distinguishes it from a physical computer.

The virtual machine, moreover, does not have the physical footprint of its physical counterpart. The hardware it relies on is, in fact, provided by software that borrows hardware resources from a host installed on a physical machine (or bare metal).

Nevertheless, the software components of the virtual machine, from the applications to the operating system, are distinctly separated from those of the host machine. This advantage is realized when it comes to allocating physical space for resources.

For example, we may have a PVE server running a web server, database server, firewall, and log management system, all as discrete virtual machines. Rather than consuming the physical space, resources, and labor of maintaining four physical machines, we simply make physical room for the single Proxmox VE server and configure an appropriate virtual LAN as necessary.

In a white paper entitled Putting Server Virtualization to Work, AMD articulates well the benefits of virtualization to businesses and developers (https://www.amd.com/Documents/32951B_Virtual_WP.pdf):

Top 5 business benefits of virtualization:

  • Increases server utilization
  • Improves service levels
  • Streamlines manageability and security
  • Decreases hardware costs
  • Reduces facility costs

The benefits of virtualization in a development and test environment:

  • Lowers capital and space requirements
  • Lowers power and cooling costs
  • Increases efficiencies through shorter test cycles
  • Faster time-to-market

To these benefits, let's add portability and encapsulation: the unique ability to migrate a live VM from one PVE host to another without suffering a service outage.

Proxmox VE makes the creation and control of virtual machines possible through the combined use of two free and open source technologies: Kernel-based Virtual Machine (or KVM) and Quick Emulator (QEMU). We refer to this integration of tools as KVM-QEMU.

KVM

KVM has been an integral part of the Linux kernel since February 2007. This kernel module allows GNU/Linux users and administrators to take advantage of an architecture's hardware virtualization extensions; for our purposes, these extensions are AMD's AMD-V and Intel's VT-x for the x86_64 architecture.

To really make the most of Proxmox VE's feature set, you'll therefore very much want to install on an x86_64 machine whose CPU has integrated virtualization extensions. For a full list of AMD and Intel processors supported by KVM, visit Intel at http://ark.intel.com/Products/VirtualizationTechnology or AMD at http://support.amd.com/en-us/kb-articles/Pages/GPU120AMDRVICPUsHyperVWin8.aspx.
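Before installing, it is worth confirming from a shell on an existing GNU/Linux system that the target CPU actually exposes these extensions and that the KVM modules load. A minimal sketch (the flags vmx and svm correspond to Intel VT-x and AMD-V respectively):

egrep -c '(vmx|svm)' /proc/cpuinfo   # a count of 0 means no hardware virtualization support is exposed
lsmod | grep kvm                     # expect kvm plus kvm_intel or kvm_amd if the modules are loaded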

QEMU

QEMU provides an emulation and virtualization interface that can be scripted or otherwise controlled by a user.

Visualizing the relationship between KVM and QEMU

Without Proxmox VE, we could essentially define the hardware, create a virtual disk, and start and stop a virtualized server from the command line using QEMU.
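For instance, a minimal, hypothetical QEMU workflow might look like the following; the disk size, memory, CPU count, and installer ISO name are illustrative placeholders rather than values prescribed by QEMU or PVE:

qemu-img create -f qcow2 guest-disk.qcow2 20G   # create a 20 GB copy-on-write virtual disk
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 -hda guest-disk.qcow2 -cdrom debian-installer.iso -boot d   # boot a 2 GB, 2-vCPU guest from the installer ISO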

Alternatively, we could rely on any one of an array of GUI frontends for QEMU (a list of GUIs available for various platforms can be found at http://wiki.qemu.org/Links#GUI_Front_Ends).

Of course, working with these solutions is productive only if you're interested in what goes on behind the scenes in PVE when virtual machines are defined. In managing virtual machines, Proxmox VE is itself managing QEMU through QEMU's API.

Managing QEMU from the command line can be tedious. The following is a line from a script that launched Raspbian, a Debian remix intended for the architecture of the Raspberry Pi, on an x86 Intel machine running Ubuntu. When we see how easy it is to manage VMs from Proxmox VE's administrative interfaces, we'll sincerely appreciate that relative simplicity:

qemu-system-arm -kernel kernel-qemu -cpu arm1176 -m 256 -M versatilepb -no-reboot -serial stdio -append "root=/dev/sda2 panic=1" -hda ./$raspbian_img -hdb swap
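By way of contrast, here is a hedged sketch of an equivalent lifecycle using Proxmox VE's qm tool; the VM ID, name, and resource sizes are arbitrary examples:

qm create 100 --name web01 --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0   # define the virtual machine
qm start 100   # boot it
qm stop 100    # shut it down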

If you're familiar with QEMU's emulation features, it's perhaps important to note that we can't manage emulation through the tools and features Proxmox VE provides, despite its reliance on QEMU. From a bash shell provided by Debian, it's possible. However, the emulation can't be controlled through PVE's administration and management interfaces.

Containerization with Proxmox VE

Containers are a class of virtual machines (as containerization has enjoyed a renaissance since 2005, the term OS virtualization has become synonymous with containerization and is often used for clarity).

However, by way of contrast with VMs, containers share operating system components, such as libraries and binaries, with the host operating system; a virtual machine does not.

Visually contrasting virtual machines with containers

The container advantage

This arrangement potentially allows a container to run leaner and with fewer hardware resources borrowed from the host. For many authors, pundits, and users, containers also offer a demonstrable advantage in terms of speed and efficiency. (However, it should be noted here that as resources such as RAM and more powerful CPUs become cheaper, this advantage will diminish.)

From version 4.0, the Proxmox VE container is made possible through LXC (in previous PVE versions, it was provided by OpenVZ). LXC is the third fundamental technology serving Proxmox VE's ultimate interest. Like KVM and QEMU, LXC (or Linux Containers) is an open source technology. It allows a host to run, and an administrator to manage, multiple operating system instances as isolated containers on a single physical host. Conceptually, then, a container very clearly represents a class of virtualization rather than an opposing concept. Nevertheless, it's helpful to maintain a clear distinction between a virtual machine and a container as we come to terms with PVE.
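Container management follows the same pattern through the pct command-line tool. The following is a minimal sketch; the container ID, hostname, resource sizes, and the Debian template filename are illustrative assumptions that depend on which templates have actually been downloaded to your storage:

pct create 200 local:vztmpl/debian-8.0-standard_8.0-1_amd64.tar.gz --hostname ct-web01 --memory 512 --net0 name=eth0,bridge=vmbr0,ip=dhcp   # define the container from a template
pct start 200   # start the container
pct enter 200   # attach a shell inside the running container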

The ideal implementation of a Proxmox VE guest is contingent on our distinguishing and choosing between a virtual-machine solution and a container solution.

Since Proxmox VE containers share components of the host operating system and can offer advantages in terms of efficiency, this text will guide you through the creation of containers whenever the intended guest can be fully realized, without sacrificing features, with Debian Jessie as our hypervisor's operating system.

When our intent is a guest running a Microsoft Windows operating system, for example, a Proxmox VE container ceases to be a solution. In such a case, we turn, instead, to creating a virtual machine. We must rely on a VM precisely because the operating system components that Debian can share with a Linux container are not components a Microsoft Windows operating system can make use of.

In this article, we have come to terms with the three open source technologies that provide Proxmox VE's foundational features: containerization and virtualization with LXC, KVM, and QEMU.

Along the way, we've come to understand that containers, while being a type of virtualization, have characteristics that distinguish them from virtual machines.

These differences will be crucial as we determine which technology to rely on for a virtual server solution with Proxmox VE.

