Cons of Hyper-V virtualization. Choosing a hypervisor for virtualization

Oleg Tereshchenko, [email protected]

Introduction

Before we start talking about virtualization, as always, let's “agree on terminology”.

If we set aside the ancient Roman roots of the words "virtual" and "virtuality", then, in our view, the concept of the "virtual" came into modern language from theoretical physics. When the mathematical formula that was supposed to describe some physical phenomenon or process did not quite "add up", physicists began to actively use the notion of a "virtual" quantity - a notional mass, energy or particle that helped bring the formula into a digestible form.

Later, with the development of computer technology, the concept of "virtual reality" came into use: the creation of a kind of alternative reality, based primarily on an audiovisual representation of some computer process. First of all, this applied to computer games and all kinds of virtual tours of museums, popular resorts and so on.

In this article, we will talk about another relevant form of virtuality in modern computing systems: the virtualization of servers, workstations, storage systems and so on.

Server virtualization

Sooner or later, and for various reasons, everyone begins to think about virtualization.

The question immediately arises of choosing a virtualization environment. There are already quite a few of them: Hyper-V, vSphere, Citrix XenServer and others. Let's dwell on the first two, since they are the most common and the most universal.

Hyper-V, developed by Microsoft, is already present in every operating system of its family starting with Windows 8. If the processor supports hardware virtualization, you simply enable this component and start using it. It can also be installed as a separate server with one single task: to serve as a host for virtual machines.

vSphere from VMware (with the ESXi hypervisor at its core) is a standalone operating system for organizing a virtualization environment. There are both paid and free versions.

What is it all for

Most servers are underutilized under the "one physical server, one application" model. For example, a database server may be heavily loaded while other servers are not. As a result, in an enterprise or data center with a large number of physical servers, the average load on each of them is around 10-15%. That is not economical, not efficient, and it is difficult to manage.

Virtualization allows you to reduce the number of physical servers and distribute resources according to workload, allocating more or fewer resources to each service.

With physical servers, there is usually no way to add disk space to a server without stopping it, which is inconvenient.

In addition, virtualization can reduce power consumption: four physical servers with an average utilization of 10% will consume more power than one server with an average utilization of even 80%.
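
As a back-of-the-envelope illustration (the wattage figures below are assumptions for the sake of the example, not measurements from this article), the saving comes from the fact that an idle server still draws a large share of its peak power:

```python
# A minimal sketch comparing power draw under a simple linear model:
# power = idle_watts + utilization * (peak_watts - idle_watts).
# The 150 W idle / 300 W peak figures are illustrative assumptions.
IDLE_W, PEAK_W = 150.0, 300.0

def power(utilization: float) -> float:
    return IDLE_W + utilization * (PEAK_W - IDLE_W)

four_lightly_loaded = 4 * power(0.10)   # four servers at ~10% load
one_consolidated = power(0.80)          # one consolidated server at ~80% load

print(f"4 servers @ 10%: {four_lightly_loaded:.0f} W")   # ~660 W
print(f"1 server  @ 80%: {one_consolidated:.0f} W")      # ~270 W
```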

Another important point is the simplification of managing the entire IT infrastructure.

For example:

With virtualization, you get remote access to the virtual server's console and can change its hardware specification remotely.

There is no need to purchase separate, expensive devices such as IP KVM switches.

Just go to the console of the server you want and press the "reset" button instead of going to the server room and pressing a button on the server.

It is also possible to take snapshots of the virtual server state.

If something went wrong, say during the upgrade process, we go back to the snapshot taken earlier, and everything works. The procedure does not take much time.
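
For concreteness, here is a hedged sketch of that snapshot-and-rollback workflow on a Hyper-V host, driven from Python; the VM name "web01" and the snapshot name are hypothetical, and the sketch assumes the Hyper-V PowerShell module is available on the host:

```python
import subprocess

VM = "web01"                 # hypothetical virtual machine name
SNAPSHOT = "pre-upgrade"     # hypothetical snapshot (checkpoint) name

def ps(command: str) -> None:
    """Run a PowerShell command on the Hyper-V host, raising on failure."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# Take a snapshot of the virtual server's state before a risky change.
ps(f'Checkpoint-VM -Name "{VM}" -SnapshotName "{SNAPSHOT}"')

# ... perform the upgrade inside the guest ...

# If something went wrong, roll the VM back to the earlier snapshot.
ps(f'Get-VMSnapshot -VMName "{VM}" -Name "{SNAPSHOT}" | Restore-VMSnapshot -Confirm:$false')
```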

Furthermore, a virtual server has no idea what hardware platform it is running on. This has its advantages: for example, suppose we have a virtual environment on an IBM platform.

We can easily stop the virtual server, move it to a similar virtual environment deployed on a Supermicro or Intel platform, and run it there. The server will start and, without noticing any difference, continue its work as if it had simply been rebooted. Such a "move" takes a few minutes.

This behavior greatly simplifies backup and the subsequent recovery of both data and entire virtual servers (with all their parameters, settings and installed software).

Cons and pros

Among the disadvantages of such a system: quite possibly you will have to buy a new server because of the requirements of the virtualization environment, since implementing a virtualization solution requires processor support for hardware virtualization technologies (Intel VT, for example).

Many older processors do not have these capabilities. Another drawback is the possible "single point of failure".

For example, suppose there is a virtual environment running an AD server, a web server and a terminal server. At some point the host hardware fails: a power supply failure, for example, is one of the most innocuous; it could also be one of the hard drives in the RAID array, the RAID controller itself or the motherboard (such failures are inherent in all physical servers).

The virtual environment does not start, therefore the virtual servers do not work.

When planning a virtual environment, you need to think about fault tolerance initially, before you begin to use it.

On the plus side: increased security, greatly simplified administration and support, easier backups with quick and easy subsequent recovery, transfer of virtual servers between different platforms, minimal downtime in the event of failures, space savings, reduced energy costs, and, when using Microsoft Windows Server, the opportunity to save on licenses for the virtual servers you run.

Food for thought

Since you have the ability to create archival copies of anything and everything "on the fly", you should not keep those copies next to the originals; move them to some other device. A USB HDD or eSATA drive, for example, will speed up the transfer.

But storing them on a dedicated storage system will be far more reliable and functional.

With the latter option, in the event of a crash of the main system, you can deploy a similar virtualization system and run copies of the working virtual servers directly from the storage system (it will run slower, of course, although that depends on the storage system, but it will run).

This will minimize downtime and let you take your time restoring the main system.

In addition, if the storage system's specifications allow it, the virtual machines themselves can be placed on it rather than on the virtualization server, which in turn lets the virtualization server have no disks at all.

There I jumped a little from one aspect to another. =)

Look ...

You are right in the sense that, one way or another, in a small office either a cluster is created or there is a single point of failure in the form of the physical server on which the hypervisor runs. It is foolish to argue with that. Moreover, even with a cluster there is in most cases still a single point of failure in the form of the storage on which the data physically lives, simply because replicated SANs and the like are not a realistic option for small and medium businesses at all: there the prices run into hundreds of thousands of dollars for the storage systems alone, plus licenses.

The nuance is that there are three main options:

  • You have a hypervisor and N virtual machines on it
  • You have N physical servers
  • You have one physical server with one operating system (no virtualization), and everything is installed in that OS.

In the case of the third option (the most terrible), you get problems a priori. You cannot predict the load, you have no security as such (because you probably have to give users access to a server that is also the domain controller), and your applications affect one another. A real-life example: the 1C accounting system gobbled up 100% of the CPU and everything ground to a halt, simply because everything runs on a single copy of the OS.

The second option usually leads to the purchase of several very cheap (relatively speaking) computers that are proudly called "servers". I have seen this many times. These are essentially client computers with a larger amount of resources and a server OS installed on them. Their reliability is what you would expect: they are simply not designed to run continuously under load, to say nothing of the quality of the components and assembly, with everything that implies. If you can buy several branded servers (as many as you need), you are in luck, and most people working in "small business" fiercely envy you.

Well, the first option. If you only need to buy one server, you can almost always justify a higher budget for it, explaining that buying it once will eliminate the need to purchase new servers for, say, the next two years. And it will be possible to buy a server from a proper manufacturer (HP, Dell, etc.) with a proper hardware RAID, components of decent quality, and so on. Plus, it will come with proper warranty support. If you use an appropriate RAID level, you are protected from data loss if a disk fails (or even two), and the failed disk will be replaced under warranty. Everything else will also be replaced under warranty (although the "rest" fails far less often in decent servers; over many years I can recall only a couple of cases when components failed). And again, you will be spared from hunting for "the same motherboard", because everything is covered by the warranty.

That is, the reliability is significantly higher, there are fewer risks.

Everything written after "It is enough to buy one sufficiently powerful server" relates to the second issue: the compatibility of applications and their mutual influence on one another, which is a problem far more often than the reliability of the hardware itself. In the event of a hardware failure, you will be able to retrieve your data from a backup (you do make backups, right?). But in many cases you will not be able to solve a problem of incompatibility or mutual negative influence between pieces of software without buying a new server, that is, without financial investment.

Which risk is higher: hardware failure or software incompatibility? Given a proper backup, which is worse: a burned-out server, or a troublesome program that interferes with the others but that you cannot get rid of (because, say, some department needs that software to work)?

Virtualization is not a silver bullet; it will not solve all problems at once. And it doesn't need to be implemented just because it exists. But you shouldn't give it up without considering all the advantages.

I hope this is clearer.

Those who face virtualization for the first time have a logical question: how do you choose a suitable hypervisor?

After all, the hypervisor is the core of a virtualization server; the operation of the virtual machines and their services depends on its efficiency, capabilities, reliability and cost.

At one point I read many different reviews and speed comparisons and watched video reports on how the various hypervisors perform.

Without going too deep into the weeds, here is a compact summary of the pros and cons of the most popular hypervisors: VMware ESXi, Microsoft Hyper-V and XenServer. I liked the answer given on Toaster by the user under the nickname Evgeny_Shiryaev.

Pros and Cons of Microsoft Hyper-V, VMware ESXi, and XenServer

Microsoft Hyper-V

Pros:

1. The hypervisor itself costs nothing; you can download it from the Microsoft website (as Hyper-V Server);
2. Well suited for virtualizing Microsoft operating systems;
3. Most Microsoft products support running in a Hyper-V virtual environment;
4. Easy to install and configure;
5. Most system administrators know how to work with it;
6. Can be installed on any server on which Windows can run.

Cons:

1. Poorly suited to virtualizing non-Microsoft operating systems (i.e., anything other than Windows);
2. Advanced administration tools (Virtual Machine Manager) are paid;
3. You will have to pay for each copy of Windows inside the hypervisor (this applies if you use the Hyper-V Server product; if you use the Hyper-V role of Windows Server 2008 R2 Datacenter, you do not have to pay for the copies of Windows running in the virtual environment).

VMware ESXi

Pros:

1. From a technical point of view, the most advanced hypervisor;
2. Free (can be downloaded from the VMware website);
3. Supports many operating systems internally (Windows, Linux, BSD, Solaris, etc.);
4. Easy to install and configure.


Cons:

1. Advanced administration tools (vCenter) are paid;
2. Can be installed only on a limited number of servers;
3. You will have to pay for each copy of Windows inside the hypervisor;
4. Not all system administrators know how to work with it.

XenServer

Pros:

1. Supports many operating systems internally;
2. Free;
3. Supports a fairly large number of servers.

Cons:

1. Advanced administration tools are paid;
2. You will have to pay for each copy of Windows inside the hypervisor;
3. Most system administrators have not worked with it.

Conclusions on choosing a hypervisor:

- If you want to run OS and software from Microsoft in a virtual environment, choose Hyper-V.

- If you want to run various operating systems (Windows, Linux, Solaris, etc.) in a virtual environment and your servers are on the ESXi HCL (hardware compatibility list), choose ESXi.

- If you want to run Linux and OSS in a virtual environment, and at the same time you have specialists who can work with it, choose XenServer.

Everything is clear, and I agree.
I chose the free version of the ESXi hypervisor for myself; it suits me just right. Although I never managed to get ESXi to work properly with FreeBSD - there is a noticeable loss of performance - Linux guests (Debian, CentOS) fly.

Even a cursory look at VPS rental offers reveals a striking abundance of virtualization systems offered by hosters: OpenVZ, Virtuozzo, Xen, KVM, Microsoft Hyper-V, VDSmanager and various modifications of these technologies. Each provider lists plenty of advantages of the system it uses, but few compare virtualization technologies with one another or talk about the disadvantages.

In this article, we will fill this gap and take an objective look at the main virtualization technologies used by hosters, which should help beginners make the right choice when renting a virtual dedicated server.

Software and hardware virtualization

The virtualization technologies used in hosting can be divided into two types - software virtualization and full (hardware) virtualization.
The first group includes OpenVZ, Virtuozzo, VDSmanager, and the second includes Xen, KVM and Hyper-V from Microsoft.

Software virtualization implies virtualization at the operating system (OS) kernel level: all virtual machines use a common modified server kernel. At the same time, for the user, each virtual machine looks like a separate server.

Since a common kernel is used, the operating systems on the virtual machines can only use that kernel. For a Linux VPS based on software virtualization, the user can choose any Linux distribution (CentOS, Debian, Ubuntu and so on). For a Windows VPS, users can only count on a server with the same Windows version that is installed on the host; today that is, as a rule, Windows Server 2008.

The indisputable advantage of software virtualization is the speed of virtual machine operations: creating a VPS, reinstalling the OS, booting the server and similar operations take not minutes but seconds. In addition, because node resources are saved (the kernel is loaded once and shared by all VPS), the cost of such a VPS is lower than that of a VPS based on full virtualization.

Cons - insufficiently strict division of resources and the possibility of overselling. However, with today's level of servers used for virtualization, this problem is diminishing. The typical configuration for today's virtualization server is as follows:

Processors: 2 x Intel Xeon E5620 (8 physical cores)
RAM: 48-96GB ECC Reg
Disk system: 4 x 450 GB SAS Hardware RAID 10 (about 5 times the performance of SATA drives)

Such a node allows placing up to 50 servers with the following parameters without any special inconvenience for users:

Processor: 1800-3600 MHz
RAM: 2048-4096 MB
HDD: 20-40 GB

When choosing a VPS on software virtualization, you should never chase cheap offers - they usually mean that the hoster is overselling (selling more resources than it actually has). The normal price of an average VPS with the resources indicated above is from $15-20 per month.
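
To make the "up to 50 servers" figure above concrete, here is a rough sketch of the arithmetic a hoster might do; the node resources are taken from the sample configuration above, while the choice of the upper RAM bound and the lower VPS bounds are my own assumptions:

```python
# Node resources from the sample configuration above.
node_ram_gb = 96                 # upper end of the 48-96 GB range
node_disk_gb = 4 * 450 / 2       # RAID 10 keeps half of the raw capacity -> 900 GB

# A modest VPS from the sample plan (lower end of each range).
vps_ram_gb = 2
vps_disk_gb = 20

print("VPS per node by RAM: ", node_ram_gb // vps_ram_gb)        # 48
print("VPS per node by disk:", int(node_disk_gb // vps_disk_gb)) # 45

# Fitting ~50 such VPS therefore already relies on modest oversubscription;
# offers that are much cheaper imply selling far more resources than the node has.
```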

Hardware virtualization is virtualization at the hardware level: a kind of fair "slicing" of one powerful server into several weaker machines. Each server is completely isolated from its neighbours, and almost all resources are strictly limited per machine.

The obvious advantages are the higher stability of the virtual machines. Unlike software virtualization, where, even if the hoster does not oversell, excessive load on one container can lead to a problem in the operation of neighboring containers, on hardware virtualization VPS are as independent as dedicated physical servers. Since each machine uses its own kernel, one server can run several VPSs at the same time with any operating systems, for example, Linux, Windows and FreeBSD at the same time. For the hoster, this is of course a more significant plus than for the user, but sometimes users may also need to change the operating system, for example, from Linux to FreeBSD.

Sometimes customers are offered the ability to install the OS from their own ISO images, which is quite convenient for specific needs - for example, deploying a telephony server based on Asterisk.

The disadvantages stem from the advantages: because of the complete isolation of the VPS and the impossibility of sharing the same resources between servers, the hoster can place fewer servers on one node than with software virtualization. For the node and VPS configuration above, the number of servers a hoster can place on such a node drops by about one and a half times, which means the price per server also goes up.
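
A rough sketch of how that density loss translates into price; the monthly node cost is a made-up figure purely for illustration, and the density numbers follow the "about one and a half times" estimate above:

```python
# Illustrative numbers only: the node cost is an assumption, the densities
# follow the estimates in the text above.
node_cost_monthly = 700.0      # assumed total monthly cost of running the node, $
containers_per_node = 50       # software virtualization (OpenVZ-style)
vms_per_node = int(50 / 1.5)   # hardware virtualization -> ~33

print(f"Cost share per container: ${node_cost_monthly / containers_per_node:.2f}")  # $14.00
print(f"Cost share per VM:        ${node_cost_monthly / vms_per_node:.2f}")         # ~$21.21
```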

Operating a VPS on hardware virtualization is no different from operating a dedicated server, which means that operations such as creating a VPS, installing an OS or rebooting the server take not a few seconds but about as long as on a dedicated server. Although, if the OS is installed from a prepared template rather than from an image, it takes 3-5 minutes.

Is a VPS on hardware virtualization worth these disadvantages? If you need exactly the amount of resources you pay for, and complete independence rather than compromise matters to you, then yes, it is worth it.

Features of each of the technologies for the user

OpenVZ is a free virtualization technology used by most hosting providers and supported by many VPS control panels, both paid (SolusVM, VDSmanager) and free (HyperVM, OpenVZ Web Panel).

OpenVZ is actively developed and is the first to receive all the innovations, which, after being proven there, make their way into Virtuozzo. Virtuozzo is the commercial version of OpenVZ, developed and promoted by Parallels as the optimal platform for VPS hosting.

Of course, the commercial Virtuozzo technology is more stable and easier for the user to manage (take Parallels Power Panel, which is included in the distribution and ships with the containers), but this technology is not a cheap pleasure: the cost of a Virtuozzo-based VPS borders on that of a VPS on hardware virtualization, while a VPS on OpenVZ is significantly cheaper - almost twice as cheap. That said, today's OpenVZ management tools make working with such a VPS quite acceptable and even convenient for users.

It is worth noting that there is a version of Virtuozzo for Windows, which works in the same way as for Linux.

FreeBSD, unfortunately, is not supported by Virtuozzo or OpenVZ, but most modern Linux distributions are supported by both systems.

VDSmanager is a software virtualization technology for FreeBSD, which then grew into a universal control panel that now supports other virtualization technologies, in particular KVM, Xen and OpenVZ.

However, it is better to choose this virtualization technology only if you need a VPS with software virtualization on FreeBSD.

For Linux VPS, it is better to choose other solutions - they are more stable and, as a rule, more functional. One of the best options is OpenVZ with SolusVM. Not too far behind is the recently appeared free OpenVZ Web Panel, which is already beginning to be actively used by hosting providers.

Xen and KVM from the user's point of view are almost the same in terms of functionality and performance. However, it is worth noting that Xen, which entered the VPS hosting market earlier, today is already evolving from a VPS platform to a cloud platform. For example, a separate cloud-oriented distribution has already been formed - Citrix XenServer.

KVM also has its advantages: for example, it is an integral part of the Linux kernel rather than a module like Xen, and is accordingly developed more actively along with the distributions, in particular the Red Hat-based systems. Hosting providers see this trend and are migrating from Xen to KVM.

Therefore, if you need a VPS on hardware virtualization with Linux or FreeBSD, we recommend choosing KVM, with an eye to the future.

There are also enough management tools for Xen/KVM servers. We consider SolusVM one of the best options: a universal panel for OpenVZ, Xen and KVM VPS that holds about 90 percent of the foreign VPS market and is already being actively adopted by domestic providers.

Hyper-V is a hardware virtualization hypervisor from Microsoft. Today it is considered the best solution for virtualizing servers running Windows, and it is being actively adopted by hosting providers.

It is the best option for a hardware-virtualized VPS with Windows on board, but not the best choice for a VPS with Linux or FreeBSD. For this reason, most hosters position Hyper-V as virtualization for Windows VPS.

VMware is an expensive commercial hardware virtualization technology used today mainly for cloud VPS (the user can change the amount of available resources on the fly and then pay for what was actually used). Traditional VPS based on VMware are quite rare because of the cost of this technology. Note that VMware virtual machines are easy to move between physical nodes without stopping them.

There is no ideal virtualization system for VPS hosting, and there probably cannot be one. Each system is good for its own tasks: if you need a VPS that is fast to manage and run at the lowest price, and convenience and stability are not critical, OpenVZ is the best choice. Do you value stability and comfort but want the benefits of software virtualization? Then Virtuozzo is your choice. KVM is perfect for those who need an honest "slice" of a dedicated server but whose project has not yet grown enough to rent a whole server, and so on.

It is no secret that information technology develops rapidly. It would seem that very little time has passed since the release of Windows Server 2008 R2, and Microsoft has already released a new version of its server operating system - Windows Server 2012. Hyper-V, which is part of the Windows server operating systems, has also taken a big step forward. This article describes the benefits of virtualization and some of the features that are available only when using Hyper-V on Microsoft Windows Server 2012.

What is a virtual machine?

A virtual machine (VM) is a software environment that presents itself to a guest operating system as physical hardware. To the virtualization server, a guest virtual machine is simply a *.VHDX virtual hard disk and an *.XML configuration file. With the help of virtualization we can simultaneously run several operating systems on one computer that have no access to each other's resources, while the operation of each operating system is no different from running on physical hardware.

How is working with virtual machines different from working with physical machines?

  • The ability to easily undo changes to the operating system of a virtual machine using snapshot technology;
  • The ability to deploy a backup of a virtual machine anywhere (in the cloud - for example, Microsoft Azure - or on your own backup server), while being relieved of the hassle of reinstalling drivers;
  • The ability to upgrade the host hardware without any changes on the virtual machine side, thanks to the hypervisor layer between the guest OS and the hardware;
  • The ability to build fault tolerance for your services at the level of the virtual machine as a whole rather than at the level of the end application, which saves money, because fault-tolerant applications are quite expensive.

Pros of virtualization

  • Virtualization is a fault tolerance tool. It allows you not to buy a huge number of servers, increasing the number of points of failure, and hence the likelihood of failure, but to keep all services on one or two servers.
  • Virtualization is a cost-saving tool:
    • You save on hardware by buying one server instead of 10;
    • You save on electricity, since one server loaded at 100% consumes much less power than 10 servers loaded at 10%;
    • You save on uninterruptible power supplies;
    • You save on cooling for the server room, since one loaded server generates less heat than 10 running but lightly loaded ones;
    • You save on server maintenance time: for example, when replacing a server with a more powerful one there is no need to reinstall all the software - you just transfer two files to removable media and click the virtual machine's start button (as sketched below).
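
As a hedged sketch of that "transfer two files" move (the VM name and paths are hypothetical, and the sketch assumes the Hyper-V PowerShell module on both hosts):

```python
import subprocess

def ps(command: str) -> None:
    """Run a PowerShell command on the local Hyper-V host."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# On the old host: export the VM (configuration plus virtual disks) to removable media.
ps('Export-VM -Name "web01" -Path "E:\\vm-export"')

# On the new host: import the exported configuration and start the machine.
ps('Import-VM -Path (Get-ChildItem "E:\\vm-export\\web01\\Virtual Machines\\*.xml").FullName '
   '-Copy -GenerateNewId')
ps('Start-VM -Name "web01"')
```
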
What's New in Hyper-V 2012

Dynamic memory

You can now host more virtual machines on one virtualization server by reducing the amount of RAM allocated to each virtual machine. This is made possible by a new virtual machine parameter called "RAM to start" (startup memory). To estimate the RAM gains from migrating to Hyper-V 2012, you need to look at the RAM consumption pattern of each virtual machine.

The graph shows the most common pattern of memory consumption over time: during OS boot, memory consumption rises sharply, then drops almost by half and stays at that level. On Windows Server 2008 R2, in the settings of the virtual machine whose graph is shown above, you would have to grudgingly allocate 1600 megabytes of RAM to it, even though the top 800 megabytes are needed only to start the virtual machine and are essentially never used afterwards.

If the RAM consumption of your virtual machines looks similar, you are strongly advised to migrate to Microsoft Windows Server 2012! It will let you allocate only 800 megabytes of RAM to this VM. "How?" you ask. "Surely the VM simply won't start because there isn't enough RAM?" Very simple: the "RAM to start" parameter specifies only the amount of RAM allocated at VM start-up. Once the guest OS has fully loaded, the hypervisor reduces the allocation to the value specified in the VM parameters. An attentive reader may ask: "What happens if I need to restart the VM but there is no free RAM left? Where will the hypervisor get the missing RAM to start the VM?" If you know how a swap file works, you have probably already answered this question: the hypervisor will allocate the missing amount of memory on the hard disk and, once the guest system has booted, free up that disk space.
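
A small sketch of the density gain from "RAM to start", using the 1600/800 MB figures from the example above; the 32 GB of host RAM is an assumption, and the model ignores host overhead and the fact that all VMs cannot boot at the same instant:

```python
# Figures from the example above: the VM needs 1600 MB to boot but only
# ~800 MB in steady state. 32 GB of host RAM is an illustrative assumption.
host_ram_mb = 32 * 1024
startup_mb, steady_mb = 1600, 800

# Hyper-V on Windows Server 2008 R2 (as described above): the full boot-time
# amount stays allocated for the VM's lifetime.
vms_static = host_ram_mb // startup_mb

# Hyper-V 2012 with "RAM to start": after boot the allocation shrinks back to
# the steady-state value, so density is limited by steady-state consumption.
vms_dynamic = host_ram_mb // steady_mb

print(f"VMs per host with a static allocation:    {vms_static}")   # 20
print(f"VMs per host with the startup RAM option: {vms_dynamic}")  # 40
```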

VM replication

Virtual machine replication is a mechanism that Microsoft positions as a disaster recovery tool. In simple terms, replication is a backup mechanism for virtual machines built into the Hyper-V platform itself. Using replication lets you keep a backup copy of a virtual machine that is always ready for use.

Pros of replication:

  • Very simple failover algorithm (actions in case of problems with the main server).
  • Built-in replication health monitoring.
  • Built-in tools for testing backup health.
  • Replication is not demanding on the bandwidth between the primary and backup servers.

Cons of replication:

  • Loss of the data changed between synchronization sessions;
  • Failover to the backup server is manual (a sketch of the manual steps follows below).
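
To illustrate that manual failover, here is a hedged sketch of bringing up the replica copy on the backup server, driven from Python; the VM name is hypothetical and the cmdlets assume the Hyper-V PowerShell module on Windows Server 2012:

```python
import subprocess

VM = "web01"  # hypothetical replicated virtual machine

def ps(command: str) -> None:
    """Run a PowerShell command on the replica (backup) Hyper-V host."""
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", command], check=True)

# On the backup server, after the primary server has failed:
ps(f'Start-VMFailover -VMName "{VM}" -Confirm:$false')    # switch to the replica copy
ps(f'Start-VM -Name "{VM}"')                               # start the failed-over VM
ps(f'Complete-VMFailover -VMName "{VM}" -Confirm:$false')  # commit once you are satisfied
```
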
Virtualization Implementation Tips

Before you start implementing a virtualization system, you need to understand why you need it and which tasks you want it to solve. One of the most common mistakes when introducing virtualization is experimenting on the company's production services. For example, implementing a virtualized environment with only one server that supports virtualization technology increases risk: if that server's hardware fails, there will be no access to any of the virtual machines that were running on it.

Before starting to design the virtualization of a server node, you need to understand the risks and limitations of using virtual machines in your company, and understand how the new capabilities will be used and how this will affect the company's IT infrastructure as a whole. The project plan must include a stage for training specialists and a plan for updating and creating the regulatory documents for maintaining the server node.