Ep.2: Choosing a Hypervisor

In this episode I discuss my experience with ESXi, Hyper-V, and KVM, and what I think the pros of each are.

Show notes below:

ESXi 5.x (4.x may have as well; I don’t remember anymore) and 6.0-6.1 disabled the HTML config UI without a paid license (I don’t think that’s changed). So you have to use the Windows vSphere client, and you’re supposed to run only a single server per the license.

Limitations:

    You are not able to manage hosts via vCenter Server
    Only 1 physical server is allowed (clusters with more physical servers are not allowed)
    Only 2 physical CPUs per hypervisor are allowed
    Only 128 vCPUs per hypervisor are allowed
    The maximum number of vCPUs you can assign to a single VM is 8
    Physical RAM is limited to 12TB per hypervisor
    You need to acquire VM support separately

KVM comes on most adult distros, but… for the links:
https://virt-manager.org/

FWIW - most “home-lab” setups can do networking through the GUI… bridge, NAT, or even PCIe passthrough (for networking/USB/sound/SATA, NOT GPU; GPU requires vfio/OS config to support it) is all point-and-click in the virt-manager GUI these days.
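
If you’d rather script it than click, virsh can do the same bridge attach. A minimal sketch, assuming an existing host bridge named br0 (the bridge and network names here are placeholders):

    # Define a libvirt network that attaches guests to an existing host bridge.
    # "br0" is an assumed name - adjust to match your host.
    cat > lab-bridge.xml <<'EOF'
    <network>
      <name>lab-bridge</name>
      <forward mode="bridge"/>
      <bridge name="br0"/>
    </network>
    EOF
    virsh net-define lab-bridge.xml
    virsh net-start lab-bridge
    virsh net-autostart lab-bridge

Once defined, it shows up in virt-manager’s NIC dropdown like any other network.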

A typical “developer workstation” with the server bits enabled as well (so that you can build new kernels and such if needed) requires about 20GB of disk space to avoid choking on itself over the long term… so a 32GB thumb drive for CentOS/RHEL/SuSE/Ubuntu should do it. You’ll want to remap /var/tmp to a ramdisk (tmpfs) to prevent churn on your USB drive. USB 3.0 drives that tout 130MB/s speeds provide more than adequate throughput and lifespan as a root partition; this is my default server setup.
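
For the /var/tmp remap, a tmpfs line in /etc/fstab does it (the 2G size cap here is just an example; pick one that fits your RAM):

    # /etc/fstab: mount /var/tmp as tmpfs so temp-file churn hits RAM, not USB
    tmpfs  /var/tmp  tmpfs  defaults,size=2G,mode=1777  0  0

Run mount -a (or reboot) to pick it up; df -h /var/tmp should then report tmpfs.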

Oh, 2 more things:

  1. Proxmox? Unraid? FreeNAS? Seems worth discussing some of these in this domain as well - though your point about hitting the big players first is a good one.

  2. “Swappiness”. Something I’d forgotten about but ran into when mixing ZFS+KVM recently… KVM/Proxmox (and I’d have to assume the others) tend toward consuming swap space even before memory is “over-subscribed”. I believe (don’t quote me on this) it’s a side effect of “large page allocations”: hypervisors tend to request large contiguous blocks of memory from the OS for performance, rather than one-off pages here and there. As a result, the host kernel says “no” more often when memory is loaded but not full, and you get swap usage you might not expect.

This “swappiness” means you need to accommodate it: either some disk space to handle the swap, or more RAM to limit over-subscription, or both, depending on your needs. So your USB root partition might be fine, but you might need a disk/SSD/NVMe for swap in some cases to keep things running smoothly.
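
If you want the host less eager to swap in the first place, the kernel knob the name comes from can be turned down (it won’t fix the large-allocation behavior, and 10 is just a common starting point, not a recommendation):

    # Check the host's tendency to swap (default is usually 60), then lower it
    sysctl vm.swappiness
    sysctl -w vm.swappiness=10
    # persist across reboots:
    echo "vm.swappiness = 10" > /etc/sysctl.d/90-swappiness.conf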

Interesting bit about ESXi! Strange - I must have had some sort of low-tier license, because I recall using the browser to manage VMs and networks with 6.x :thinking:

I had considered diving into more hypervisors than those, but I have zero experience with the others. I want to revisit the subject, and I’ll definitely cite the above when going deeper. I primarily wanted to brag a bit about what I’ve used and why I think KVM is “teh bst” :wink:

Fascinating about the swap space. I’ll have to read more into that. I’ve recently started digging into the config files of QEMU itself to see how it’s all defined and orchestrated. Pretty wild. Although, I will say that of all the XML/nonsense I’ve seen, I much prefer how KVM+QEMU does it lol.
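
In case anyone wants to poke at the same thing, libvirt keeps each VM’s definition as XML (the VM name “devbox” here is just a placeholder):

    virsh dumpxml devbox        # print the full domain XML for a VM
    virsh edit devbox           # edit it with schema validation
    ls /etc/libvirt/qemu/       # where the raw XML files live (don't edit these directly)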

I don’t even run swap anymore. I just have a script that tunes my ARC based on free ram.
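
Roughly this idea (a minimal sketch, not my exact script; the thresholds are placeholders):

    #!/bin/sh
    # If less than ~8 GiB is available, cap the ZFS ARC at 4 GiB.
    avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
    if [ "$avail_kb" -lt 8388608 ]; then
        # zfs_arc_max takes a byte value
        echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
    fi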

Run ZFS and KVM at the same time…

Even with quotas on ZFS’s RAM usage… it (KVM) leaks out into swap under even light stress… (e.g., a big copy beyond the cache/quota size).

I do. I have a Windows VM that I game on.

I’m using hugepages though, so maybe that’s your issue. Maybe you need to use hugepages.
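
If you want to try it, a rough sketch of the host side (8192 x 2MiB pages = ~16GiB here, which assumes a guest of about that size; scale to your VM):

    # Reserve 8192 x 2MiB hugepages for guest memory
    echo 8192 > /proc/sys/vm/nr_hugepages
    # persist across reboots:
    echo "vm.nr_hugepages = 8192" > /etc/sysctl.d/80-hugepages.conf

Then tell libvirt to back the guest with them by adding <memoryBacking><hugepages/></memoryBacking> to the domain XML (virsh edit).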

when you’re hyped on productivity after your off-the-grid vacation and ep#2 hits

good shit my dude! :100:

I hadn’t been using them, since I had planned on near or mild over-subscription of RAM: I have multiple VMs that never really reach 100% load, but they have the cores/RAM to be responsive to short bursts. So I want them to be able to release RAM for use by other VMs.

With 128GB on that machine, I don’t need to be so stingy with RAM, so I can try hugepages.

Finally got to listen to it!
