Closet Server Consolidation

Yeah, I had that thought after I first started copying…

Particularly as I run this machine off a USB 3.0 thumb-drive root partition… I need to shrink the ZIL and put ~32-64G to swap and /var/tmp
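A later repartition could look something like this — purely a sketch; the device path (/dev/nvme0n1), pool name (volume1), and sizes are my assumptions, and the device has to be freed up first:

```shell
# Hypothetical split of the Optane into a small SLOG, swap, and /var/tmp.
sgdisk --zap-all /dev/nvme0n1
sgdisk -n1:0:+16G -c1:zil    /dev/nvme0n1   # a small SLOG is usually enough
sgdisk -n2:0:+32G -c2:swap   /dev/nvme0n1
sgdisk -n3:0:0    -c3:vartmp /dev/nvme0n1   # remainder for /var/tmp

zpool add volume1 log /dev/nvme0n1p1        # re-attach the SLOG
mkswap /dev/nvme0n1p2 && swapon /dev/nvme0n1p2
mkfs.xfs /dev/nvme0n1p3 && mount /dev/nvme0n1p3 /var/tmp
```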


It is very pretty. Reminds me of the one that comes with Fedora Server:




@cekim This is so cool. What are you going to do with it? How do you like EPYC? :smiley:


EPYC is fine… runs nice and cool so far (I expect that from a <3GHz CPU). I’ve been playing around with my 2990WX for a while, so it’s not a huge surprise.

This machine is going to be a catch-all for VMs (VPN, various external services) and storage in the condo. So, it will do pretty much everything that isn’t running simulations. I previously had an x3650 M3 doing the VMs and another machine doing NFS, with a 40G switch shared between them and the compute nodes to move the data around.

In the condo, my desktop is a 7980XE, so it pulls double duty as desktop and compute - I plan on moving a dual 2696v3 ucode-hacked compute node in here too to back up the 7980XE (it’s not much slower than the 7980XE core-for-core in real-world terms and has 2x the cores). No loud 40G switch for the condo - just p2p using the dual-port Mellanox card put into the EPYC.

The “barn” will eventually house everything that doesn’t make the cut for the condo - so the full rack goes there. 2 more of those 2696v3 rigs, the 2960v4 rig I pulled out of this chassis… All that will go in the as-yet-unfinished barn (it has a floor now, but it still needs climate-controlled storage).


Oh wow, that’s crazy! What VPN are you using? :grin:

Yes? :wink:

More than one.

OpenVPN is among them… some work, some personal.


Looking for ideas to set one up at home lol :smiley: I’ve used OpenVPN with pfSense in the past.

Yeah, I have a pfSense box dedicated to “personal” stuff that runs OpenVPN… for partial/whole-network VPN. I bypass various devices (TV) and nets as needed for those things that VPNs break.

That’s the best way to use OpenVPN IMO.


During this multi-TB copy, some aspect of this keeps leaking out onto swap despite my limits, so I disabled the ZIL and repurposed it as swap. Can’t repartition the Optane without a reboot, so I’ll break that up into ZIL and swap later…

Added the Optane swap and then disabled the default USB 3.0 swap.
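The swap-over, sketched — device names here are my assumptions (Optane at /dev/nvme0n1, USB swap at /dev/sda2), not from the post:

```shell
zpool remove volume1 /dev/nvme0n1   # drop the SLOG so the device is free
mkswap /dev/nvme0n1                 # whole device as swap for now
swapon --priority 10 /dev/nvme0n1   # prefer the Optane over the USB swap
swapoff /dev/sda2                   # retire the slow USB 3.0 swap
swapon --show                       # confirm only the Optane remains
```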

KiB Mem : 65855732 total,   264792 free,  9207660 used, 56383280 buff/cache
KiB Swap: 62062684 total, 61948944 free,   113740 used. 54749504 avail Mem 


is that top output or free?

CentOS + KVM + ZFSonlinux…

Or did you mean “use proxmox instead?”

64G of memory total
62G of swap total (optane)
264M free memory
61.9G free swap
113.7M swap used

The rest is buffer/cache (zfs/kernel)
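Those numbers all come straight from /proc/meminfo, which both top and free read — a quick read-only check:

```shell
# Read-only: the raw fields behind top/free's memory and swap summary.
grep -E '^(MemTotal|MemFree|MemAvailable|Buffers|Cached|SwapTotal|SwapFree):' /proc/meminfo
```

MemAvailable is the kernel's estimate of reclaimable memory, which is why "free" looks tiny while "avail Mem" is huge when most RAM is buffer/cache.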


I’ve noticed Proxmox does the same even if swappiness is set to 0 or 1.

So I avoid doing most stuff from Proxmox and do it in a VM or remotely.
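Worth noting that the knob only biases reclaim — even 0 does not forbid swapping under real memory pressure. Checking it is read-only:

```shell
# Read-only: current global swappiness (0-100 historically; recent
# kernels accept up to 200). Low values bias reclaim away from swap,
# they do not prevent it.
cat /proc/sys/vm/swappiness
```

To pin it, `sysctl vm.swappiness=1` for the session plus a drop-in under /etc/sysctl.d/ to survive reboots.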


Is this the top command or the free command?

I’ve played with Proxmox - it has lots of whizbang tools and features to make migration, failover, and frequent VM creation easy, but ATM I don’t have the time to invest in figuring it out again. Its storage domains took me a while to puzzle through…


Turn the swap field on in top and sort by it… to see what’s using it.

Hmm, top didn’t want to cooperate, but looking through /proc I see qemu is the biggest culprit…

I’m guessing its allocator, looking for large blocks of contiguous pages, is bypassing the kernel’s opportunistic freeing of buffer-cache pages under stress, so despite the “availability” of memory, it’s going straight to swap sometimes…


7915 32268 kB /usr/libexec/qemu-kvm-name
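When top won’t cooperate, the same /proc walk can be scripted — a read-only sketch that sums the VmSwap field per process:

```shell
# Read-only: list the biggest swap users from /proc/[pid]/status.
# 2>/dev/null hides processes that exit mid-scan.
for s in /proc/[0-9]*/status; do
    awk '/^Name:/   {name = $2}
         /^VmSwap:/ {if ($2 > 0) printf "%8d kB  %s\n", $2, name}' \
        "$s" 2>/dev/null
done | sort -rn | head
```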



Next time you are in top…
Play with lowercase e and uppercase E - they cycle the memory units (KiB → MiB → GiB…) for the task list and the summary area, respectively.

Sorry for the delay in filling out the details - seems copying a dozen or so terabytes (I cleaned up a little) takes a while. :wink:

Looks like we are a little under halfway done copying what is now 16T uncompressed.

So far the big giant “sparse” files that make up various VMs (multi-terabyte qcow sparse files whose contents are already mostly compressed) are harshing my compression:

 zfs get all volume1 | grep compressrat
volume1  compressratio         1.08x                  -

I expect that to improve a bit once the full copy is done.
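The ratio can also be watched per dataset as the copy lands — pool name from this thread; `-r` recurses, and compressratio only reflects blocks written since compression was enabled, so it drifts as data arrives:

```shell
# Read-only: compression settings and achieved ratio per dataset,
# plus logical vs on-disk size for the pool root.
zfs get -r compression,compressratio volume1
zfs get used,logicalused volume1
```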

The bottleneck in the copy is the source volume - a slower array of only five 10TB drives… It’s rather effectively cached by four 512GB SSDs in the Synology, but that doesn’t help this bulk imaging… So we wait…

The Synology puts out 500-700MB/s for “warm” data in the cache, but is limited to spindle speed on a bulk copy… so it bounces between 50 and 300MB/s depending on rust cylinder and the number of small files.
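To watch where the copy sits between those extremes, a read-only iostat on the destination pool works (pool name assumed from above):

```shell
# Per-vdev throughput every 5 seconds, 6 samples, then exit.
zpool iostat -v volume1 5 6
```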