HomeLab Part 2 - The Two Computers


The HomeLab begins

Like most people new to this, I started out wanting to run pfSense on spare hardware I had - my college laptop. Dedicating a whole machine to it felt like overkill and a waste of resources, though, and over the past few years I'd come to see virtualization as the solution to everything, after experimenting with VirtualBox back in my college years.

Around this time (January 2019), my friend Illustris had been testing out ESXi, a bare-metal hypervisor, to run pfSense on his similarly overkill laptop, along with other workloads.

Unfortunately, my laptop was not as "spare" as I wanted it to be - my family still needed to use Windows 10 on it, so I couldn't install the hypervisor-only ESXi as its primary OS. While one could certainly run Windows as a guest OS under ESXi on the laptop, my CPU didn't support GVT-g, Intel's mediated passthrough for the integrated graphics. The GT650M is wired up as a muxless Nvidia Optimus setup in my laptop - it has no display outputs connected to it directly, it just renders frames that the Intel iGPU scans out to the screen. This meant I couldn't easily give my family the experience they would expect from Windows as a guest - they don't want to mess around with remote desktop and the like just to use the machine. It's tricky enough to get the GPU passed through at all, let alone to drive the laptop's own display from a guest VM.

[Also, Asus's AuraSync RGB control software only works on bare-metal Windows, since it talks to the motherboard over I2C. It refuses to load - even to control other devices - if the motherboard isn't detected, which it won't be from inside a virtual machine. To keep control of the RGB on my desktop, I wanted to stay on Windows 10 as the host OS.]

Which hypervisor?

ESXi is a Type 1 hypervisor - it runs directly on the bare metal, and runs your virtualization workloads with hardware acceleration and the ability to pass through nearly any physical device on the host to guest VMs.

On the other hand, software like VirtualBox is what's called a Type 2 hypervisor - it runs on top of a fully-featured host operating system, as just another program. It still uses hardware acceleration where possible to virtualize guests, but it is limited in functionality - it cannot pass through arbitrary PCIe devices from the host, because as a userspace program it doesn't have exclusive control of the hardware. It can pass through some things, like individual USB devices, but not PCIe graphics cards, entire USB controllers, host audio controllers, and so on. VirtualBox on Linux hosts did claim experimental support for PCIe passthrough, but that wasn't applicable to me since I had to run a Windows host.

Linux's KVM is interesting - KVM itself could be considered a Type 2 hypervisor, since the guest VMs it runs are just more processes as far as the host OS is concerned. But it has all the features of a Type 1 hypervisor, because it's part of the kernel - it works with the host to provide full device control and passthrough (via VFIO) to guests.
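To make that concrete: passing a whole PCIe device to a KVM guest is roughly a matter of binding it to vfio-pci and handing it to QEMU. A hedged sketch - the PCI address, device IDs, and disk image below are placeholders, not anything from my setup:

```sh
# Bind the card to the vfio-pci stub driver (vendor/device IDs are placeholders)
modprobe vfio-pci
echo "10de 0fd1" > /sys/bus/pci/drivers/vfio-pci/new_id

# Boot a guest with the device handed over wholesale
qemu-system-x86_64 -enable-kvm -m 4G -cpu host \
  -device vfio-pci,host=01:00.0 \
  -drive file=guest.qcow2,format=qcow2
```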

Wait a minute - if Linux can do that, why hasn't Microsoft built this into Windows? They have - Hyper-V is a close analog. When you enable Hyper-V on Windows, a thin hypervisor layer is inserted below the current host OS, which then effectively becomes a guest itself, albeit a special "primary" one. You can then run guest virtual machines with hardware acceleration and even dynamic memory sizing, connected to customizable virtual network switches and bridges, all managed from the Hyper-V Manager inside Windows. Is PCIe passthrough possible? Yes! Sadly, Microsoft has locked that functionality (Discrete Device Assignment) to the Server SKUs of Windows :(. While people do run Windows Server as their primary gaming OS, I didn't want to deal with that, so I resigned myself to not being able to pass physical devices to Hyper-V guests.
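Enabling it on Windows 10 Pro is a one-liner from an elevated PowerShell prompt (the same role can also be turned on from "Turn Windows features on or off"):

```powershell
# The reboot after this is what slides the hypervisor layer in underneath the existing install
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```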

The good news was that nearly everything I wanted to do could be managed without it - Hyper-V supports creating virtual NICs with VLAN tagging, bridged to specific host network adapters, even when the adapter is shared with the host OS. Best of all, I could keep running Windows 10, so my family's use cases would be unaffected - there would just be some RAM set aside for my Hyper-V guests.
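As an illustration of the sort of thing I mean - the switch, adapter, and VM names below are made up for the example:

```powershell
# Bridge a virtual switch onto a physical NIC, keeping it shared with the host OS
New-VMSwitch -Name "LabSwitch" -NetAdapterName "Ethernet 2" -AllowManagementOS $true

# Give an existing guest a NIC on that switch, tagged into VLAN 20
Add-VMNetworkAdapter -VMName "docker-vm" -SwitchName "LabSwitch" -Name "LAN"
Set-VMNetworkAdapterVlan -VMName "docker-vm" -VMNetworkAdapterName "LAN" -Access -VlanId 20
```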


Now that I had a way to run non-Windows things on my laptop and desktop, it was time to get started on the first batch of my homelab bucket list:

Workloads

pfSense

pfSense is a FreeBSD-based software firewall and router distribution, with enough functionality and features for everyone from beginner home users to mid-sized enterprises running multiple datacenters.

Since my laptop only had a single 100 Mbit port, I bought a USB gigabit Ethernet adapter to use for pfSense. I gave the pfSense VM a virtual NIC bridged onto the host's USB adapter, and plugged the Ethernet cable from my ISP's FTTB router into that adapter. How this was laid out physically was a bit of an embarrassment - a 15 meter Ethernet cable ran from my apartment's hall, where the WAN drop was, through a short hop via a wall jack, to the laptop in my room. The Ethernet cable from my laptop to the room's switch snaked over and under my cupboards, and the cable back to the switch in the hall was taped along the edges of my room and walls. Needless to say, my parents weren't pleased with the look. I had to stick with it, though, until I figured out how to redo my network layout with VLANs. That's a topic for a separate post.

I gave the pfSense VM 2 cores and 1 GB of RAM, and even with a maxed-out 100+100 Mbps transfer on the WAN, CPU usage stayed under 70%.
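In Hyper-V terms, the whole arrangement boils down to something like the following - a rough sketch, with the adapter name, paths, and disk size as placeholders rather than my exact values:

```powershell
# Dedicated WAN bridge on the USB gigabit adapter, not shared with the host
New-VMSwitch -Name "WAN" -NetAdapterName "USB Ethernet" -AllowManagementOS $false

# The pfSense guest: 2 vCPUs, 1 GB RAM, first NIC on the WAN bridge
New-VM -Name "pfsense" -MemoryStartupBytes 1GB -Generation 1 `
  -NewVHDPath "D:\VMs\pfsense.vhdx" -NewVHDSizeBytes 16GB -SwitchName "WAN"
Set-VMProcessor -VMName "pfsense" -Count 2
```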

OpenVPN

pfSense bundles a few VPN servers - PPTP/L2TP, and OpenVPN. It's easy to configure an OpenVPN server in the pfSense GUI and generate client configs for my Android phone and for the OpenVPN clients on Linux and Windows. With OpenVPN bridged at Layer 2, clients get an IP within my home network's subnet and behave as if they are physically on the network, no matter where I'm connecting from. With a Layer 3 OpenVPN configuration, clients instead get an IP in a separate OpenVPN-specific subnet - easier to set up, but with some limitations compared to a Layer 2 bridged VPN.
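The difference between the two modes comes down to the interface type in the server configuration - pfSense's GUI generates the equivalent of this under the hood. A sketch with placeholder addresses:

```
# Layer 2: a "tap" device bridged into the LAN; clients get addresses from the home subnet
dev tap0
server-bridge 192.168.1.1 255.255.255.0 192.168.1.200 192.168.1.220

# Layer 3 alternative: a routed "tun" device with its own subnet
;dev tun0
;server 10.8.0.0 255.255.255.0
```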

FreeNAS

FreeNAS is another FreeBSD-based distribution, focused on storage management and sharing. It supports complex disk layouts and multiple ways to expose storage over the network, but at the time I just created a quick 200 GB virtual disk to pass to the FreeNAS VM and share over NFS and SMB. I would later go on to build a more extensive, redundant physical disk setup, and FreeNAS helped me understand how ZFS is laid out and how it works.
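FreeNAS drives all of this from its web UI, but underneath it's plain ZFS. Roughly what it was doing on my behalf, expressed as raw commands - the pool, disk, and dataset names here are placeholders:

```sh
# Single-disk pool on the 200 GB virtual disk, then a compressed dataset shared over NFS
zpool create tank da1
zfs create tank/media
zfs set compression=lz4 tank/media
zfs set sharenfs=on tank/media
```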

Docker

I've been a fan of containerization ever since I worked on it during an internship, and a lot of use cases - torrent clients, web-based file browsers, collaborative notes, and so on - are well packaged as Docker containers. I set up an LXC container on Proxmox with Docker installed inside, and used Docker Swarm with host-mode networking to run various containers. One of them was Portainer, a GUI for managing Docker hosts and their containers.

Side note: I ran into issues getting Docker Swarm's non-host networking to work inside an LXC container. The "virtual" service IP addresses were unreachable, so ports mapped on services via the mesh network couldn't be reached. Running everything with host-mode networking was my temporary workaround.
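For reference, "host-mode" publishing in Swarm means a service's port is opened directly on the node instead of going through the routing mesh. This is roughly how Portainer ran for me - a sketch using standard Docker CLI flags, with nothing specific to my setup:

```sh
# One-node swarm, then Portainer published straight on the host's port 9000
docker swarm init
docker service create --name portainer \
  --publish published=9000,target=9000,mode=host \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer
```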

Elasticsearch

The Elasticsearch-Logstash/Beats-Kibana (ELK) stack is popular for log aggregation and text search, and I look after a few clusters at work. I wanted to run a small cluster at home to store and analyse system logs and performance metrics - hardware temperatures, CPU clock speeds, power draw, and the like.
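A single-node cluster is plenty for home metrics. A hedged sketch of standing one up with the official images - the version tag and heap size are illustrative, not necessarily what I ran:

```sh
docker run -d --name elasticsearch -p 9200:9200 \
  -e discovery.type=single-node \
  -e ES_JAVA_OPTS="-Xms512m -Xmx512m" \
  docker.elastic.co/elasticsearch/elasticsearch:6.6.0

docker run -d --name kibana -p 5601:5601 --link elasticsearch \
  docker.elastic.co/kibana/kibana:6.6.0
```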

Other containers

  • Deluge seedbox: A good enough torrent client, and now that I had unlimited, fast internet at home, it was time to seed Linux ISOs and give back to the community.

  • Puppet: At work I was also introduced to Puppet, a pull-based configuration management system for small-to-large fleets of servers, complete with a Ruby-based DSL and YAML configuration files. As with most things, trying out work tech at home is a great way to get more comfortable with it!

  • Plex: Plex shouldn't need much introduction - it's not open source, but it's still a great entry-level media server for hosting your media and streaming it to local and internet devices. I had a lot of media collected, and the collection was only going to grow to dedicated-NAS levels of hoarding, so it was time to point Plex at it (a rough sketch of how these containers were run follows this list).
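All of these followed the same pattern: an image from Docker Hub, a couple of volumes for config and data, and host networking. A sketch of the Deluge and Plex containers using the linuxserver.io images - the host paths are placeholders:

```sh
docker run -d --name deluge --net=host \
  -v /srv/deluge/config:/config \
  -v /srv/downloads:/downloads \
  linuxserver/deluge

docker run -d --name plex --net=host \
  -v /srv/plex/config:/config \
  -v /srv/media:/media \
  linuxserver/plex
```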


Between all of these workloads, it finally felt like I was putting my PC and my old laptop to good use. Sure, I was called a Windows fanboy and a heathen by my friend Illustris for running a Hyper-V cluster, but that was a price I was willing to pay. For now, at least.

I would soon notice that Hyper-V's "dynamic memory" ballooning didn't work as well as expected, and setting aside a major chunk of my RAM for non-Windows workloads without being able to adjust it dynamically was a downer. My laptop also drew more power than I expected with a couple of Linux VMs running in Hyper-V - virtualization overhead on the old 3rd-gen i7 noticeably added to CPU power usage and temperatures. There was no way to run lightweight containers on Hyper-V, and running full virtual machines for everything was pretty inefficient. I kept my Docker containers consolidated in a single VM, but I didn't want to stuff the ELK stack, the puppetserver, and everything else into that same VM - and FreeNAS and pfSense had to run as separate VMs anyway.

I ran with this setup for a few months, until the events of Homelab Part 3 came around.

Note: This post was written much later than the events it describes.