ddclient, Certbot, Incus and an Nginx Reverse Proxy Home Server

Posted on Sun 09 June 2024 in /hacks

This post discusses how I set up my home server, which technologies I used, and my reasoning for each decision.

As many technology tinkerers do, I have set up a home server for my computing needs. Despite the low number of potential users (little more than my household), containerisation was high on my priority list for a number of reasons. One was that some of the projects I planned on installing are messy, first and foremost HomeAssistant and motion/motioneye. Having these pollute my host system would be disconcerting, and potentially time-wasting when their outdated dependencies conflict or crash. Another reason was to reduce backup complexity, as I can take snapshots of individual containers and of the host system separately, and save them off separately. The final reason was to ensure that any service I spun up could equally quickly (if not quicker) be spun down if it went unused, didn't fit the need, or was too intensive for my setup.

I did want to be able to connect to it when not at home, meaning I would require some dynamic DNS process (ddclient) to update the DNS records for my domain, and a secure way of accessing my server without the risk of snoopers. There are a few ways of automatically renewing SSL certificates, but I used certbot as it is what I am familiar with, and the name server I had in mind has good (wildcard) support for it.

Host Machine

My host system and all my containers use Debian stable (12 at the time of writing), as this distribution provides a solid set of utilities on a very stable base, which is paramount to server uptime. My host machine runs off an NVMe SSD, which provides fast disk access but is relatively small. It has two additional HDDs used for the incus storage pools and the shared directory.

The shared directory hosts my large data files, which are written once and read many times. Because of this access pattern, I was able to use the much cheaper 'shingled' (SMR) disk drives. These drives have been through (well deserved) controversy in recent years, due to their poor performance when data is rewritten. But their original purpose was archiving, which is almost identical to what I use them for. This directory is shared with the containers (as described below), one of which runs samba, allowing my less technical friends (that I allow on my network) to view, modify or upload to the share. This is useful for sharing photos or videos.

An important package to install on any server (host and containers alike) is unattended-upgrades, a Python program that ensures a server you might not touch for years won't fall out of date and accumulate massive security holes. Assuming you have chosen a stable distribution (like Debian), this is a good idea.
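On Debian, enabling unattended-upgrades is a matter of installing the package and turning on the periodic apt jobs, usually via dpkg-reconfigure -plow unattended-upgrades, which writes a small config fragment like the following:

```
// /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

The defaults only apply security updates; which origins are upgraded can be tuned in /etc/apt/apt.conf.d/50unattended-upgrades.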

Why Incus?

Since the internal collapse of LXD at Canonical, and the phoenix-like rise of incus (the LXD fork), I am using incus as my Linux container controller. Of course, no controller is required; you could do all this with straight LXC, but I like the very accessible nature of incus. Another advantage of containerising everything and providing access to those containers through reverse proxies is that the encryption is done in one place: packets are encrypted from the client to nginx, then travel unencrypted across the virtual network. This costs almost nothing in security, and makes it much easier to start new services without having to worry about setting up more encryption. Taking this even further, if I were to set up another server on my network and move containers onto it, I could redirect the proxy address there instead, allowing for a more flexible arrangement. I also have some services (for example MQTT and samba) which do not need to be accessible from outside (the World Wide Web), but are required inside my local network. As these are on ports other than 80 and 443 (which are forwarded from the router), these services could have been reverse proxied with nginx streams. However, I found that using nginx for non-public stream hosting added confusion, so instead I used:

incus config device add mycontainer myserviceport proxy listen=tcp:0.0.0.0:5432 connect=tcp:127.0.0.1:5432

which forwards port 5432 on the host machine to port 5432 on the container. Alternatively, if you want to get more involved, you could use iptables. However, be cautioned: mixing the two will not act as expected, as incus manipulates the routing at a lower level than iptables, so iptables rules will be ignored.

I added the shared directory to the containers similarly:

incus config device add mycontainer shared-directory disk source=/mnt/shared path=/mnt/shared

Most containers only needed read-only access to the shared directory, but some (e.g. samba) required write access. For those containers to be able to write to the directory, I had to map their user and group IDs to a host user I named incus_files, and recursively chown the directory to that user and group.
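As a sketch of that mapping (the uid/gid of 1000 for the incus_files account is an assumption here; substitute your own, and check the incus documentation on raw.idmap):

```shell
# Give the shared directory to the dedicated host account.
sudo chown -R incus_files:incus_files /mnt/shared

# Allow the incus daemon (running as root) to remap this host uid/gid.
echo "root:1000:1" | sudo tee -a /etc/subuid /etc/subgid

# Map host uid/gid 1000 to the same ids inside the container.
incus config set mycontainer raw.idmap "both 1000 1000"
incus restart mycontainer
```

After the restart, processes in the container running as uid 1000 can write to the mounted /mnt/shared.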

Why Nginx?

The war between nginx and Apache rages on. I chose nginx for its modern configuration, its responsiveness and its wide use. Much of this can be claimed for Apache too, but it has started showing its age, with newer functionality taking longer and longer to appear.

Nginx is installed on the host system, with a separate configuration file in /etc/nginx/sites-available for each site I want nginx to handle. These are symlinked into sites-enabled, which is included from /etc/nginx/nginx.conf with a wildcard matching all config files in that directory. In the base /etc/nginx/nginx.conf I also have my maps (for the https upgrade etc.), the default behaviour when a server name isn't matched (return 404), and a block list for suspicious IPs and user agents. Much of the SSL information could be set in the main http context, but I have two different domain names currently pointing at the same system, so the SSL certificates are set per server context.

I have set up the systemd-resolved integration for incus, but I don't use the resolved names (just the virtual network IPs) in the nginx configuration, mainly so I don't have to change the systemd unit for nginx to wait for incusd and systemd-resolved to finish before nginx starts.
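A minimal per-site file might look like the following sketch (the subdomain, container IP and port are placeholders for your own setup):

```nginx
# /etc/nginx/sites-available/service.mydomain.net (names and IPs are examples)
server {
    listen 443 ssl;
    server_name service.mydomain.net;

    # Wildcard certificate, set per server context as described above.
    ssl_certificate     /etc/letsencrypt/live/mydomain.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.net/privkey.pem;

    location / {
        # Plain HTTP across the incus bridge to the container.
        proxy_pass http://10.0.0.10:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Enable it with a symlink into sites-enabled and an nginx reload.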

DNS

The first step was to point my domain's name servers at the LuaDNS nameservers. Due to the time taken for DNS propagation, this can take up to 24 hours.

I decided on LuaDNS for my name server as it provides: a good interface for certbot; an easy configuration for ddclient; and a fairly unique git integration, meaning my DNS rules are in source control.

The next step was setting up the DNS records. I started with an A record for the root (@) of my domain pointing at my IP (entered manually this one time), then a wildcard CNAME record (*.) pointing back at the root domain, i.e.:

Record Type   Name             Content
A             mydomain.net     1.2.3.4
CNAME         *.mydomain.net   mydomain.net
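Thanks to the git integration, these records live as a Lua zone file in a repository. Roughly, the two records above would be written as below (the function names are from my memory of LuaDNS's template documentation; verify against their docs before use):

```lua
-- mydomain.net.lua (zone file kept in the LuaDNS git repository)
a("mydomain.net.", "1.2.3.4")
cname("*.mydomain.net.", "mydomain.net.")
```

Pushing to the repository triggers LuaDNS to redeploy the zone.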

Now when my IP address changes, only the one A record needs updating.

Setting up ddclient was very easy with LuaDNS: I simply requested an API key, allowed the API to modify my account in the settings, and then followed their instructions to set up my ddclient config. After running ddclient manually and seeing the IP update, I enabled the ddclient.service systemd unit and wrapped that up.
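From memory, the resulting config looks roughly like the sketch below; the protocol name and field spellings are assumptions, so follow LuaDNS's own ddclient instructions rather than this:

```
# /etc/ddclient.conf (sketch; values are placeholders)
protocol=luadns
use=web                   # discover the public IP via a web service
login=you@example.com     # LuaDNS account email
password=YOUR_API_TOKEN   # the API key requested above
mydomain.net
```

Running ddclient once in the foreground with -verbose is a quick way to confirm the record updates before enabling the systemd unit.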

Certificates

As I was already fairly prepared, setting up the certificates was a walk in the park, simply running certbot with:

certbot run -d "mydomain.net" -d "*.mydomain.net"

This was simpler than it might have been because I had already checked that this DNS provider has a good certbot interface, and I was quite wedded to using Let's Encrypt, as they are one of the few SSL certificate providers that are morally sound. I had also checked that LuaDNS would allow wildcard certificates, as this makes spinning up new services less of a task. The DNS challenge could instead have been done by delegating _acme-challenge.mydomain.net to a DNS server I run myself, but that is not really possible on a typical home server setup: there is at least one bridge to the outside world, and requiring port 53 to be forwarded for DNS is less than ideal. A final alternative would have been to do it manually, but this wasn't really an option, because it is Let's Encrypt policy to expire certificates after 90 days, and where you can automate, you should.
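For wildcard certificates, certbot needs a DNS-01 challenge plugin for the provider. There is a third-party certbot-dns-luadns plugin; a sketch of its use (option names and credential keys from my memory of its documentation, so double-check) would be:

```shell
# Credentials file for the plugin, e.g. /root/luadns.ini (keep it chmod 600):
#   dns_luadns_email = you@example.com
#   dns_luadns_token = YOUR_API_TOKEN
certbot certonly \
  --authenticator dns-luadns \
  --dns-luadns-credentials /root/luadns.ini \
  -d "mydomain.net" -d "*.mydomain.net"
```

Once issued this way, certbot's standard renewal timer handles the 90-day expiry automatically.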

Room for Improvements

There are several aspects of my setup I have not delved into here, as I am yet to find an approach I am comfortable sharing.

One of these is backups. I currently take backups of the host system (using BTRFS snapshots), encrypt them and save them on-site. The containers are also snapshotted and then encrypted, but via incus. This produces much larger files than necessary, and I am looking at improving how it works. Once that is done, I will rent an off-site server to hold my backups as well. The shared directory has no automatic backups, due to its size and my current storage limitations. There are manual backups, but this is definitely a place for improvement.

Another place for improvement would be creating a container overlay for my specific configuration (enabling ssh and copying over my public key, installing unattended-upgrades, etc.) to apply automatically on top of the Debian image when spinning up a new container. I cannot simply customise a Debian container and flatten it to an image, because then it would not be up to date on creation.
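One candidate approach is incus's cloud-init support: a profile can carry user-data that runs on first boot of a fresh (and therefore current) image. A sketch, assuming a cloud image variant such as images:debian/12/cloud, with an example key and package list:

```yaml
# incus profile (e.g. "base"), applied at launch alongside the default profile
config:
  cloud-init.user-data: |
    #cloud-config
    packages:
      - unattended-upgrades
      - openssh-server
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... me@laptop
```

Launching would then look like: incus launch images:debian/12/cloud mycontainer -p default -p base.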

Monitoring of the server could be done with something like Nagios, but currently nothing like that is set up.

Tips, Tricks and Notes

  • A large, powerful server isn't required! The server I set up is on very modest hardware; a Pi, however, probably won't cut it. You can get cheap used PCs on eBay for about £35 which will do this better. They also have a hard drive, which copes with the constant reads and writes of a root filesystem (logs etc.) much better than an SD card.
  • When considering hardware, be sure to think about power usage: more powerful machines will often draw more power, but older machines will be less efficient with the power they draw.
  • You can block IPs per region. If you won't (often) be accessing the server from outside your country, you can ban all IPs not from your country. Some DNS providers offer this, but you can also do it with nginx. This reduces the amount of traffic from scrapers etc. on your server. If you are outside your country and need to access your server, use a VPN.
  • Choosing a good DNS provider can save you a lot of work, make sure to do your research beforehand.
  • Firewalls aren't required for most home servers, as they are usually already behind a firewall on the router, and only known services/ports will be accessible. However, it does no harm to install one.
  • ISPs often offer a (usually paid) option for a static IP address. This isn't required, and I would say it is actively unhelpful: a change in IP address is easily caught by dynamic DNS (DDNS) programs (see ddclient), and when your IP address changes, scrapers that had found your old address won't follow you to the new one.
  • If you want to enable ssh access from outside the network, use a non-default port (not 22!) and disable password authentication, allowing only public-key authentication.
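That last tip amounts to a small sshd configuration fragment; a sketch (the port number is an example, pick your own):

```
# /etc/ssh/sshd_config.d/hardening.conf
Port 2222
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
```

Reload sshd after writing it, and confirm you can still log in with your key from a second session before closing the first.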