So, I’ve recently acquired a new server - coincidentally, the one this post is running on - and I’ve run into a handful of…problems. I’m writing this both to get some peace of mind and to hopefully help others who may be in a similar situation. I’ll go through my specs, the build process, my thoughts, and what I did to fix all the issues.

Specs

As with every good build, we’ll need some specs to begin with. I’ll be honest with you, this shit probably makes no sense to anyone who actually knows their stuff, but I don’t pride myself on knowing what I am doing. I pride myself on being able to act as if I knew what I was doing - that’s a vital difference. Anyways, onwards to the specs:

  • CPU: Intel i3-10100F
  • Memory: Corsair Vengeance LPX 2x8GB
  • Motherboard: ASRock H510M-ITX/AC mITX
  • Case: Fractal Design Node 304
  • PSU: Corsair CX550M
  • Cooler: Arctic Alpine 12 Passive
  • Storage
    • Samsung 970 EVO Plus SSD 2TB
    • 4x Western Digital WD Red 4TB (WD40EFAX)

So hold up…an i3 for a server? 16 GB of RAM? PASSIVE COOLING?! You may ask yourself what the fuck I am on, and I wish I could tell you that it was good drugs, but unfortunately, it was only boredom. Allow me to explain the rationale behind each choice. The i3 is a huge step up from my old CPU, which was, I kid you not, an Intel Celeron J3455. Now that the laughter has died down a bit, allow me to explain that this server is really just here to serve my music collection to myself, run TeamSpeak 3 for my friends (or “friend”, rather. I’m not very popular) and provide some data storage for whatever I have to store. So an i3 really does everything I need it to do. As for the passive cooling…it really does seem to be enough for a 65W TDP part. It’s not like it’s a gaming rig, where you have to expect your CPU to run at max power for hours on end (I feel like this “no friends” thing starts to make more and more sense). Usually, my server sits pretty much idle, so the low power draw keeps things cool. The server never got hotter than 70°C, and even that reading came from the SSD, not the CPU.

The RAM also seems to be fine. I’ve never seen more than 80% allocated, let alone used. I might upgrade to 2x16 GB at some point, but not now - I really see no reason to.

The Case - Node 304

So, I generally like the Node 304, but it’s the little things that bother me. For example, the manual is a fucking joke. They spend more time explaining how much they love minimalist design, like every other Scandinavian company, instead of explaining how the fuck they want me to wire up the case fans. In case you’re wondering: attach a 4-pin Molex connector from your PSU to the Molex input of the case’s fan controller, then connect the three case fans to the fan controller’s pins.

                                                 ┌────────────────────┐
                                                 │                    │
                                           ┌────►│ Large Back Fan     │
                                           │     │                    │
                                           │     └────────────────────┘
                                           │
┌─────┐          ┌────────────────┐        │     ┌────────────────────┐
│ PSU ├─────────►│                ├────────┘     │                    │
└─────┘          │ Fan Controller ├─────────────►│ Small Front Fan #1 │
                 │                ├────────┐     │                    │
                 └────────────────┘        │     └────────────────────┘
                                           │                           
                                           │     ┌────────────────────┐
                                           │     │                    │
                                           └────►│ Small Front Fan #2 │
                                                 │                    │
                                                 └────────────────────┘

Aside from that, the case generally is nice. It has a 140mm fan in the back and two 92mm fans in the front, all of which can be set to three different speeds with a physical switch on the back. That’s pretty neat!

The case also comes with three braces to mount HDDs with, but realistically, you will have to remove one of them if you plan on installing a GPU or if, like me, your cables are not wireless and you actually need space to wire everything. But four HDD slots are plenty, and my motherboard doesn’t have more than four SATA connectors anyways, so that’s fine.

All in all, it looks aesthetically pleasing and keeps the insides of my hardware inside. What more could I ask for?

The Build

This isn’t a “how to build a PC” post, although I could make one of those if I felt like it. Generally, the build was fairly unimpressive. It begins by opening the case via the three (or four?) thumb screws on the back. The black top slides right off and reveals the inside of the case. The PSU is actually mounted in the center of the case, with an extension cable running from the power socket at the edge to the PSU inside. This is pretty clever and saves some space.

After the PSU was installed, I took the motherboard out of its packaging and installed the CPU, the passive cooler and the SSD. I don’t know why, but I fucking love M.2 SSDs! This would come back to bite me later.

Then, the HDDs were installed. They’re mounted sideways, so the “tops” of the HDDs face each other. I decided to label them 0 through 3 and connect them to SATA_0, SATA_1 and so on. That way, if one HDD fails, I can tell which one by its number. This truly is big-brain time.

Then it was time to insert the motherboard into the case, which was pretty straightforward. Oh yes, lest I forget: none of the screws were labelled. It was pretty much up to me to figure out which one belongs where. Good job! Anyways, once the board was in, I attached the RAM. To be fair, the passive cooler made that pretty fucking difficult, but pain only makes you stronger. The CPU and motherboard power connectors were much easier to insert than I originally thought. Finally came the USB case headers and the system pins (which you should not insert into the COM1 pins, which look identical - ask me how I know), and we were ready to power on.

And as you may expect: Power on, no image. None.

Fuck…

Time to Panic

The worst thing that can happen when you build a PC, and it has happened to me every single time I’ve built one. At least the fans and HDDs were spinning and the system power LED was glowing - so, some signs of life.

So I did what any sane person would do and immediately got on a phone call with a colleague of mine for some technical and emotional support. We went through all the normal steps you’d go through when checking a PC in this state:

  • Are all cables inserted correctly?
  • Is the RAM seated properly?
  • Does a different HDMI cable help?
  • What about DisplayPort?

So yeah, nothing worked. Luckily, Journeyman Geek worked out that Intel’s F-series CPUs don’t have any video output at all. Not just “no Intel HD Graphics” - literally nothing. No BIOS, no terminal, nothing. You might as well put a DisplayPort cable up your ass and wonder why you can’t do a DIY colonoscopy on your TV.

But my server wouldn’t need any graphics output once it was up and running. All I needed was an SSH server, so I had to get Ubuntu Server running, set up SSH, and then I could do the rest remotely. I decided the best course of action would be to insert my M.2 SSD into my gaming PC, boot from USB, install Ubuntu Server on the SSD, then switch it back. That worked fine, but when I booted the server…no network connection? Why not?!

No Network for my Intel i219-V

I couldn’t get any network connection going and decided I had to see what was happening on the system itself, so plan B was executed: remove the GTX 1080 from my gaming PC and insert it into my server. And by “insert” I mean “push every fucking cable aside to somehow make space”. It worked, and I finally had graphical output.

So I checked the network devices with ip a and, lo and behold, this was the output:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: wlp2s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 90:cc:df:63:dd:7d brd ff:ff:ff:ff:ff:ff

As you can see, my Ethernet device isn’t even listed. lshw -C network showed that it was there, but “unclaimed”. I tried many, many things, but ultimately it turned out that Linux kernel 5.4, which ships with Ubuntu Server 20.04, doesn’t support this NIC. This can be fixed by installing the HWE kernel with the following command:

sudo apt install --install-recommends linux-generic-hwe-20.04

But wait - how can you install anything with no network? Luckily, I could use the wlp2s0 Wi-Fi device to connect to my AP and get some connectivity that way. If you’re asking yourself why a server distro would ship drivers for a Wi-Fi card but not for an Ethernet card…your guess is as good as mine.
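For reference, the temporary Wi-Fi hop looked roughly like this - a minimal netplan sketch with a made-up SSID and passphrase (it also assumes wpasupplicant is available, which it was on my install):

# Hypothetical SSID and passphrase - substitute your own
sudo tee /etc/netplan/99-wifi.yaml > /dev/null <<'EOF'
network:
  version: 2
  wifis:
    wlp2s0:
      dhcp4: true
      access-points:
        "MyHomeAP":
          password: "definitely-not-my-real-passphrase"
EOF
sudo netplan apply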

ZFS

So now, let me tell you about the backup strategy I had with my old server. Basically, I had two 6 TB HDDs, and whenever I felt like it, I would connect the backup HDD to my server, remount my primary disk as read-only (sudo mount -o remount,ro /media/Primary, in case you wondered), create a SquashFS image of the disk, and write it to the backup disk. This process took about a night, but the neat thing was that even if it took longer, I could still listen to my music while the server was backing up.

But as you can imagine, “whenever I feel like it” is not really a good backup concept, so I wanted something better. Journeyman Geek, the guy who helped me identify that my CPU really didn’t like visual output, suggested I use ZFS. At first, this seemed overwhelming, as all the guides and explanations involved something like 20 disks - way overkill for what I had. But after a while, I managed to get it.

ZFS Basic Structure

ZFS is actually remarkably simple once you understand the basics. ZFS defines one top-level structure: the “zpool”. A zpool is simply a collection of devices, not unlike a JBOD RAID. You can mount your zpool to a mount point, write to it, and the zpool writes the data to one of the disks. In theory, if you desired, you could create a zpool for a single physical disk and just use the neat administration features ZFS offers.
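Just to illustrate that last point, a hypothetical single-disk pool (the disk name is made up) is a one-liner, and you still get checksumming, snapshots and datasets:

# One disk, zero redundancy - but all the ZFS conveniences
sudo zpool create -m /media/scratch scratch /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL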

But you’re not a sucker! You want redundancy, performance and all the other amazing buzzwords that sysadmins get erections over. A zpool by itself doesn’t offer any redundancy. Seriously, none. If one disk dies, it’s dead and you’re fucked. How badly, I don’t know - in the best case, some data is gone; in the worst case, all of it is. So let’s add some redundancy, right?

This is where vdevs - virtual devices - come into play. It took me a while to understand what a vdev is, and the fact that the Arch Wiki uses the term completely wrong doesn’t help. Basically, a vdev is one virtual device, comprising one or more data sources. These data sources are usually physical disks, but you can also use image files, which the Arch Wiki mistakenly labelled “vdev”.

If none of this shit makes any sense, here is the output of zpool status, and all will become clear:

  pool: data
 state: ONLINE
  scan: scrub repaired 0B in 0 days 02:12:51 with 0 errors on Tue Oct  5 19:59:20 2021
config:

	NAME                        STATE     READ WRITE CKSUM
	data                        ONLINE       0     0     0
	  mirror-0                  ONLINE       0     0     0
	    pci-0000:00:17.0-ata-1  ONLINE       0     0     0
	    pci-0000:00:17.0-ata-2  ONLINE       0     0     0
	  mirror-1                  ONLINE       0     0     0
	    pci-0000:00:17.0-ata-3  ONLINE       0     0     0
	    pci-0000:00:17.0-ata-4  ONLINE       0     0     0

So all of this seems pretty complicated, so let me explain from the bottom up: pci-0000:00:17.0-ata-* are the physical disks. It’s recommended not to use names like /dev/sdX, as these are not predictable - what was /dev/sda on this boot could be /dev/sdc on the next. While that may seem unlikely, using physical paths makes the entire system more predictable. I also labelled all the cables and drive bays 1-4, just to make sure I don’t pull the wrong disk.

mirror-* are my vdevs. As you can see, I have two vdevs, with two disks each. “Mirror” is essentially a RAID 1 setup.

data is my zpool. When data is written to the pool, ZFS stripes it dynamically across the available vdevs - roughly what RAID 0 does across disks, which is why mirrors-of-stripes like this end up resembling RAID 10.

The important bit is that there is no redundancy at the zpool level. If one of your vdevs fails, you are fucked. I know I said that before, but it’s really important! This setup is quite useful, as it’s basically RAID 10, meaning any single disk can fail, be replaced, and be replicated (called “resilvering” in ZFS lingo) rather quickly.

But mirroring isn’t the only way you can set up your vdevs. There is also plain striping (i.e. your physical disk becomes your vdev), raidz1, raidz2, raidz3 and probably some more. This beautiful blog post on ZFS Performance gives a recommendation on which mode to use for how many disks, and a mirror/mirror setup for 4 disks is not a bad idea. The downside of this redundancy is that you only get 50% storage efficiency. Oh well, can’t have everything.
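For comparison, the same four disks as a single raidz1 vdev would look roughly like this (a sketch, not what I ran) - you’d get 75% storage efficiency, but resilvering is slower and only one disk in total may fail:

sudo zpool create -o ashift=12 -m /media/data data \
               raidz1 \
                  /dev/disk/by-path/pci-0000:00:17.0-ata-1 \
                  /dev/disk/by-path/pci-0000:00:17.0-ata-2 \
                  /dev/disk/by-path/pci-0000:00:17.0-ata-3 \
                  /dev/disk/by-path/pci-0000:00:17.0-ata-4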

Get on with it!

Oh, right! The commands! So anyways, here is how I created my zpool:

sudo zpool create -o ashift=12 -m /media/data data \
               mirror \
                  /dev/disk/by-path/pci-0000:00:17.0-ata-1 \
                  /dev/disk/by-path/pci-0000:00:17.0-ata-2 \
               mirror \
                  /dev/disk/by-path/pci-0000:00:17.0-ata-3 \
                  /dev/disk/by-path/pci-0000:00:17.0-ata-4

This creates a zpool called “data” with the previously mentioned RAID 10-esque characteristics. The -o ashift=12 option enforces 4K sectors (2^12 = 4096 bytes), since I know that’s what my disks use. -m /media/data sets - can you guess? - the mount point to /media/data, meaning my zpool will be mounted there on every boot. Neat!
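You can double-check that both options took effect:

zpool get ashift data     # should report 12
zfs get mountpoint data   # should report /media/data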

Now, in theory, you could go to /media/data and just do as you please. But there is one more ace up our sleeve: datasets. Basically, a dataset is a fancy directory which you can snapshot and back up individually. I decided to create one dataset for my media and one for private data (think “random bullshit I download when I am bored”). Creation works as follows:

sudo zfs create data/Jellyfin
sudo zfs create data/Private

sudo chown -R jellyfin:jellyfin /media/data/Jellyfin
sudo chown -R mechmk1:mechmk1 /media/data/Private  

Amazing, I know! That’s really all it took. Now two directories exist in /media/data, each with appropriate owners.

Snapshots!

One amazing thing about ZFS is that you can create snapshots, which are in a way comparable to git commits: you can roll back to a previous snapshot quite easily. But there is one troubling thing: snapshots need to have unique names, and they don’t auto-increment. Luckily, there is an amazing tool called zfs-auto-snapshot that takes care of the naming for you:

sudo apt install zfs-auto-snapshot

This not only installs the zfs-auto-snapshot tool, but also a handy crontab at /etc/cron.d/zfs-auto-snapshot. I have set up mine as follows:

PATH="/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin"

*/15  * * * * root which zfs-auto-snapshot > /dev/null || exit 0 ; zfs-auto-snapshot --quiet --syslog --label=frequent --keep=4  //
  00  * * * * root which zfs-auto-snapshot > /dev/null || exit 0 ; zfs-auto-snapshot --quiet --syslog --label=hourly   --keep=24 //
  59 23 * * * root which zfs-auto-snapshot > /dev/null || exit 0 ; zfs-auto-snapshot --quiet --syslog --label=daily    --keep=7  //
  59 23 * * 0 root which zfs-auto-snapshot > /dev/null || exit 0 ; zfs-auto-snapshot --quiet --syslog --label=weekly   --keep=4  //
  00 00 1 * * root which zfs-auto-snapshot > /dev/null || exit 0 ; zfs-auto-snapshot --quiet --syslog --label=monthly  --keep=4  //

What this does: it creates one snapshot every 15 minutes and keeps four of those around; one snapshot every hour, keeping 24 of those; one every day - you get the idea, right?
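For completeness: you can also snapshot and roll back by hand, and check what the cron jobs have been producing. The automatic snapshots are named zfs-auto-snap_<label>-<timestamp>, so something like this works (the manual snapshot name here is made up):

# Manual snapshot with a made-up name, and rolling back to it
sudo zfs snapshot data/Private@before-i-break-something
sudo zfs rollback data/Private@before-i-break-something

# List all snapshots, the automatic ones included
zfs list -t snapshot -o name,creation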

So now, we have some redundancy and we have snapshots. What more could you need? Oh right! We actually want to do something with our data.

Transmission

Transmission is a wonderful torrent client and I love it dearly. Unfortunately, the version provided via apt is broken, while the version built from source puts its config files in a weird place and lacks systemd integration. So the easiest solution is to first install Transmission via apt, then compile it from source and install over it.

Oh yeah, before we continue, allow me to explain why Transmission 2.92 is broken: whenever you attempt to add a magnet link, it completely shits itself and everything stops working. Luckily, Transmission 3.0 removed that feature.

sudo apt install transmission-daemon

This will install transmission-daemon, as well as transmission-remote, which is used to control the daemon. Now, let’s download, compile and install Transmission 3.0:

sudo apt-get install build-essential automake autoconf libtool pkg-config intltool libcurl4-openssl-dev libglib2.0-dev libevent-dev libminiupnpc-dev
wget 'https://github.com/transmission/transmission-releases/raw/master/transmission-3.00.tar.xz'
tar -xvf transmission-3.00.tar.xz
cd transmission-3.00/
./configure --without-gtk # Even though the documentation says it's --disable-gtk, it's actually --without-gtk
make && sudo make install

Compilation and installation were fast and painless, so let’s have a look at the config file at /etc/transmission-daemon/settings.json. The following lines were changed:

"download-dir": "/media/data/Private/Torrents",
"lpd-enabled": true,
"port-forwarding-enabled": true,
"rpc-host-whitelist": "servername servername.local",
"rpc-whitelist": "127.0.0.1 192.168.1.*",
  • “download-dir” is pretty self-explanatory. It can be set per-torrent, but I prefer them all in one handy spot on my new ZFS pool.
  • “lpd-enabled” enables “Local Peer Discovery”. Essentially, every 4 minutes, the daemon broadcasts into the local network every torrent that is currently active. This way, I could easily transfer torrent data from my old server to my new one.
  • “port-forwarding-enabled” enables port forwarding. Shocking! I couldn’t be bothered with doing that on my router.
  • “rpc-host-whitelist” is a list of names your server is reachable under. Obviously, substitute servername with the actual hostname of your server.
  • “rpc-whitelist” is pretty self-explanatory. Only localhost and the internal network may connect to the daemon via RPC.

And that takes care of Transmission. Fun fact: you can simply hand Transmission a bare info hash and it’ll download the corresponding torrent.
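For example (the hash here is made up, and depending on your RPC settings you may additionally need -n user:password):

# Add by magnet link…
transmission-remote -a 'magnet:?xt=urn:btih:0123456789abcdef0123456789abcdef01234567'
# …or by raw info hash, which Transmission expands into a magnet link itself
transmission-remote -a '0123456789abcdef0123456789abcdef01234567'
# List all torrents to see it show up
transmission-remote -l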

WebDAV, oh God!

We have data, and we want to get it to the masses. Since I have a very heterogeneous landscape of devices, I want something more-or-less OS agnostic. WebDAV immediately came to mind, as it takes care of escaping weird characters at the HTTP level, and there are WebDAV clients for everything out there.

So since I’m already using nginx, I thought it would be as simple as creating a new site, adding some WebDAV-related stuff in the config and be done with it.

OR.

SO.

I.

THOUGHT.

The truth is, nginx’s WebDAV support is horribly broken. The default module doesn’t work with any WebDAV client out there. I ended up copying a Russian tutorial and got close to getting it to work, but in the end, Windows still wouldn’t play along. I just couldn’t get the WebDAV LOCK method to work, no matter how much I tried.

Even after modifying the Windows registry so it wouldn’t lock a file after creation, the WebDAV server would just randomly become unavailable, for no apparent reason! I looked at the logs, both on Windows and on the server, but saw no reason why it wouldn’t work. After compiling nginx from source and adding random modules in hopes of making it work, I decided that WebDAV was cursed and should never have existed.

It was probably easier to just rename the files that cause issues on Windows whenever I notice them.
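If you’d rather hunt those down proactively, a rough sketch like this finds names containing characters Windows refuses (it doesn’t catch reserved names like CON or trailing dots, mind you):

find /media/data -name '*[<>:"|?*]*'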

Samba

Finally, back to good ol’ Samba. Since I just want to access files locally, and I am the only one who accesses them - ouch - I don’t need to implement any authentication. Installing Samba on Ubuntu Server is pretty easy:

sudo apt install samba
sudo mkdir /var/smb
sudo chown mechmk1:mechmk1 /var/smb
ln -s /media/data /var/smb/Data

The configuration file is located at /etc/samba/smb.conf. Here are the contents:

[global]
workgroup = WORKGROUP
server string = Samba Server %v
netbios name = ubuntu
security = user
map to guest = bad user
dns proxy = no
allow insecure wide links = yes

#============================ Share Definitions ==============================

[User Share]
path = /var/smb
read only = no
# This enables following the symlink /var/smb/Data => /media/data
follow symlinks = yes
# Allows symlinks that point outside the share path (like ours does)
wide links = yes
# Locally, smbd acts as if every read and write came from this user
force user = mechmk1
browsable = yes
writable = yes
guest ok = yes
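Before declaring victory, you can sanity-check the config and the guest access from the server itself:

testparm -s                 # parses smb.conf and complains about any nonsense in it
smbclient -L localhost -N   # lists the shares as an anonymous guest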

And wouldn’t you know it, it just works! Now, onwards to Jellyfin!

Jellyfin

So, Jellyfin is a multimedia server with a lovely web UI and quite a handful of clients for different operating systems. While it’s not perfect, it’s good enough for me. The basic idea is that Jellyfin runs as a .NET Core server, and nginx acts as a reverse proxy in front of it. That way, I can use my existing certificates, managed via certbot, and Jellyfin can just focus on being Jellyfin.

Installation is pretty simple as well:

sudo apt install apt-transport-https
wget -O - https://repo.jellyfin.org/jellyfin_team.gpg.key | sudo apt-key add -
echo "deb [arch=$( dpkg --print-architecture )] https://repo.jellyfin.org/$( awk -F'=' '/^ID=/{ print $NF }' /etc/os-release ) $( awk -F'=' '/^VERSION_CODENAME=/{ print $NF }' /etc/os-release ) main" | sudo tee /etc/apt/sources.list.d/jellyfin.list
sudo apt update
sudo apt install jellyfin

That’s really all it takes. Jellyfin is now running on port 8096, and nginx can be configured to work as a reverse proxy for it:

server {
	listen 80;
	server_name jellyfin; # Pick whichever hostname works for you
	return 301 https://$host$request_uri; # Not strictly necessary due to HSTS but whatever
}

# HTTPS Server
#
# The meat of the package
server {
	# Enable HTTP2 and SSL/TLS
	listen 443 http2 ssl;

	# Only listen to the right server name
	server_name jellyfin;

	# Include Let's Encrypt certificates
	include snippets/letsencrypt.conf;

	# Include security-related headers such as HSTS, CSP, X-Frame-Options, etc.
	include snippets/security-headers.conf;

	# Include SSL settings, such as what ciphers to use and other security-related stuff
	include snippets/ssl-settings.conf;

	# Enable logging on a per-site basis
	access_log /var/log/nginx/jellyfin.access.log;
	error_log /var/log/nginx/jellyfin.error.log;

	############
	# Jellyfin #
	############

	# basically $jellyfin = 127.0.0.1
	set $jellyfin 127.0.0.1;

	# Redirects requests to / to /web/
	location = / {
		return 302 https://$host/web/;
	}

	location / {
		# Proxy main Jellyfin traffic
		proxy_pass http://$jellyfin:8096;
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_set_header X-Forwarded-Protocol $scheme;
		proxy_set_header X-Forwarded-Host $http_host;

		# Disable buffering when the nginx proxy gets very resource heavy upon streaming
		proxy_buffering off;
	}

	# location block for /web - This is purely for aesthetics so /web/#!/ works instead of having to go to /web/index.html/#!/
	location = /web/ {
		# Proxy main Jellyfin traffic
		proxy_pass http://$jellyfin:8096/web/index.html;
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_set_header X-Forwarded-Protocol $scheme;
		proxy_set_header X-Forwarded-Host $http_host;
	}

	location /socket {
		# Proxy Jellyfin Websockets traffic
		proxy_pass http://$jellyfin:8096;
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection "upgrade";
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto $scheme;
		proxy_set_header X-Forwarded-Protocol $scheme;
		proxy_set_header X-Forwarded-Host $http_host;
	}
}

That’s enough to get Jellyfin up and running!
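As always with nginx, test the config before reloading, and check that Jellyfin itself answers locally:

sudo nginx -t && sudo systemctl reload nginx
curl -I http://127.0.0.1:8096/   # should answer with a redirect towards /web/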

TeamSpeak and systemd

“TeamSpeak? What is a TeamSpeak?” - it’s basically millennial Discord, except that the server runs on my machine and I have control over it. Installation of TeamSpeak works in two steps:

  1. Installing and configuring TeamSpeak
  2. Setting up systemd

TeamSpeak Installation

Since TeamSpeak doesn’t provide a Debian package, we have to get the tarball directly from the vendor.

# Note that this downloads version 3.13.6, which was the latest release at the time
sudo wget -O /opt/teamspeak3-server.tar.bz2 'https://files.teamspeak-services.com/releases/server/3.13.6/teamspeak3-server_linux_amd64-3.13.6.tar.bz2'
# Create a new user called "teamspeak3", which can't login and only exists to run teamspeak3
sudo adduser --system --shell /usr/sbin/nologin --no-create-home --disabled-password --disabled-login teamspeak3
cd /opt
sudo tar -xvf teamspeak3-server.tar.bz2
sudo mv 'teamspeak3-server_linux_amd64' 'teamspeak3-server'
# This accepts the TeamSpeak license agreement
sudo touch 'teamspeak3-server/.ts3server_license_accepted'
sudo chown -R teamspeak3 teamspeak3-server

This is enough to install TeamSpeak. You can try it out by running the start script (shown below) and checking whether you can connect. If you already have a configuration that works for you, copy ts3server.sqlitedb* to /opt/teamspeak3-server and your configuration will be migrated as well.
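Concretely, a first test run looks like this. Note that the very first start prints the ServerQuery admin credentials and the admin privilege key to the console - save those somewhere:

cd /opt/teamspeak3-server
sudo -u teamspeak3 ./ts3server_startscript.sh start
# …connect with a TeamSpeak client to test, then stop it again:
sudo -u teamspeak3 ./ts3server_startscript.sh stop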

TeamSpeak, meet systemd

To get systemd to manage TeamSpeak, all we need to do is create a service file at /etc/systemd/system/teamspeak3.service. These are its contents:

[Unit]
Description=TeamSpeak3 Server
Wants=network-online.target
After=network-online.target

[Service]
WorkingDirectory=/opt/teamspeak3-server
User=teamspeak3
Type=forking
ExecStart=/opt/teamspeak3-server/ts3server_startscript.sh start
ExecStop=/opt/teamspeak3-server/ts3server_startscript.sh stop
ExecReload=/opt/teamspeak3-server/ts3server_startscript.sh reload
PIDFile=/opt/teamspeak3-server/ts3server.pid
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target

It’s pretty self-explanatory: once the network is online, systemd starts the server via the provided start script. Should the server die for some reason, it’ll be restarted after 5 seconds.
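All that’s left is telling systemd about the new unit and enabling it:

sudo systemctl daemon-reload
sudo systemctl enable --now teamspeak3
systemctl status teamspeak3   # should report "active (running)"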

Final Words

Well, it’s been quite a journey. I’m writing this down mostly so I have a reference should I encounter problems in the future - just so I know what I did :D