So, I’ve recently acquired a new server - coincidentally, the one this is running on - and I’ve run into a handful of…problems. I’m writing this both to get some peace of mind, and to hopefully help others who may be in a similar situation. I’ll go through my specs, the build process, my thoughts, and what I did to fix all the issues.
As with every good build, we’ll need some specs to begin with. I’ll be honest with you, this shit probably makes no sense to anyone who actually knows their stuff, but I don’t pride myself on knowing what I am doing. I pride myself on being able to act as if I knew what I was doing - that’s a vital difference. Anyways, onwards to the specs:
- CPU: Intel i3-10100F
- Memory: Corsair Vengeance LPX 2x8GB
- Motherboard: ASRock H510M-ITX/AC mITX
- Case: Fractal Design Node 304
- PSU: Corsair CX550M
- Cooler: Arctic Alpine 12 Passive
- Samsung 970 EVO Plus SSD 2TB
- 4x Western Digital WD Red 4TB (WD40EFAX)
So hold up…an i3 for a server? 16 GB of RAM? PASSIVE COOLING?! You may ask yourself what the fuck I am on, and I wish I could tell you that it was good drugs, but unfortunately, it was only boredom. Allow me to explain the rationale behind each choice. The i3 is a huge step up from my old CPU, which was, I kid you not, an Intel Celeron J3455. Now that the laughter has died down a bit, allow me to explain that this server is really just here to serve my music collection to myself, TeamSpeak 3 for my friends (or “friend”, rather. I’m not very popular) and some data storage for whatever I have to store. So an i3 really does what I need it to do. As for the passive cooling…it really does seem to be enough for a 65W TDP. It’s not like it’s a gaming rig or such, where you have to expect your CPU to run at max power for hours on end (I feel like this “no friends” thing starts to make more and more sense). Usually, my server is pretty idle, so the low power draw is fine. Nothing in the server ever got hotter than 70°C, and that was the SSD.
The RAM also seems to be fine. I’ve never had more than 80% allocation, let alone usage. So I might upgrade to 2x16 GB at some point, but not now. I really see no reason to.
The Case - Node 304
So, I generally like the Node 304, but it’s the little things that bother me. For example, the manual is a fucking joke. They spend more time explaining how much they love minimalist design, like other Scandinavian companies, than explaining how the fuck they want me to wire up the case fans. In case you’re wondering: attach a 4-pin Molex connector from your PSU to the Molex connector of the case’s fan controller, then connect the three case fans to the fan controller’s pins.
Aside from that, the case generally is nice. It has a 140mm fan in the back and two 92mm fans in the front. All of them can be set to three different speeds with a physical switch in the back. That’s pretty neat!
The case also comes with three braces to mount HDDs on, but realistically, you will have to remove one of them if you plan on installing a GPU, or if, like me, your cables are not wireless and you actually need space to wire everything. But four HDD slots are plenty, and my motherboard doesn’t have more than four SATA connectors anyways, so that’s fine.
All in all, it looks aesthetically pleasing and keeps the insides of my hardware inside. What more could I ask for?
This isn’t a “how to build a PC” post, although I could make one of those if I felt like it. Generally, the build was fairly unimpressive. It begins by disassembling the case with the three (or four?) thumb screws on the back. The black top slides right off and reveals the inside of the case. The PSU is actually mounted in the center of the case, and an extension cable is run inside the case from the edge. This is pretty clever and saves some space.
After the PSU is installed, I took the motherboard out of its packaging and installed the CPU, Passive Cooler and the SSD. I don’t know why, but I fucking love M.2 SSDs! This would come to bite me later.
Then, the HDDs were installed. They’re mounted sideways, so that the “top” of each HDD faces each other. I decided to label them 0, 1, 2 and 3 respectively, and connect them to SATA_0, SATA_1, etc. respectively. That way, if one HDD fails, I can tell which one by number. This truly is big-brain time.
Then it was time to insert the motherboard into the case, which was pretty straightforward. Oh yes, lest I forget: none of the screws were labelled. It was pretty much up to me to figure out which one belongs where. Good job! Anyways, once it was inserted, I attached the RAM. To be fair, the passive cooler made that pretty fucking difficult, but pain only makes you stronger. The CPU and motherboard power connectors were much easier to insert than I originally thought. Finally came the USB case headers and system pins (which you should not insert into the COM1 pins, which look identical - ask me how I know), and we were ready to power on.
And as you may expect: Power on, no image. None.
Time to Panic
The worst thing to happen when you build a PC, and it has happened to me every single time I built a PC. At least the fans and HDDs were spinning and the system power LED was glowing. So at least some signs of life.
So I did what any sane person would do and immediately got into a phone call with a colleague of mine to get some technical and emotional support. We went through all the normal steps you’d go through when checking a PC in this state:
- Are all cables inserted correctly?
- Is the RAM working?
- Does a different HDMI cable help?
So yeah, nothing worked. Luckily, Journeyman Geek worked out that Intel’s F-series CPUs don’t have any video output at all. Not just “no Intel HD graphics” or such - literally nothing. No BIOS, no terminal, nothing. You might as well put a DisplayPort cable up your ass and wonder why you can’t do a DIY colonoscopy on your TV.
But my server wouldn’t need any graphics output once it worked. All I needed was an SSH server, so I had to get Ubuntu Server running, set up SSH, and then I could do the rest remotely. I decided the best course of action would be to insert my M.2 SSD into my gaming PC, boot from USB, install Ubuntu Server on the SSD, then switch it back. It worked fine, but when I booted the server…no network connection? Why not?!
No Network for my Intel i219-V
I didn’t get any network connection going, so I decided that I had to see it on the system, so plan B was executed. Remove the GTX 1080 from my gaming PC and insert it into my server. And by “insert” I mean “push every fucking cable aside to somehow get space”. It worked, and I finally had some graphical output.
So I checked the network devices with ip a, and lo and behold: my ethernet device wasn’t even listed in the output.
lshw -C network showed that it was there, but “unclaimed”. I tried many, many things, but ultimately it turned out that the Linux kernel 5.4 shipped with Ubuntu Server 20.04 isn’t configured for this device. This can be fixed by installing the HWE kernel with the following command:
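On Ubuntu 20.04, the Hardware Enablement (HWE) kernel is one package away (other releases use a matching package name):

```shell
# Install the HWE kernel, which is newer than the stock 5.4
# and knows about the i219-V NIC
sudo apt update
sudo apt install linux-generic-hwe-20.04
sudo reboot
```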
But wait, how can you do that with no network? Luckily, I could use the wlp2s0 Wi-Fi device to connect to my AP and get some network going that way. If you ask yourself why a server distro would contain drivers for a Wi-Fi card, but not an ethernet card…your guess is as good as mine.
So now, let me tell you about the backup strategy I had with my old server. Basically, I had two HDDs with 6 TB storage, and whenever I felt like it, I would connect the backup HDD to my server, remount my primary disk as read-only (sudo mount -o remount,ro /media/Primary, in case you wondered), create a SquashFS image of my disk, then write it to the backup disk. This process took about a night, but the neat thing is that even if it took longer, I could still listen to my music while the server was backing up.
But as you can imagine, “whenever I feel like it” is not really a good backup concept, so I wanted something better. Journeyman Geek, the guy who helped me identify that my CPU really didn’t like visual output, suggested I use ZFS. At first, this seemed overwhelming, as all the guides and explanations included like 20 disks - way overkill for what I had. But after a while, I managed to get it.
ZFS Basic Structure
ZFS is actually remarkably simple, once you understand the basics. So basically, ZFS defines one top-level structure, which is the “zpool”. A zpool is simply a collection of devices, not unlike a JBOD RAID. You can mount your zpool to a mount point, write to it, and the zpool writes it to one of the disks. In theory, if you desired, you could create a zpool for one physical disk and just use the neat administration features ZFS offers.
But you’re not a sucker! You want redundancy, performance and all the other amazing buzzwords that sysadmins get erections over. So a zpool by itself doesn’t offer any redundancy. Seriously, none. If one disk dies, then it’s dead and you’re fucked. How exactly, I don’t know. In the best case, some data is gone. In the worst case, all data is gone. So let’s add some redundancy, right?
This is where vdevs - virtual devices - come into play. It took me a while to understand what a vdev is, and the fact that the Arch Wiki uses the term completely wrong doesn’t help. Basically, a vdev is one virtual device, comprising one or more data sources. These data sources are usually physical disks, but you can also use image files, which the Arch Wiki mistakenly labelled “vdev”.
If none of this shit makes any sense, here is the output of zpool status, and all will become clear:
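The original screenshot is gone, so here is an illustrative zpool status for this exact layout (the by-path device names are examples; yours will differ):

```
  pool: data
 state: ONLINE
config:

        NAME                          STATE     READ WRITE CKSUM
        data                          ONLINE       0     0     0
          mirror-0                    ONLINE       0     0     0
            pci-0000:00:17.0-ata-1    ONLINE       0     0     0
            pci-0000:00:17.0-ata-2    ONLINE       0     0     0
          mirror-1                    ONLINE       0     0     0
            pci-0000:00:17.0-ata-3    ONLINE       0     0     0
            pci-0000:00:17.0-ata-4    ONLINE       0     0     0

errors: No known data errors
```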
All of this seems pretty complicated, so let me explain from the bottom up:
- pci-0000:00:17.0-ata-* are the physical disks. It’s recommended not to use labels like /dev/sdX, as these are not predictable: what was /dev/sda on this boot could be /dev/sdc on the next. While that may seem unlikely, using physical paths makes the entire system more predictable. I also labelled all the cables and drive bays with 1-4, just to make sure I don’t remove the wrong disks.
- mirror-* are my vdevs. As you can see, I have two vdevs, with two disks each. A “mirror” is essentially a RAID 1 setup.
- data is my zpool. When data is written to the zpool, ZFS stripes it dynamically across the available vdevs, much like RAID 0 stripes across disks.
The important bit is that there is no redundancy on the zpool level. If one of your vdevs fails, then you are fucked. I know I said that before, but it’s really important! So this setup is quite useful, as it’s basically a RAID 10 setup, meaning that any one disk can fail and be replaced, with its data replicated (called “resilvering” in ZFS lingo) rather quickly.
But mirroring isn’t the only way you can set up your vdevs. There is also plain striping (i.e. your physical disk becomes your vdev), raidz1, raidz2, raidz3 and probably some more. This beautiful blog post on ZFS performance gives a recommendation on which mode to use for a given number of disks. So a mirror/mirror setup for 4 disks is not a bad idea. The downside of this redundancy is that you only get 50% storage efficiency. Oh well, can’t have everything.
Get on with it!
Oh, right! The commands! So anyways, here is how I created my zpool:
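A sketch of the command, with the device names shortened to placeholders (substitute your own from ls -l /dev/disk/by-path/):

```shell
# Create a pool named "data" out of two mirror vdevs of two disks each
# (the RAID 10-ish layout). DISK1..DISK4 are placeholders.
sudo zpool create -o ashift=12 -m /media/data data \
    mirror /dev/disk/by-path/DISK1 /dev/disk/by-path/DISK2 \
    mirror /dev/disk/by-path/DISK3 /dev/disk/by-path/DISK4
```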
This generates a zpool called “data”, with the previously mentioned RAID 10-esque characteristics. The
-o ashift=12 modifier is used to enforce 4k sector size, since I know that’s what my disks use.
-m /media/data sets - can you guess? - the mount point to /media/data, meaning that my zpool will be mounted there on every boot. Neat!
Now, in theory, you can go to /media/data and just do as you please. But there is one more ace up our sleeves: datasets. Basically, a dataset is a fancy directory which you can back up individually. I decided to create one dataset for my media and one for private data (think “random bullshit I download when I am bored”). Creation works as follows:
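A sketch of the dataset creation - the dataset names and the owner are my assumptions, adapt as needed:

```shell
# One dataset per purpose; each appears as a directory under /media/data
sudo zfs create data/Media
sudo zfs create data/Private

# Hand the mount points over to my own user
sudo chown "$USER": /media/data/Media /media/data/Private
```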
Amazing, I know! That’s really all it took. Now two directories exist in /media/data, each with appropriate owners.
One amazing thing about ZFS is that you can create snapshots, which are in a way comparable to git commits. You can roll back to a previous snapshot quite easily. But there is one troubling thing: snapshots need to have unique names, and they don’t auto-increment. Luckily, there is an amazing tool called zfs-auto-snapshot:
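The tool is zfs-auto-snapshot, and on Ubuntu it’s one package away:

```shell
sudo apt install zfs-auto-snapshot
```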
This not only installs the zfs-auto-snapshot tool, but also a handy crontab in /etc/cron.d/zfs-auto-snapshot. I have set up mine as follows:
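A sketch of what such a crontab can look like - the flags are the ones the zfs-auto-snapshot package uses, the 15-minute and hourly keep counts match the description below, and the daily keep count is an assumption:

```
PATH="/usr/bin:/bin:/usr/sbin:/sbin"

# every 15 minutes, keep the last 4
*/15 * * * * root zfs-auto-snapshot --quiet --syslog --label=frequent --keep=4 //
# every hour, keep the last 24
0 * * * *    root zfs-auto-snapshot --quiet --syslog --label=hourly --keep=24 //
# every day, keep the last 31 (assumed count)
0 0 * * *    root zfs-auto-snapshot --quiet --syslog --label=daily --keep=31 //
```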
What this does is it creates one snapshot every 15 minutes, and keeps four of those around. One snapshot every hour, keeping 24 of those. One every day – you get the idea, right?
So now, we have some redundancy and we have snapshots. What more could you need? Oh right! We actually want to do something with our data.
Transmission is a wonderful torrent client and I love it dearly. Unfortunately, the version provided via apt is broken, while the version built from source creates its config files in a weird place and lacks systemd integration. So the easiest solution is to first install Transmission via apt, then compile it from source and install over it.
Oh yeah, before we continue, allow me to explain why Transmission 2.92 is broken. Basically, whenever you attempt to add a magnet link, it completely shits itself and everything stops working. Luckily, Transmission 3.0 removed that feature.
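Step one, the apt version - broken, but it brings the config layout and systemd integration we want:

```shell
sudo apt install transmission-daemon
```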
This will install transmission-daemon, as well as transmission-remote, which is used to control the daemon. Now, let’s download, compile and install Transmission 3.0:
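A sketch of the build, assuming the 3.00 release tarball from Transmission’s GitHub releases and a cmake toolchain (the dependency list is my assumption; adjust to what configure complains about):

```shell
# Build dependencies (assumed list)
sudo apt install build-essential cmake libcurl4-openssl-dev libssl-dev libevent-dev

# Fetch, build and install Transmission 3.00 over the apt version
wget https://github.com/transmission/transmission/releases/download/3.00/transmission-3.00.tar.xz
tar xf transmission-3.00.tar.xz
cd transmission-3.00
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/usr ..
make -j"$(nproc)"
sudo make install
```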
Compilation and installation were fast and painless, so let’s have a look at the config file at /etc/transmission-daemon/settings.json. The following lines were changed:
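The changed lines, roughly - the paths and network range are examples, and remember to stop the daemon before editing, since it rewrites the file on exit:

```json
{
    "download-dir": "/media/data/Media/Torrents",
    "lpd-enabled": true,
    "port-forwarding-enabled": true,
    "rpc-host-whitelist": "servername",
    "rpc-whitelist": "127.0.0.1,192.168.*.*"
}
```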
- “download-dir” is pretty self-explanatory. It can be set per-torrent, but I prefer them all in one handy spot on my new ZFS pool.
- “lpd-enabled” enables “Local Peer Discovery”. Essentially, every 4 minutes, the daemon broadcasts into the local network every torrent that is currently active. This way, I could easily transfer torrent data from my old server to my new one.
- “port-forwarding-enabled” makes the daemon set up port forwarding on the router itself (via UPnP/NAT-PMP). Shocking! I couldn’t be bothered with doing that on my router manually.
- “rpc-host-whitelist” is a list of names your server is reachable under. Obviously, substitute servername with the actual hostname of your server.
- “rpc-whitelist” is pretty self-explanatory. Only localhost and the internal network may connect to the daemon via RPC.
And that takes care of Transmission. Fun fact: you can simply add a bare info hash to Transmission and it’ll download the corresponding torrent.
WebDAV, oh God!
We have data, and we want to get it to the masses. Since I have a very heterogeneous landscape of devices, I want something more-or-less OS-agnostic. WebDAV immediately came to mind, as it takes care of all the escaping of weird characters on the HTTP level, and there are WebDAV clients for everything out there.
So since I’m already using nginx, I thought it would be as simple as creating a new site, adding some WebDAV-related stuff in the config and be done with it.
The truth is, nginx’s WebDAV support is horribly broken. The default module doesn’t work with any WebDAV client out there. I ended up copying a Russian tutorial and got close to getting it to work, but in the end, Windows still wouldn’t cooperate. I just couldn’t get the LOCK method to work, no matter how much I tried.
Even after modifying the registry to not lock a file after creation, the WebDAV server would just randomly become unavailable, for no apparent reason! I looked at the logs, both on Windows and the server, but saw no reason why it wouldn’t work. After compiling nginx from source and adding random modules in hopes of making it work, I decided that WebDAV was cursed and should never have existed.
It was probably easier to just rename the files that cause issues on Windows whenever I notice them.
Finally, back to good ol’ Samba. Since I just want to access files locally, and I am the only one who accesses them - ouch - I don’t need to implement any authentication either. Installing Samba on Ubuntu Server is pretty easy:
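One package, done:

```shell
sudo apt install samba
```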
The configuration file is located at /etc/samba/smb.conf. Here are the contents:
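A minimal sketch of such a config - guest-only access, no authentication; the share name, path, interface and user are my assumptions:

```
[global]
   server role = standalone server
   # map unknown users to the guest account instead of rejecting them
   map to guest = Bad User
   # only listen on the LAN (interface name is an example)
   bind interfaces only = yes
   interfaces = lo eno1

[data]
   path = /media/data
   browseable = yes
   read only = no
   guest ok = yes
   # run all file operations as my own user (example name)
   force user = myuser
```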
And wouldn’t you know it, it just works! Now, onwards to Jellyfin!
So, Jellyfin is a multimedia server with a lovely web UI and quite a handful of clients for different operating systems. While it’s not perfect, it’s good enough for me. The basic idea is that Jellyfin runs on a .NET Core server, and nginx just acts as a reverse proxy in front of it. That way, I can use my existing certificates, managed via certbot, and have Jellyfin just focus on being Jellyfin.
Installation is pretty simple as well:
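A sketch of the installation via Jellyfin’s own apt repository, as it looked for Ubuntu 20.04 “focal” - check their docs for the current repo layout:

```shell
sudo apt install apt-transport-https gnupg
# Add Jellyfin's signing key and repository
wget -O - https://repo.jellyfin.org/ubuntu/jellyfin_team.gpg.key | sudo apt-key add -
echo "deb [arch=amd64] https://repo.jellyfin.org/ubuntu focal main" \
    | sudo tee /etc/apt/sources.list.d/jellyfin.list
sudo apt update
sudo apt install jellyfin
```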
That’s really all it takes. Jellyfin is now running on port 8096, and nginx can be configured to work as a reverse proxy for it:
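A sketch of the relevant nginx server block - the server name and certificate paths are placeholders, and the TLS bits are whatever certbot already manages:

```nginx
server {
    listen 443 ssl;
    server_name jellyfin.example.com;  # placeholder

    # certbot-managed certificates (example paths)
    ssl_certificate     /etc/letsencrypt/live/jellyfin.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jellyfin.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Jellyfin's web UI uses websockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```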
That’s enough to get Jellyfin up and running!
TeamSpeak and systemd
“TeamSpeak? What is a TeamSpeak?” - it’s basically millennial Discord, except that the server runs on my machine and I have control over it. Installation of TeamSpeak works in two steps:
- Installing and configuring TeamSpeak
- Setting up systemd
Since TeamSpeak doesn’t have a nice Debian package, we have to get the file directly from the vendor.
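A sketch of the download and unpack, assuming a 3.13.x Linux amd64 release (grab the current version number from the TeamSpeak downloads page):

```shell
# Download and unpack the server into /opt (version number is an example)
wget https://files.teamspeak-services.com/releases/server/3.13.7/teamspeak3-server_linux_amd64-3.13.7.tar.bz2
sudo tar xjf teamspeak3-server_linux_amd64-3.13.7.tar.bz2 -C /opt
sudo mv /opt/teamspeak3-server_linux_amd64 /opt/teamspeak3-server

# TeamSpeak refuses to start until the license is accepted
sudo touch /opt/teamspeak3-server/.ts3server_license_accepted
```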
This is enough to install TeamSpeak. You can try it out by running ./ts3server and checking if you can connect to it. If you already have a configuration that works for you, make sure to copy it into /opt/teamspeak3-server so that your configuration is migrated as well.
TeamSpeak, meet systemd
To get systemd to work with TeamSpeak, all we need to do is create a service file at /etc/systemd/system/teamspeak3.service. These are its contents:
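A sketch of the unit, assuming the server lives in /opt/teamspeak3-server and runs as a dedicated teamspeak user (the minimal runscript ships with the server tarball):

```
[Unit]
Description=TeamSpeak 3 server
After=network.target

[Service]
Type=simple
User=teamspeak
WorkingDirectory=/opt/teamspeak3-server
ExecStart=/opt/teamspeak3-server/ts3server_minimal_runscript.sh
# restart 5 seconds after a crash
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```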
It’s pretty self-explanatory. Once online, the server will start itself using the provided runscript. Should the server die for some reason, it’ll restart itself after 5 seconds.
Well, it’s been quite a journey. I’m writing this down more or less so I have some reference, should I encounter problems in the future, just so I know what I did :D