Posts with the tag sysadmin:
Tailscale is a service based on WireGuard that lets one's devices form a peer-to-peer private network in an easy and seamless manner.
I have been using it for over a year now, so I can give a quick review of how I use the service on a day-to-day basis.
Setup
Although it is possible to set up WireGuard manually to connect devices, it gets harder when peers are behind NAT.
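To give an idea of what the manual setup looks like, here is a minimal hand-written wg-quick configuration; the keys, addresses, and endpoint are placeholders, not taken from the post. The PersistentKeepalive line is the usual workaround for a peer behind NAT, and it only helps when the other peer has a publicly reachable endpoint, which is exactly the coordination Tailscale automates.

```ini
# /etc/wireguard/wg0.conf: hypothetical client configuration
[Interface]
# Placeholder private key; generate a real one with `wg genkey`
PrivateKey = <client-private-key>
Address = 10.0.0.2/24

[Peer]
# Placeholder public key of the server peer
PublicKey = <server-public-key>
# The server must be reachable at a stable address; a peer
# behind NAT cannot offer one, which is the hard part
Endpoint = vpn.example.com:51820
AllowedIPs = 10.0.0.0/24
# Send a keepalive every 25 s so the NAT mapping stays open
PersistentKeepalive = 25
```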
This happened a while ago. After applying some updates to my LXD containers and restarting everything, I was surprised to discover that all my websites depending on MariaDB were unable to connect to the server.
I talked a while ago about the monitoring stack I was using: TIG. I have been using something else for the past year, so I wanted to talk about my experience with it.
This post won't be a tutorial, for two reasons. First of all, it's way too complex to be contained in a single post, and secondly I have set up the whole stack using Ansible roles I made myself.
When I moved to restic as the tool to back up my servers, one of the reasons I chose it was that it supports S3 backends. I was using Wasabi at the time.
However, Wasabi proved to be unreliable, and their billing model was not suited to this usage. Wasabi bills each object for at least 3 months, and restic rewrites chunks all the time, so you end up paying much more than what you actually store.
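For a rough sense of scale, here is a back-of-the-envelope sketch with made-up numbers: the price is Wasabi's advertised rate, but the repository size and churn rate are my own illustrative assumptions.

```python
# Hypothetical numbers to illustrate Wasabi's 90-day minimum billing
# applied to a churning restic repository.
PRICE_PER_TB_MONTH = 5.99   # Wasabi's advertised price
REPO_SIZE_TB = 1.0          # assumed live data in the repository
MONTHLY_CHURN = 0.30        # assumed fraction of chunks rewritten by prune

# Every rewritten chunk is deleted but still billed for ~3 months in
# total, so in steady state you pay for the live data plus roughly two
# extra months' worth of already-deleted chunks.
deleted_still_billed_tb = REPO_SIZE_TB * MONTHLY_CHURN * 2
effective_tb = REPO_SIZE_TB + deleted_still_billed_tb

print(f"Billed as if storing {effective_tb:.2f} TB "
      f"-> ${effective_tb * PRICE_PER_TB_MONTH:.2f}/month "
      f"instead of ${REPO_SIZE_TB * PRICE_PER_TB_MONTH:.2f}/month")
```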
Disclaimer: I wrote this before being employed at Scaleway.
Wasabi has a very attractive pricing model at $0.0059 per GB per month ($5.99 per TB per month) and no egress fees, which makes it by far the cheapest object storage provider on the market. The performance is pretty good too. But there is more to it.
Two reasons you should not use Wasabi
One little detail in the pricing model is that you're billed for at least 90 days for every object you upload.
At my previous job we had been using Capistrano for years, and it's not particularly great. It does work well, but it really feels like messy Ruby scripting with messy documentation.
We used Capistrano for a specific kind of project: usually PHP or Node apps deployed on multiple non-autoscaled webservers.
Recently I had to build two Laravel projects for school. One of the requirements was to deploy them on a real webserver with a valid domain name, HTTPS, etc.
I've been using Ghost with SQLite for a year and a half and I haven't had any issues related to SQLite at all. I would even say it is a very good choice for Ghost; I realize now that I already made a post about it.
I want to switch back to MySQL because I feel more confident using it, especially for backups.
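For reference, pointing Ghost at MySQL instead of SQLite comes down to the database block of config.production.json; the credentials below are placeholders, not my actual values.

```json
{
  "database": {
    "client": "mysql",
    "connection": {
      "host": "127.0.0.1",
      "user": "ghost",
      "password": "changeme",
      "database": "ghost_prod"
    }
  }
}
```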
My current setup is an LXD host server with a bunch of containers managed by Ansible.
Ever since I started this blog I have been using Nginx as a reverse proxy for Ghost.
Ghost is in a kind of weird place between static site generators like Hugo or Jekyll and fully-fledged CMSes like WordPress. Ghost is a Node.js program based on Express that binds to a port and listens for HTTP requests, so it can't be deployed like a static website made of only static files.
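As an illustration, a minimal Nginx server block of the kind I mean would look like this; the domain is a placeholder, and 2368 is Ghost's default port.

```nginx
server {
    listen 80;
    server_name blog.example.com;

    location / {
        # Forward the original host and client address to Ghost
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Ghost binds to this port on localhost by default
        proxy_pass http://127.0.0.1:2368;
    }
}
```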
IPv6 works out of the box on Hetzner Cloud VMs. However, once you start adding IPv6 to an existing or new interface, IPv6 connectivity completely falls apart.
I started to notice this a year and a half ago, when I was trying to set up LXD. A fellow sysadmin and I tried to hunt down the issue, without success, but found a dirty workaround.
Then it was reported again on my openvpn-install script, and more recently on my wireguard-install script, so I decided to dive into this issue once more.
5 months ago I wrote about the corruption issues I had on my main Hetzner Cloud VM, which runs a lot of stuff, including a high-traffic Mastodon instance.
For a bit of context, this VM runs Ubuntu 18.04 and has a single disk. This disk is split between a tiny ext4 / partition and another one dedicated to a ZFS pool. This zpool is used as a storage pool for LXD and thus for all my LXC containers running on it.
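To make that layout concrete, recreating it would look roughly like this; the device names and pool name are hypothetical, not my actual ones.

```sh
# /dev/sda1 holds the small ext4 root; the rest of the disk
# goes to a ZFS pool (hypothetical device and pool names)
zpool create tank /dev/sda2

# Tell LXD to use the existing zpool as its storage pool
lxc storage create default zfs source=tank
```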